PyTorch for Deep Learning Bootcamp - Device issue #827
Unanswered
Packetouille asked this question in Q&A
Replies: 1 comment
-
To me the code looks OK, but you can sanity-check it with print statements as well:

```python
# TRAIN METHOD ########################################
def train_step(model: torch.nn.Module,
               data_loader: torch.utils.data.DataLoader,
               loss_fn: torch.nn.Module,
               optimizer: torch.optim.Optimizer,
               accuracy_fn,
               device: torch.device = device):
    train_loss, train_acc = 0, 0
    model.to(device)
    # Confirm the parameters actually moved; this should print e.g. "cuda:0"
    print(next(model.parameters()).device)
```

Do the same for the data.
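For example, inside the batch loop you could print the device of the parameters and of the current batch (a quick sketch; `X`/`y` are whatever your DataLoader yields):

```python
# Quick device check inside the batch loop: everything the loss touches
# should report the same device (e.g. "cuda:0")
for batch, (X, y) in enumerate(data_loader):
    X, y = X.to(device), y.to(device)
    print(f"params: {next(model.parameters()).device} | X: {X.device} | y: {y.device}")
    break  # one batch is enough for the check
```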
-
I have run into a device error that, for the life of me, I cannot understand. I am following along with the course but have hit the following device issue. The data starts on the CPU, but we have device-agnostic code for the data (and for the model, though we send that to the device during instantiation) within the TRAIN & TEST METHODS.
I verified with print statements that both the model and the data are on the GPU during the training step, yet I am still receiving the error "Expected all tensors to be on the same device...". If I'm reading the traceback correctly, the issue seems to happen during the loss function call (using nn.CrossEntropyLoss()). But within the training method, we call loss.backward() only after the data has been sent to the GPU. The code below is code I copied straight from the course to check whether I had typed something wrong, and I still get the same error. Hoping that something will pop out to more experienced eyes.
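For reference, the training-step pattern I'm describing looks roughly like this (a sketch with hypothetical names, not the exact notebook code I copied):

```python
import torch
from torch import nn

# Device-agnostic setup (assumed; matches what I describe above)
device = "cuda" if torch.cuda.is_available() else "cpu"

def train_step(model: nn.Module,
               data_loader: torch.utils.data.DataLoader,
               loss_fn: nn.Module,
               optimizer: torch.optim.Optimizer,
               accuracy_fn,
               device: torch.device = device):
    train_loss, train_acc = 0, 0
    model.to(device)
    model.train()
    for batch, (X, y) in enumerate(data_loader):
        # tensor.to(device) is NOT in-place: the result must be assigned back,
        # otherwise X and y silently stay on the CPU
        X, y = X.to(device), y.to(device)
        y_pred = model(X)            # forward pass on the same device as the data
        loss = loss_fn(y_pred, y)    # CrossEntropyLoss needs y_pred and y on one device
        train_loss += loss.item()
        train_acc += accuracy_fn(y_true=y, y_pred=y_pred.argmax(dim=1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    train_loss /= len(data_loader)
    train_acc /= len(data_loader)
    print(f"Train loss: {train_loss:.5f} | Train acc: {train_acc:.2f}%")
```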
Error that I'm receiving: #################################################
