01_PyTorch Workflow (RuntimeError: Expected all tensors to be on the same device) #929
-
In the last portion of section 01 (PyTorch Workflow Fundamentals), I get a RuntimeError in the training loop: "Expected all tensors to be on the same device".
I have already moved both model_1 and X_train to CUDA before starting the training loop.
Printing X_train.device and next(model_1.parameters()).device both gives device(type='cuda', index=0).
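For context, a minimal sketch of the device setup described above (the X_train and model_1 names mirror the question; the data and model definition here are placeholders, not the course notebook's actual code):

```python
import torch
from torch import nn

# Device-agnostic setup: use CUDA if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder data and model standing in for the notebook's versions
X_train = torch.arange(0, 10, dtype=torch.float32).unsqueeze(dim=1)
model_1 = nn.Linear(in_features=1, out_features=1)

# Move both to the same device before the training loop
X_train = X_train.to(device)
model_1.to(device)

# Both print the same device, yet the training loop still raised the error
print(X_train.device)
print(next(model_1.parameters()).device)
```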
Answered by allen-ajith on May 19, 2024
Solved! It was a typo in the forward() method when creating the model:

```python
def forward(self, x: torch.Tensor) -> torch.Tensor:
    return self.linear_layer(X)
```

Because of Python's scoping rules, self.linear_layer(X) resolves to a global tensor X (presumably the original, un-moved tensor still on the CPU) rather than the method's parameter x, which is what triggers the device-mismatch error even though X_train itself is on CUDA. The correction is:

```python
def forward(self, x: torch.Tensor) -> torch.Tensor:
    return self.linear_layer(x)
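```

Put in context, a minimal model sketch showing the corrected method (the class name and the surrounding usage are assumptions following the style of the course's 01 notebook, not the thread's actual code):

```python
import torch
from torch import nn

class LinearRegressionModelV2(nn.Module):  # class name assumed, mirroring the course style
    def __init__(self):
        super().__init__()
        self.linear_layer = nn.Linear(in_features=1, out_features=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Use the parameter `x`, not a global `X`: a global tensor left on
        # the CPU would be fed into CUDA weights, raising
        # "Expected all tensors to be on the same device".
        return self.linear_layer(x)

# Usage: with model and input on the same device, the forward pass runs cleanly
device = "cuda" if torch.cuda.is_available() else "cpu"
model_1 = LinearRegressionModelV2().to(device)
X_train = torch.rand(10, 1).to(device)
print(model_1(X_train).shape)  # torch.Size([10, 1])
```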