RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. #54
Answered by mrdbourke
creatorcao asked this question in Q&A
Hi all!
Answered by mrdbourke on May 10, 2022
Hi @creatorcao,

Thank you for the question. It seems an update to PyTorch caused this. In the cell above the one you're trying to run, you can change the code to:

```python
# Print out what's happening
if epoch % 10 == 0:
    epoch_count.append(epoch)
    train_loss_values.append(loss.detach().numpy()) # New
    test_loss_values.append(test_loss.detach().numpy()) # New
    print(f"Epoch: {epoch} | MAE Train Loss: {loss} | MAE Test Loss: {test_loss}")
```

Notice the use of `loss.detach().numpy()` to remove the need for gradient tracking when appending the loss values. Without this, the code will error. I've updated the notebook to reflect this.
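For context, here is a minimal standalone sketch (not from the original notebook) showing why the error occurs and how `detach()` fixes it. It builds a toy scalar loss that is part of the autograd graph, demonstrates that calling `.numpy()` on it raises `RuntimeError`, and then converts it safely:

```python
import torch

# A toy loss that requires grad (part of the autograd graph)
loss = (torch.tensor([3.0], requires_grad=True) ** 2).mean()

try:
    loss.numpy()  # fails: the tensor still tracks gradients
except RuntimeError as e:
    print(f"Raised: {type(e).__name__}")

# detach() returns a view of the tensor outside the autograd graph,
# which can be converted to a NumPy array
value = loss.detach().numpy()
print(float(value))  # 9.0
```

`detach()` does not copy the data; it returns a tensor that shares storage with the original but no longer requires grad, which is exactly what `.numpy()` needs.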
Answer selected by creatorcao