
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. #54

Answered by mrdbourke
creatorcao asked this question in Q&A

Hi @creatorcao,

Thank you for the question. It seems an update to PyTorch caused this.

In the cell above the one you're trying to run, you can change the code to:

# Print out what's happening
if epoch % 10 == 0:
    epoch_count.append(epoch)
    train_loss_values.append(loss.detach().numpy()) # New
    test_loss_values.append(test_loss.detach().numpy()) # New
    print(f"Epoch: {epoch} | MAE Train Loss: {loss} | MAE Test Loss: {test_loss} ")

Notice the use of loss.detach().numpy(), which detaches the tensor from the computation graph (removing gradient tracking) before converting it to a NumPy array when appending the loss values.

Without this, the code will raise the RuntimeError in the title.
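
For reference, here is a minimal standalone sketch (not from the notebook, using made-up random tensors in place of the model's predictions and targets) of why the error appears and how .detach() resolves it:

import torch

# A scalar loss tensor that tracks gradients, like the MAE loss in the
# training loop above.
loss = torch.nn.functional.l1_loss(
    torch.rand(5, requires_grad=True),  # stand-in for model predictions
    torch.rand(5),                      # stand-in for targets
)

# Calling loss.numpy() here raises:
# RuntimeError: Can't call numpy() on Tensor that requires grad.
# Use tensor.detach().numpy() instead.

# detach() returns a tensor that no longer tracks gradients,
# so it can safely be converted to a NumPy array.
loss_value = loss.detach().numpy()
print(loss_value)  # a NumPy float32 scalar, free of gradient tracking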

I've updated the notebook to reflect this.

Answer selected by creatorcao