I wrote the following code to functionize my training:
```python
def train_step(model: torch.nn.Module,
               data_loader: torch.utils.data.DataLoader,
               loss_fn: torch.nn.Module,
               optimizer: torch.optim.Optimizer,
               accuracy_fn,
               device: torch.device = device):
    train_loss, train_acc = 0, 0
    model.train()
    for batch, (X, y) in enumerate(data_loader):
        # Put data on target device
        X, y = X.to(device), y.to(device)

        # Forward pass
        y_pred = model(X)

        # Calculate the loss and accuracy (per batch)
        train_loss += loss_fn(y_pred, y)  # accumulate train loss
        train_acc += accuracy_fn(y_true=y, y_pred=y_pred.argmax(dim=1))  # accumulate train acc (go from logits -> prediction labels)

        # Optimizer zero grad
        optimizer.zero_grad()

        # Loss backward
        loss.backward()

        # Optimizer step
        optimizer.step()  # Optimizer will update model's parameters once per batch instead of once per epoch

    train_loss /= len(data_loader)  # train loss per batch
    train_acc /= len(data_loader)  # train acc per batch
    print(f"Train loss: {train_loss:.5f} | Train acc: {train_acc:.2f}%")
```
And I'm getting the following error when I call the `train_step()` function:

```
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```

I tried replacing `loss.backward()` with `loss.backward(retain_graph=True)`, but it seems to make no difference. How can I solve this error?
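For reference, here is a minimal sketch of per-batch loss handling that avoids this particular error. It assumes the likely cause: in the code above, `loss` is never assigned inside the loop, so `loss.backward()` is called on a tensor left over from an outer scope whose graph has already been freed, and `train_loss` accumulates graph-carrying tensors. `accuracy_fn` and the module-level `device` are assumed to be defined as in the question.

```python
import torch

def train_step(model: torch.nn.Module,
               data_loader: torch.utils.data.DataLoader,
               loss_fn: torch.nn.Module,
               optimizer: torch.optim.Optimizer,
               accuracy_fn,
               device: torch.device = device):
    train_loss, train_acc = 0, 0
    model.train()
    for batch, (X, y) in enumerate(data_loader):
        X, y = X.to(device), y.to(device)

        # Forward pass
        y_pred = model(X)

        # Compute the loss for *this* batch and keep a handle to it
        loss = loss_fn(y_pred, y)
        train_loss += loss.item()  # accumulate a plain float, not the graph-carrying tensor
        train_acc += accuracy_fn(y_true=y, y_pred=y_pred.argmax(dim=1))

        # Backward pass and parameter update on the current batch's loss only
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    train_loss /= len(data_loader)
    train_acc /= len(data_loader)
    print(f"Train loss: {train_loss:.5f} | Train acc: {train_acc:.2f}%")
```

With the loss recreated and backpropagated inside each iteration, no graph from a previous batch is ever reused, so `retain_graph=True` should not be needed.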