Error during teardown of trainer.test() #15851
Unanswered
Alec-Wright asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
I've been getting this strange error. Training completes fine. The test step also runs fine: it computes the test loss and saves it to the logger. Then, as it exits trainer.test() (at least I think that's where the error occurs), I get an error complaining that an inference tensor is being used where it shouldn't be.

I'm not clear why this is happening, or what the trainer is trying to do when the error occurs. As far as I can tell, I'm handling the RNN hidden state in exactly the same way as in the validation loop (roughly as in the sketch below), and validation doesn't produce this error.

Any help would be appreciated! Thanks.
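For concreteness, here is a stripped-down sketch of the kind of hidden-state handling I mean; the module, layer sizes, and names are illustrative placeholders rather than my actual code.

```python
import torch
import pytorch_lightning as pl


class SketchRNN(pl.LightningModule):
    """Illustrative stand-in: an RNN whose hidden state is carried
    across batches via an attribute on the module."""

    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
        self.head = torch.nn.Linear(16, 1)
        self.hidden = None  # persisted hidden state

    def forward(self, x):
        out, self.hidden = self.rnn(x, self.hidden)
        return self.head(out)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        # Detach so the stored state doesn't keep the autograd graph alive.
        self.hidden = self.hidden.detach()
        self.log("val_loss", loss)

    def test_step(self, batch, batch_idx):
        # Same pattern as validation_step. Note that recent Lightning
        # versions run trainer.test() under torch.inference_mode() by
        # default, so tensors created here (including self.hidden) are
        # inference tensors and remain attached to the module after the
        # test loop finishes.
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        self.hidden = self.hidden.detach()
        self.log("test_loss", loss)
```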
Replies: 1 comment

Did you figure this out? I'm getting the same error when loading a checkpoint with trainer.fit(ckpt_path=...). If I load the checkpoint with just torch.load and load_state_dict, it loads just fine.
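To spell that out, here is a minimal sketch of the two code paths; MyLightningModule and the checkpoint path are placeholders for my own module and file, not names from this thread.

```python
import torch
import pytorch_lightning as pl

from my_project import MyLightningModule  # placeholder: your own LightningModule

ckpt_path = "checkpoints/last.ckpt"  # placeholder path

# Restoring only the weights by hand works fine:
model = MyLightningModule()
checkpoint = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])  # Lightning stores weights under "state_dict"

trainer = pl.Trainer()
trainer.fit(model)  # fresh fit; no trainer/loop state is restored

# Letting the Trainer restore the full training state is what hits the error:
# trainer.fit(model, ckpt_path=ckpt_path)
```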