Different result from train and validation #8921
Unanswered
chamecall asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment
-
Dear @chamecall,
It would be hard to provide an answer without access to the code. However, be aware that the data for training are randomly shuffled.
Best,
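To illustrate that point, here is a minimal sketch (the dataset, tensor shapes, and loader names are assumptions, not taken from the thread): with `shuffle=True` the training batches are drawn in a random order each epoch, whereas setting `shuffle=False` on both loaders makes training and validation see the data in exactly the same order.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# One fixed batch of 32 samples with 16 features and binary targets (illustrative shapes).
dataset = TensorDataset(torch.randn(32, 16), torch.randint(0, 2, (32, 1)).float())

# shuffle=False keeps the ordering identical between the two loaders;
# with shuffle=True the training batches would be drawn in a random order each epoch.
train_loader = DataLoader(dataset, batch_size=32, shuffle=False)
val_loader = DataLoader(dataset, batch_size=32, shuffle=False)
```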
-
I wanted to check the training and validation behaviour on one fixed batch, so I created two identical datasets (train and val) with corresponding dataloaders and called the Trainer.fit() method for 40 epochs.
The task is binary classification, and the loss function is binary_cross_entropy_with_logits.
During training everything looks right: with every epoch the loss roughly decreases and the predictions become more and more accurate. During validation, however (and I would like to remind you that this is the same data that is used for training), the loss does not decrease and the class predictions remain poor.
Can you explain what could be the reason for this behaviour?
Below I attached the outputs for the 40 epochs and the loss plot.
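For reference, a minimal sketch of the setup described above, assuming a toy LightningModule with BCEWithLogitsLoss; the module name, network, dataset, and tensor shapes are hypothetical placeholders, not taken from the original post:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class FixedBatchClassifier(pl.LightningModule):
    """Toy binary classifier trained and validated on the same fixed batch."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
        self.criterion = nn.BCEWithLogitsLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("val_loss", loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# One fixed batch used for both the training and the validation loader.
x = torch.randn(32, 16)
y = torch.randint(0, 2, (32, 1)).float()
dataset = TensorDataset(x, y)
train_loader = DataLoader(dataset, batch_size=32)
val_loader = DataLoader(dataset, batch_size=32)

trainer = pl.Trainer(max_epochs=40)
trainer.fit(FixedBatchClassifier(), train_loader, val_loader)
```

With the batch size equal to the dataset size, each epoch consists of a single training step and a single validation step on the same data, which reproduces the experiment described above.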