trainer.validate() and trainer.test() on the same dataset give different results
#17326
Unanswered
sarafrr asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
Hi everyone,

I am using pytorch-lightning version 1.9.4.
I am reporting a strange behaviour of the two functions on the same dataset (I am using the same Dataset and the same DataLoader). I wanted to check whether the metric computed on the validation set during training is consistent with the one I compute at inference time, using the same (validation) dataset and the saved checkpoint.
I get the correct results (the ones reported in the logs and in the checkpoint's name) when using the trainer.validate() function: it returns the same values as the ones stored in the logger and in the name of the checkpoint itself. This is not the case with trainer.test().

Note: I am using the same batch size, which is 16.
I also use the same seed, set with:
import pytorch_lightning as PL
PL.seed_everything(123)
The model does not have any randomness other than its nn.Dropout and nn.BatchNorm2d layers, which, however, should be disabled by default during validation.