Replies: 1 comment
-
I have an update to this observation. I ran 1000 random samples from the test_dataloader and compared the predicted label, rounding up or down from 0.5, against the correct label I had provided.
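For anyone who wants to reproduce that check, here is a minimal sketch. It assumes `model` and `test_dataloader` from the original setup (both names are guesses, as the actual code isn't shown) and that the model outputs one probability per sample:

```python
import torch

model.eval()  # switch dropout/batch-norm to inference behavior
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in test_dataloader:
        probs = model(inputs)           # assumed to be probabilities in [0, 1]
        preds = (probs >= 0.5).float()  # round at the 0.5 threshold
        correct += (preds.view(-1) == labels.view(-1)).sum().item()
        total += labels.numel()
        if total >= 1000:               # stop after roughly 1000 samples
            break

print(f"accuracy on {total} samples: {correct / total:.3f}")
```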
-
Just a general question. I don't understand how I can have loss graphs for train and test that look alike and also look good, but when I try to test individual data in eval mode, the results make no sense. I have a binary set of targets, just 0 and 1. Looking at the output preds, they are always around 0.5: some a little higher and some a little lower, but not by much. And the targets they align to make no sense at all. It is as if there had been no training. Is it possible the weights get cleared somehow?

If anyone has seen this before and knows what it is likely to be, I would really like to hear how it can be fixed.

I know my strategy could also be bogus, but then it seems to me I should not see any loss improvement in the training run.
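Two common causes of "everything comes out near 0.5" fit this description, though without the training code these are only guesses: evaluating a freshly constructed model instead of the trained one (which would explain the "weights get cleared" suspicion), or training with `nn.BCEWithLogitsLoss` and then reading the raw outputs at eval time as if they were probabilities (logits near zero look like probabilities near 0.5). A sketch covering both, with `model`, `x`, and `"model.pt"` as placeholder names:

```python
import torch

# After training, persist the weights so evaluation uses the trained model,
# not a freshly initialized instance.
torch.save(model.state_dict(), "model.pt")

# At evaluation time, load the weights back into the same architecture.
model.load_state_dict(torch.load("model.pt"))
model.eval()  # disable dropout / freeze batch-norm statistics

with torch.no_grad():
    logits = model(x)              # x: a batch of test inputs (placeholder)
    probs = torch.sigmoid(logits)  # only needed if the model outputs raw logits
    preds = probs.round()          # 0/1 predictions at the 0.5 threshold
```

If the sigmoid is already inside the model's `forward`, applying it again here would itself squash everything toward 0.5 to 0.73, so it is worth checking which side of the model the activation lives on.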