Replies: 1 comment
-
Hi, I was having a problem trying to do the same thing. The way I did it was to write a plugin that evaluates the strategy on the test stream after each epoch, like so:

```python
# In recent Avalanche versions, SupervisedPlugin is exposed via avalanche.core.
from avalanche.core import SupervisedPlugin


class EpochTesting(SupervisedPlugin):
    def __init__(self, test_stream):
        """
        Plugin that allows you to test the model after each epoch.
        """
        super().__init__()
        self.test_stream = test_stream

    def after_training_epoch(self, strategy: "BaseStrategy", **kwargs):
        """
        We test after each epoch.
        """
        print("Testing after epoch")
        strategy.eval(self.test_stream)
```

I then added this plugin to the strategy's plugins list. Hope this helps!
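For context, here is a minimal sketch of how the plugin might be wired into a strategy. The `SplitMNIST` benchmark, `SimpleMLP` model, and `Naive` strategy are just convenient placeholders, not part of the original reply, and depending on your Avalanche version `Naive` may need to be imported from `avalanche.training.supervised` instead:

```python
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training import Naive

# Placeholder benchmark and model; any Avalanche benchmark and torch model work.
benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=2,
    # The plugin from the reply above: evaluates on the test stream
    # after every training epoch.
    plugins=[EpochTesting(benchmark.test_stream)],
)

for experience in benchmark.train_stream:
    strategy.train(experience)
```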
-
Hi,
I'm trying to log the test accuracy after each epoch during training.
When I use `EpochAccuracy`, the result is logged as
`'Top1_Acc_Epoch/train_phase/train_stream/Task000'`.
If I understand correctly, this accuracy is calculated only on the train stream. I tried to change the mode to `mode="eval"`, but that is not allowed for `reset_at="epoch", emit_at="epoch"`. I can define `reset_at="iteration", emit_at="iteration"`, but I'm not sure what that means.
How can I add a metric that calculates the accuracy on the test stream at several points during training? It doesn't have to be after each epoch, but I need a few evaluation points on the test stream between experiences.
Thank you!
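For reference, here is a minimal sketch of the metric setup being described, assuming the standard `accuracy_metrics` helper and an `EvaluationPlugin` (the logger choice is arbitrary). Combined with an explicit `strategy.eval(...)` call mid-training, such as the plugin shown in the reply above, the experience- and stream-level accuracies are then reported on the test stream at those points:

```python
from avalanche.evaluation.metrics import accuracy_metrics
from avalanche.logging import InteractiveLogger
from avalanche.training.plugins import EvaluationPlugin

# epoch=True is the train-stream metric mentioned above
# ('Top1_Acc_Epoch/train_phase/train_stream/Task000').
# experience=True / stream=True are the eval-phase counterparts,
# emitted whenever strategy.eval(...) runs on the test stream.
eval_plugin = EvaluationPlugin(
    accuracy_metrics(epoch=True, experience=True, stream=True),
    loggers=[InteractiveLogger()],
)
```

The resulting `eval_plugin` is typically passed to the strategy via its `evaluator` argument.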