Replies: 1 comment
-
Interestingly, I did this yesterday for my code base, which has many experiments, and I believe the callback usage is more modular. I had a similar concern to yours, but the way I justified it was: if you were running a manual loop, you would be collecting the results of each batch anyway. Lightning is running the loop for you, so in theory there should not be much extra overhead.
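As an illustration of that pattern, here is a minimal sketch of a metric driven entirely from a callback. It assumes the PL 1.x `on_validation_batch_end` signature and a `validation_step` that returns a dict with `preds` and `target` keys; the metric object is a placeholder with `update`/`compute`/`reset` methods:

```python
import pytorch_lightning as pl


class MetricCallback(pl.Callback):
    def __init__(self, metric):
        super().__init__()
        self.metric = metric  # placeholder: any object with update()/compute()/reset()

    def on_validation_batch_end(self, trainer, pl_module, outputs,
                                batch, batch_idx, dataloader_idx=0):
        # `outputs` is exactly what validation_step returned for this batch,
        # so the metric is updated incrementally, just as it would be in a
        # manual loop, and no epoch-long list of predictions is kept here.
        self.metric.update(outputs["preds"], outputs["target"])

    def on_validation_epoch_end(self, trainer, pl_module):
        # Callback hooks that run inside the validation loop may log
        # through the module.
        pl_module.log("val_metric", self.metric.compute())
        self.metric.reset()
```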
-
Hi,
I am wondering what the proper way to add a metric is, and whether it is something that can be done with a callback, keeping the code very clean (no metric code in the actual training code).
One issue I am facing is that we need to return the model's entire predictions from validation_step, and with that Lightning stores all of the predictions in a list in memory.
This fills up memory very quickly, especially with a large model and a large validation dataset.
Is there any way to disable storing the predictions in memory, or a workaround to transfer predictions from the LightningModule to the callback?
my callback is something like this:

```python
class MetricCallBack(pl.Callback):
    def __init__(self):
        super().__init__()
        self.metric = MyMetricClass()
```
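For reference, a sketch of what the module side of this could look like. It assumes PL 1.x behavior, where per-batch outputs of `validation_step` are only accumulated into an epoch-long list when `validation_epoch_end` is overridden, so the workaround is to return detached per-batch outputs and leave that hook undefined (`LitModel` and its shapes are hypothetical):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):  # hypothetical toy model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 4)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        # Return only what the metric callback needs, detached from the
        # autograd graph. Do NOT override validation_epoch_end: in PL 1.x
        # outputs are collected for the whole epoch only when that hook is
        # defined, so omitting it keeps memory flat while the callback
        # still sees each batch in on_validation_batch_end.
        return {"preds": self(x).detach(), "target": y}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```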
Thanks in advance,
Yoni