Multiple sequential trainings slow down speed #17490
Closed · Answered
Asked by HadiSDev in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Hi. I am running multiple trainings sequentially in a loop. After each loop, the training itself slows down incrementally (epochs/sec). I am thinking I need to reset something, but I have not been able to find that information. I am using torch-cpu 2.0.0. Any idea what I am doing wrong? :(
Answered by HadiSDev, Apr 27, 2023
I tried removing the Neptune logger and the model-saving functionality, and that seemed to solve the issue.
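The answer only reports that dropping the Neptune logger and checkpointing removed the slowdown; the thread does not explain why. A plausible mechanism (my assumption, not confirmed here) is that a single stateful logger or callback object was shared across all sequential training runs, so its accumulated state made each successive run more expensive. The sketch below uses plain stand-in classes rather than real Lightning/Neptune objects; `FakeLogger` and `run_training` are hypothetical names chosen to illustrate the pattern of reusing one stateful object versus constructing a fresh one per run.

```python
# Hypothetical sketch: why reusing one stateful logger across sequential
# trainings can make each run slower, and why a fresh logger per run
# (or removing it, as in the accepted answer) avoids that.

class FakeLogger:
    """Stand-in for a real experiment logger (e.g. a NeptuneLogger)."""
    def __init__(self):
        self.history = []  # grows for the lifetime of this object

    def log(self, value):
        self.history.append(value)
        # Pretend each call has a cost proportional to accumulated state.
        return len(self.history)

def run_training(logger, steps=100):
    """Stand-in for one trainer.fit() call: logs once per step."""
    cost = 0
    for step in range(steps):
        cost += logger.log(step)
    return cost

# Anti-pattern: one logger shared by every sequential run, so each run
# pays for all the state accumulated by the previous runs.
shared = FakeLogger()
shared_costs = [run_training(shared) for _ in range(3)]

# Fix: build a fresh logger per run (or drop the logger entirely).
fresh_costs = [run_training(FakeLogger()) for _ in range(3)]

print(shared_costs)  # → [5050, 15050, 25050]: each run costlier than the last
print(fresh_costs)   # → [5050, 5050, 5050]: every run costs the same
```

The same reasoning applies to any callback that keeps history (model checkpointing, metric logging): if sequential trainings must keep logging, constructing new logger and callback instances for each run, rather than reusing one, keeps per-run cost flat.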