Answered by LuluW8071 · Jul 4, 2024
Try using a more complex architecture and a larger batch size. A smaller batch size and a less complex architecture generally lead to slower training, since the GPU is left underutilized. Although the GPU itself can also affect speed, the A100 (released in 2020) should not be the bottleneck here. The smaller batch size and less complex architecture are therefore the likely reasons for the slow training.
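For reference, here is a minimal PyTorch sketch of the two suggested changes, assuming a standard `DataLoader`-based training loop; the model, dataset, and hyperparameter values below are placeholders and are not taken from the original project:

```python
# Sketch only: illustrates a larger batch size and a deeper model, not the project's actual code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for the real dataset.
X = torch.randn(8192, 128)
y = torch.randint(0, 10, (8192,))
dataset = TensorDataset(X, y)

# A larger batch size keeps the GPU busier per step (fewer, bigger kernel launches).
loader = DataLoader(dataset, batch_size=256, shuffle=True,   # e.g. 256 instead of 16/32
                    num_workers=4, pin_memory=True)          # overlap data loading with compute

# A deeper/wider model gives the GPU more work per sample.
model = nn.Sequential(
    nn.Linear(128, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for xb, yb in loader:
    xb, yb = xb.to(device, non_blocking=True), yb.to(device, non_blocking=True)
    optimizer.zero_grad()
    loss = criterion(model(xb), yb)
    loss.backward()
    optimizer.step()
```

Raising `batch_size` and `num_workers` while watching GPU utilization (e.g. with `nvidia-smi`) is usually enough to tell whether the input pipeline or the model is the bottleneck.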
Answer selected by OrangeAoo