[NOT A CRUCIAL QUESTION] Are built-in torch functions really "faster"? #1099
Unanswered · sachinlodhi asked this question in Q&A
So I was watching the matrix multiplication section and was trying to compare the manual (for-loop) method with the PyTorch built-in method, torch.matmul().

Interestingly, I found that even for a 1D vector the for loop was faster than torch.matmul(). A screenshot is attached.
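For reference, here is a minimal sketch of the kind of comparison described above (not the code from the screenshot; the vector size, the `timeit` setup, and the `manual_dot` helper are illustrative assumptions):

```python
# Sketch: hand-written dot product in a Python loop vs. torch.matmul() on small 1D tensors.
import timeit
import torch

a = torch.rand(10)  # tiny 1D vectors (size chosen for illustration)
b = torch.rand(10)

def manual_dot(x, y):
    # Element-wise multiply-and-accumulate in a plain Python loop.
    total = 0.0
    for i in range(len(x)):
        total += x[i] * y[i]
    return total

# Time both approaches; absolute numbers will vary by machine.
loop_time = timeit.timeit(lambda: manual_dot(a, b), number=1000)
torch_time = timeit.timeit(lambda: torch.matmul(a, b), number=1000)
print(f"for loop:     {loop_time:.4f} s")
print(f"torch.matmul: {torch_time:.4f} s")
```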
Replies: 1 comment

Interesting experiment. Yes, the for loop can come out ahead for small 1D inputs, but for 2D and higher-dimensional inputs torch is faster. This is because each torch call pays a fixed dispatch overhead (crossing the Python/C++ boundary, shape and dtype checks), which can dominate when the whole computation is a handful of multiplications on a tiny vector. Once the input is a matrix, torch.matmul() runs a single optimized, vectorized kernel, while the Python loop pays interpreter overhead on every element, so the built-in wins by a wide margin.
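As a rough illustration of how the comparison flips for 2D inputs, here is a hedged sketch (the size n = 32 and the `manual_matmul` helper are assumptions, not code from this thread):

```python
# Sketch: naive triple-nested-loop matrix multiply vs. torch.matmul() on 2D tensors.
import timeit
import torch

n = 32  # small square matrices, chosen only to keep the naive loop's runtime reasonable
A = torch.rand(n, n)
B = torch.rand(n, n)

def manual_matmul(X, Y):
    # Naive O(n^3) matrix multiply with Python loops.
    rows, inner, cols = X.shape[0], X.shape[1], Y.shape[1]
    out = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(inner):
                s += X[i, k] * Y[k, j]
            out[i, j] = s
    return out

loop_time = timeit.timeit(lambda: manual_matmul(A, B), number=1)
torch_time = timeit.timeit(lambda: torch.matmul(A, B), number=1)
print(f"nested loops: {loop_time:.4f} s")
print(f"torch.matmul: {torch_time:.4f} s")
```

Exact timings vary by machine, but for 2D inputs of even this modest size the gap between the Python loops and the single torch.matmul() call is typically several orders of magnitude.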