What is the difference between torch.tensor and torch.as_tensor? #428
Replies: 1 comment
-
When working with large NumPy arrays in PyTorch, it is generally more efficient to use torch.as_tensor() instead of torch.tensor(). The reason is that torch.tensor() always creates a new copy of the data, which can be time-consuming and memory-intensive for large arrays. torch.as_tensor(), on the other hand, shares the memory of the original NumPy array when it can, avoiding the copy. This can lead to significant performance improvements, especially when converting large datasets. Two caveats are worth noting: torch.as_tensor() only avoids the copy when the requested dtype and device already match the input (e.g. a CPU float32 array), and because the memory is shared, in-place changes to the NumPy array will also change the tensor (and vice versa). Read the documentation carefully to make sure this behavior is appropriate for your use case.
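A minimal sketch of the difference, using a CPU float32 array so that torch.as_tensor() can share the buffer rather than copy it:

```python
import numpy as np
import torch

a = np.ones(3, dtype=np.float32)

t_copy = torch.tensor(a)     # always copies the data
t_view = torch.as_tensor(a)  # shares memory (dtype and device match)

a[0] = 99.0  # mutate the NumPy array in place

print(t_copy[0].item())  # 1.0  -> the copy is unaffected
print(t_view[0].item())  # 99.0 -> the shared-memory tensor sees the change
```

If you pass a different dtype, e.g. torch.as_tensor(a, dtype=torch.float64), a conversion (and therefore a copy) happens anyway, so the sharing advantage disappears.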
-
After googling, I understand that torch.tensor() copies the input data, while torch.as_tensor() shares the memory of the input. I'm still confused about when to use which, and what the advantages and disadvantages of each are.
Thanks in advance.