Replies: 1 comment
-
I see it now: for non-parameters, I have to use `self.register_buffer` (https://pytorch-lightning.readthedocs.io/en/latest/accelerators/accelerator_prepare.html).
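For reference, a minimal sketch of that pattern (module name, matrix shape, and contents are hypothetical): registering a fixed, non-learnable tensor as a buffer makes Lightning move it to the module's device along with the parameters, so no per-step `.to()` call is needed.

```python
import torch
import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # A fixed (non-learnable) matrix: registering it as a buffer means
        # it is moved to the GPU together with the module's parameters.
        # persistent=True (the default) also includes it in the state_dict.
        self.register_buffer("A", torch.randn(8, 8))

    def forward(self, x):
        # self.A is on the same device as x once Lightning has moved the
        # module to the GPU; gradients still flow through the matmul to x.
        return x @ self.A
```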
-
New to Lightning. I saw some examples of this error, but they didn't seem to match my case. Below is the relevant part of the module. I run it with `trainer = pl.Trainer(gpus=1)`; however, I get the error. Looking closer, `self.A` is not on the GPU. I thought it would be placed there automatically since it is created in `__init__`. How should I put it there? It's a fixed matrix (not a learnable parameter, but it does need to be backpropagated through). I see I can use `self.device` to put tensors on the GPU, but I want this matrix placed on the GPU in `__init__`, without moving it on every training step.