@kami93 that's the default init for those layers
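A quick sketch illustrating the reply, assuming standard PyTorch (which timm builds on): `nn.LayerNorm` already initializes its weight to ones and its bias to zeros, so the removed explicit constant init was redundant. This is an illustration of the point, not the timm source itself.

```python
import torch
import torch.nn as nn

# PyTorch's nn.LayerNorm defaults: weight = 1, bias = 0.
ln = nn.LayerNorm(8)
assert torch.all(ln.weight == 1.0)
assert torch.all(ln.bias == 0.0)

# The removed explicit init in ViT amounted to the same thing:
nn.init.ones_(ln.weight)
nn.init.zeros_(ln.bias)
```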
Hi. I noticed that the constant initializations for LayerNorms (ones for the weights, zeros for the biases) in ViT were removed in the latest master. I am wondering about the reason for this change and what its consequences are. I tried to look for clues in the commit messages and changelogs, but could not find any. I really appreciate any help you can provide.
Comparing:
https://github.com/rwightman/pytorch-image-models/blob/372ad5fa0dbeb74dcec81db06e9ff69b3d5a2eb6/timm/models/vision_transformer.py#L379-L403 (the latest master)
vs.
https://github.com/rwightman/pytorch-image-models/blob/v0.5.4/timm/models/vision_transformer.py#L376-L408 (tag v0.5.4)