Commit e741370 ("Update README.md", parent de15b8b)

README.md: 42 additions, 0 deletions
* The Hugging Face Hub (https://huggingface.co/timm) is now the primary source for `timm` weights. Model cards include link to papers, original source, license.
* Previous 0.6.x can be cloned from [0.6.x](https://github.com/rwightman/pytorch-image-models/tree/0.6.x) branch or installed via pip with version.

### April 11, 2024
* Prepping for a long overdue 1.0 release; things have been stable for a while now.
* A significant feature that's been missing for a while: `features_only=True` support for ViT models with flat hidden states or non-standard module layouts (so far covering `'vit_*', 'twins_*', 'deit*', 'beit*', 'mvitv2*', 'eva*', 'samvit_*', 'flexivit*'`)
* The above feature support is achieved through a new `forward_intermediates()` API that can be used with a feature wrapping module or directly.
```python
model = timm.create_model('vit_base_patch16_224')

final_feat, intermediates = model.forward_intermediates(input)
output = model.forward_head(final_feat)  # pooling + classifier head

print(final_feat.shape)
torch.Size([2, 197, 768])

for f in intermediates:
    print(f.shape)
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])
torch.Size([2, 768, 14, 14])

print(output.shape)
torch.Size([2, 1000])
```
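The `(2, 197, 768)` final feature versus the `(2, 768, 14, 14)` intermediates above reflect a token-to-spatial conversion: the class token is dropped and the 196 remaining patch tokens are reshaped to a 14x14 map. A minimal sketch of that conversion in plain `torch` (shapes assume a 224px input with 16px patches and one class token; this is illustrative, not the actual `timm` internals):

```python
import torch

# Flat ViT hidden state: batch 2, 1 class token + 14*14 = 196 patch tokens, dim 768
tokens = torch.randn(2, 197, 768)

patch_tokens = tokens[:, 1:, :]        # drop class token -> (2, 196, 768)
feat = patch_tokens.transpose(1, 2)    # channels first    -> (2, 768, 196)
feat = feat.reshape(2, 768, 14, 14)    # restore 14x14 grid -> (2, 768, 14, 14)

print(feat.shape)  # torch.Size([2, 768, 14, 14])
```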
```python
model = timm.create_model('eva02_base_patch16_clip_224', pretrained=True, img_size=512, features_only=True, out_indices=(-3, -2,))
output = model(torch.randn(2, 3, 512, 512))

for o in output:
    print(o.shape)
torch.Size([2, 768, 32, 32])
torch.Size([2, 768, 32, 32])
```
* TinyCLIP vision tower weights added, thx [Thien Tran](https://github.com/gau-nernst)
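For context on the `out_indices=(-3, -2,)` argument in the example above: negative values count back from the final block, Python-style. A minimal sketch of how such indices could be resolved (the helper name is hypothetical, not `timm` API; a 12-block model is assumed):

```python
def normalize_out_indices(out_indices, num_blocks):
    # Map possibly-negative indices to absolute block positions,
    # mirroring Python's negative-indexing convention.
    return tuple(i if i >= 0 else num_blocks + i for i in out_indices)

# For a 12-block ViT, (-3, -2) selects blocks 9 and 10
print(normalize_out_indices((-3, -2), 12))  # (9, 10)
```

Using negative indices keeps the selection stable across model sizes: the same tuple picks the blocks nearest the output whether the backbone has 12 or 24 blocks.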
### Feb 19, 2024
* Next-ViT models added. Adapted from https://github.com/bytedance/Next-ViT
* HGNet and PP-HGNetV2 models added. Adapted from https://github.com/PaddlePaddle/PaddleClas by [SeeFun](https://github.com/seefun)
