Image Classification
timm
PyTorch
Safetensors
rwightman (HF staff) committed
Commit 14da9fc
1 Parent(s): 42e5d06

Update model config and README

Files changed (1):
  1. README.md +36 -14
README.md CHANGED
@@ -26,6 +26,7 @@ NOTE: So far, these are the only known MNV4 weights. Official weights for Tensor
 - **Dataset:** ImageNet-1k
 - **Papers:**
   - MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518
+  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
 - **Original:** https://github.com/tensorflow/models/tree/master/official/vision
 
 ## Model Usage
@@ -121,17 +122,38 @@ output = model.forward_head(output, pre_logits=True)
 ## Model Comparison
 ### By Top-1
 
-|model |top1 |top1_err|top5 |top5_err|param_count|img_size|
-|-------------------------------------------|------|--------|------|--------|-----------|--------|
-|mobilenetv4_conv_large.e500_r256_in1k |82.674|17.326 |96.31 |3.69 |32.59 |320 |
-|mobilenetv4_conv_large.e500_r256_in1k |81.862|18.138 |95.69 |4.31 |32.59 |256 |
-|mobilenetv4_hybrid_medium.e500_r224_in1k |81.276|18.724 |95.742|4.258 |11.07 |256 |
-|mobilenetv4_conv_medium.e500_r256_in1k |80.858|19.142 |95.768|4.232 |9.72 |320 |
-|mobilenetv4_hybrid_medium.e500_r224_in1k |80.442|19.558 |95.38 |4.62 |11.07 |224 |
-|mobilenetv4_conv_blur_medium.e500_r224_in1k|80.142|19.858 |95.298|4.702 |9.72 |256 |
-|mobilenetv4_conv_medium.e500_r256_in1k |79.928|20.072 |95.184|4.816 |9.72 |256 |
-|mobilenetv4_conv_medium.e500_r224_in1k |79.808|20.192 |95.186|4.814 |9.72 |256 |
-|mobilenetv4_conv_blur_medium.e500_r224_in1k|79.438|20.562 |94.932|5.068 |9.72 |224 |
-|mobilenetv4_conv_medium.e500_r224_in1k |79.094|20.906 |94.77 |5.23 |9.72 |224 |
-|mobilenetv4_conv_small.e1200_r224_in1k |74.292|25.708 |92.116|7.884 |3.77 |256 |
-|mobilenetv4_conv_small.e1200_r224_in1k |73.454|26.546 |91.34 |8.66 |3.77 |224 |
+| model |top1 |top1_err|top5 |top5_err|param_count|img_size|
+|--------------------------------------------------------------------------------------------------|------|--------|------|--------|-----------|--------|
+| [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |82.674|17.326 |96.31 |3.69 |32.59 |320 |
+| [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |81.862|18.138 |95.69 |4.31 |32.59 |256 |
+| [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |81.276|18.724 |95.742|4.258 |11.07 |256 |
+| [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |80.858|19.142 |95.768|4.232 |9.72 |320 |
+| [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |80.442|19.558 |95.38 |4.62 |11.07 |224 |
+| [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |80.142|19.858 |95.298|4.702 |9.72 |256 |
+| [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |79.928|20.072 |95.184|4.816 |9.72 |256 |
+| [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.808|20.192 |95.186|4.814 |9.72 |256 |
+| [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |79.438|20.562 |94.932|5.068 |9.72 |224 |
+| [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.094|20.906 |94.77 |5.23 |9.72 |224 |
+| [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |74.292|25.708 |92.116|7.884 |3.77 |256 |
+| [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |73.454|26.546 |91.34 |8.66 |3.77 |224 |
+
+## Citation
+```bibtex
+@article{qin2024mobilenetv4,
+  title={MobileNetV4-Universal Models for the Mobile Ecosystem},
+  author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others},
+  journal={arXiv preprint arXiv:2404.10518},
+  year={2024}
+}
+```
+```bibtex
+@misc{rw2019timm,
+  author = {Ross Wightman},
+  title = {PyTorch Image Models},
+  year = {2019},
+  publisher = {GitHub},
+  journal = {GitHub repository},
+  doi = {10.5281/zenodo.4414861},
+  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+}
+```
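As context for the diff above: the second hunk's header quotes `model.forward_head(output, pre_logits=True)` from the README's Model Usage section, and the updated table links each checkpoint on the Hub. Below is a minimal sketch, not taken from the README itself, of how one of the listed checkpoints could be loaded with `timm` and cross-checked against the table; the model name and figures come from the table, everything else is illustrative and assumes `timm`, `torch`, and Hub access.

```python
import timm
import torch

# Checkpoint name taken from the comparison table above; pretrained=True pulls
# the weights from the Hugging Face Hub.
model = timm.create_model('mobilenetv4_conv_medium.e500_r256_in1k', pretrained=True)
model = model.eval()

# Rough cross-check against the table's param_count column (~9.72M for this variant).
n_params = sum(p.numel() for p in model.parameters())
print(f'params: {n_params / 1e6:.2f}M')

# Dummy batch at the 256x256 resolution listed for this checkpoint.
x = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    # Standard classification forward: ImageNet-1k logits.
    logits = model(x)

    # Pre-logits image embedding, the same forward_features/forward_head path
    # referenced in the hunk header above.
    features = model.forward_features(x)
    embedding = model.forward_head(features, pre_logits=True)

print(logits.shape)     # torch.Size([1, 1000])
print(embedding.shape)  # pooled feature vector; width depends on the variant
```

With `pre_logits=True` the head returns the pooled features just before the final classifier layer, which is what the embedding snippet referenced in the hunk header produces.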