---
license: cc-by-4.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
language:
- en
tags:
- vision transformer
- simpool
- computer vision
- deep learning
---

# Supervised ViT-S/16 (small-sized Vision Transformer with patch size 16) model with SimPool

ViT-S model with SimPool (no gamma) trained on ImageNet-1k for 100 epochs.

SimPool is a simple attention-based pooling method applied at the end of the network, introduced in this ICCV 2023 [paper](https://arxiv.org/pdf/2309.06891.pdf) and released in this [repository](https://github.com/billpsomas/simpool/).

Disclaimer: This model card is written by the author of SimPool, i.e. [Bill Psomas](http://users.ntua.gr/psomasbill/).
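
As a starting point for using the model, here is a minimal sketch for inspecting the released weights. It assumes the checkpoint in this repository is a standard PyTorch `.pth` file; the filename below is hypothetical, so adjust it to the actual file.

```python
import torch

# Hypothetical filename: replace with the actual checkpoint file in this repository.
ckpt = torch.load("vits_simpool_no_gamma_100ep.pth", map_location="cpu")

# Training checkpoints are often wrapped in a dict; unwrap if a 'state_dict' key exists.
state_dict = ckpt.get("state_dict", ckpt)

# Inspect parameter names and shapes before loading them into a ViT-S/16 backbone
# built from the SimPool repository code.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```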

## Motivation

Convolutional networks and vision transformers have different forms of pairwise interactions, pooling across layers and pooling at the end of the network. Does the latter really need to be different?

As a by-product of pooling, vision transformers provide spatial attention for free, but this is most often of low quality unless self-supervised, which is not well studied. Is supervision really the problem?

## Method

SimPool is a simple attention-based pooling mechanism used as a replacement for the default one in both convolutional and transformer encoders. For transformers, we completely discard the [CLS] token.

Interestingly, we find that, whether supervised or self-supervised, SimPool improves performance on pre-training and downstream tasks and provides attention maps delineating object boundaries in all cases.

One could thus call SimPool universal.
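
To make the pooling step concrete, here is a simplified PyTorch sketch of SimPool-style cross-attention pooling (no gamma), written from the paper's description rather than taken from the official code: global average pooling initializes a single query, the patch tokens act as keys, and the resulting attention map pools the patch features into one vector.

```python
import torch
import torch.nn as nn

class SimPoolSketch(nn.Module):
    """Illustrative SimPool-style attention pooling (no gamma).

    A simplified sketch; see the official repository for the exact implementation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_k = nn.LayerNorm(dim)
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: patch tokens of shape (batch, num_patches, dim); no [CLS] token is used.
        gap = x.mean(dim=1, keepdim=True)              # (B, 1, D): GAP initializes the query
        q = self.wq(self.norm_q(gap))                  # (B, 1, D)
        k = self.wk(self.norm_k(x))                    # (B, N, D)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, 1, N): one attention map over patches
        attn = attn.softmax(dim=-1)
        return (attn @ x).squeeze(1)                   # (B, D): attention-weighted pooled feature

pool = SimPoolSketch(dim=384)        # ViT-S embedding dimension
tokens = torch.randn(2, 196, 384)    # 14x14 = 196 patches for a 224x224 image, patch size 16
print(pool(tokens).shape)            # torch.Size([2, 384])
```

Since the attention map is a distribution over the 196 patches, it can also be reshaped to 14x14 and visualized, which is where the object-delineating attention maps mentioned above come from.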

## BibTeX entry and citation info

```bibtex
@misc{psomas2023simpool,
    title={Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?},
    author={Bill Psomas and Ioannis Kakogeorgiou and Konstantinos Karantzalos and Yannis Avrithis},
    year={2023},
    eprint={2309.06891},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```