---
tags:
- vision
---

# Model Card: GroupViT

This checkpoint was uploaded by Jiarui Xu.

## Model Details

The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, and Xiaolong Wang.
Inspired by [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation over any given set of vocabulary categories.
14
+
15
+ ### Model Date
16
+
17
+ June 2022
18
+
19
+ ### Abstract
20
+
21
+ Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
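
For intuition, here is a minimal sketch of the symmetric image-text contrastive (InfoNCE-style) objective the abstract refers to. The function name, shapes, and temperature value are illustrative, not GroupViT's actual training code.

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # image_embeds, text_embeds: (batch, dim) embeddings of matched image-text pairs
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    # Pairwise cosine similarities, sharpened by the temperature
    logits = image_embeds @ text_embeds.t() / temperature
    # Matched pairs lie on the diagonal of the similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```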

### Documents

- [GroupViT Paper](https://arxiv.org/abs/2202.11094)

### Use with Transformers

```python
from PIL import Image
import requests

from transformers import AutoProcessor, GroupViTModel

model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
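
Since zero-shot segmentation is the model's headline capability, a sketch of that use follows. It assumes the `output_segmentation` argument and `segmentation_logits` output field of the Transformers GroupViT implementation; check the documentation linked below for the exact API in your version. The prompt list is illustrative.

```python
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, GroupViTModel

model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One prompt per category in the target vocabulary
texts = ["a photo of a cat", "a photo of a remote control"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    # output_segmentation=True additionally returns per-pixel logits
    outputs = model(**inputs, output_segmentation=True)

# (batch_size, num_texts, height, width): one logit map per text prompt
seg_logits = outputs.segmentation_logits
# Per-pixel index of the best-matching prompt
seg_map = seg_logits.argmax(dim=1)[0]
```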

## Data

The model was trained on publicly available image-caption data, collected through a combination of crawling a handful of websites and using commonly used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means the data is more representative of the people and societies most connected to the internet, which tend to skew towards more developed nations and younger, male users.

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/groupvit).

### BibTeX entry and citation info

```bibtex
@article{xu2022groupvit,
  author  = {Xu, Jiarui and De Mello, Shalini and Liu, Sifei and Byeon, Wonmin and Breuel, Thomas and Kautz, Jan and Wang, Xiaolong},
  title   = {GroupViT: Semantic Segmentation Emerges from Text Supervision},
  journal = {arXiv preprint arXiv:2202.11094},
  year    = {2022},
}
```