anahita-b committed f0c6441 (1 parent: 8cf2f26)

Create README.md
Files changed (1): README.md (new file, +122 -0)
---
language: en
tags:
- bridgetower
- gaudi
license: mit
datasets:
- conceptual_captions
- sbu_captions
- visual_genome
- mscoco_captions
---

# BridgeTower large-itm-mlm model

The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, and Nan Duan.
The model was pretrained on English-language data using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).

BridgeTower was accepted at [AAAI'23](https://aaai.org/Conferences/AAAI-23/).

## Model description

The abstract from the paper is the following:

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
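To make the bridge mechanism concrete, here is a minimal, purely illustrative PyTorch sketch of how a bridge layer could inject a top-layer uni-modal representation into the input of a cross-modal encoder layer. The class name, the add-and-normalize fusion, and the shapes are assumptions for illustration, not the actual BridgeTower implementation (see [the repository](https://github.com/microsoft/BridgeTower) for that).

```python
import torch
import torch.nn as nn

class ToyBridgeLayer(nn.Module):
    """Illustrative only: fuses a top-layer uni-modal representation with the
    hidden state entering a cross-modal encoder layer (add, then layer norm)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_size)

    def forward(self, cross_modal_hidden: torch.Tensor, unimodal_hidden: torch.Tensor) -> torch.Tensor:
        return self.layer_norm(cross_modal_hidden + unimodal_hidden)

# Hypothetical shapes: batch of 1, 197 visual tokens, hidden size 1024.
bridge = ToyBridgeLayer(hidden_size=1024)
fused = bridge(torch.randn(1, 197, 1024), torch.randn(1, 197, 1024))
print(fused.shape)  # torch.Size([1, 197, 1024])
```

In BridgeTower itself, such connections link the top layers of the uni-modal encoders with each layer of the cross-modal encoder, so that visual and textual representations from different semantic levels are aligned and fused bottom-up.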
## Intended uses & limitations (TODO)

### How to use

Here is how to use this model to perform image and text matching:

```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")

# forward pass
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()
```
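Each entry of `scores` now holds the matching score computed above for one candidate caption; a few lines of plain Python (no further model calls) rank the candidates:

```python
# rank candidate captions by their image-text matching score
for text, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{score:8.3f}  {text}")

best_text = max(scores, key=scores.get)
print(f"Best match: {best_text!r}")
```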
Here is how to use this model to perform masked language modeling:

```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)

results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())

print(results)
# .a cat looking out of the window.
```
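If a single greedy decode is not enough, the continuation below sketches how to inspect the top candidate tokens at the masked position. It reuses `processor`, `encoding`, and `outputs` from the snippet above and assumes the processor exposes its underlying tokenizer (as 🤗 Transformers processors generally do); treat it as a sketch to adapt rather than guaranteed API.

```python
import torch

# locate the masked position in the tokenized input
mask_token_id = processor.tokenizer.mask_token_id
mask_positions = (encoding["input_ids"][0] == mask_token_id).nonzero(as_tuple=True)[0]

# top-5 candidate tokens for the first masked position
top5 = torch.topk(outputs.logits[0, mask_positions[0]], k=5)
for token_id, score in zip(top5.indices.tolist(), top5.values.tolist()):
    print(f"{processor.tokenizer.decode([token_id]).strip():<12} {score:.3f}")
```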
### Limitations and bias

TODO

## Training data

The BridgeTower model was pretrained on four public image-caption datasets:
- [Conceptual Captions (CC)](https://ai.google.com/research/ConceptualCaptions/)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)

The total number of unique images in the combined data is 4M.
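For reference, caption annotations for several of these corpora are published on the Hugging Face Hub under the dataset IDs listed in this card's metadata. The snippet below is a minimal, unverified sketch of loading two of them with 🤗 Datasets; the exact loading arguments and field names may differ between Hub versions, so check the individual dataset cards.

```python
from datasets import load_dataset

# Caption annotations only; the images themselves are fetched from the listed URLs.
cc = load_dataset("conceptual_captions", split="train")
sbu = load_dataset("sbu_captions", split="train")

print(cc[0])   # e.g. an image URL plus its caption
print(sbu[0])
```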
## Training procedure

### Preprocessing

TODO

### Pretraining

The model was pre-trained for 100k steps on 8 NVIDIA A100 GPUs with a batch size of 4096.
The optimizer used was AdamW with a learning rate of 1e-5. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 288 x 288.
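For orientation only, these hyperparameters map to roughly the following PyTorch setup. The variable names are illustrative and `model` is just a stand-in for the pretraining model; the actual training code lives in the original repository.

```python
import torch
from transformers import BridgeTowerForMaskedLM

# Hyperparameters quoted above (illustrative wiring, not the official training script).
LEARNING_RATE = 1e-5
GLOBAL_BATCH_SIZE = 4096   # aggregated over the 8 devices
TOTAL_STEPS = 100_000
IMAGE_SIZE = 288           # images are center-cropped to 288 x 288

# stand-in model; pretraining combines the MLM and ITM heads
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)
```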
## Evaluation results

Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on image retrieval and other downstream tasks.

### BibTeX entry and citation info

```bibtex
@article{xu2022bridge,
  title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
  author={Xu, Xiao and
          Wu, Chenfei and
          Rosenman, Shachar and
          Lal, Vasudev and
          Che, Wanxiang and
          Duan, Nan},
  journal={arXiv preprint arXiv:2206.08657},
  year={2022}
}
```