---
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- clip
- vision
datasets:
- Ziyang/yfcc15m
- conceptual_captions
---
<h1 align="center">UForm</h1>
<h3 align="center">
Multi-Modal Inference Library<br/>
For Semantic Search Applications<br/>
</h3>

---

UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents into a shared vector space!

This is the model card of the __English-only model__ with:

* a 4-layer BERT (2 layers for unimodal encoding, the remaining 2 for multimodal encoding)
* ViT-S/16 (image resolution of 224x224)
* an embedding size of 256


If you need a multilingual model, check [this one](https://huggingface.co/unum-cloud/uform-vl-multilingual).

## Evaluation

The following metrics were obtained with multimodal re-ranking (text-to-image retrieval):

| Dataset           | Recall@1 | Recall@5 | Recall@10 |
| :---------------- | -------: | -------: | --------: |
| Zero-Shot Flickr  | 0.565    | 0.790    | 0.860     |
| Zero-Shot MS-COCO | 0.281    | 0.525    | 0.645     |

ImageNet-Top1: 0.361 \
ImageNet-Top5: 0.608

## Installation

```bash
pip install "uform[torch]"
```

## Usage

To load the model:

```python
import uform

model = uform.get_model('unum-cloud/uform-vl-english-small')
```

To encode data:

```python
from PIL import Image

text = 'a small red panda in a zoo'
image = Image.open('red_panda.jpg')

image_data = model.preprocess_image(image)
text_data = model.preprocess_text(text)

image_embedding = model.encode_image(image_data)
text_embedding = model.encode_text(text_data)
joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
```
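
The unimodal embeddings land in the shared vector space; for a single input, each should be a `(1, 256)` tensor, matching the embedding size listed above:

```python
# The unimodal embeddings live in the shared 256-dimensional space,
# so the two modalities can be compared to each other directly.
print(image_embedding.shape)  # expected: torch.Size([1, 256])
print(text_embedding.shape)   # expected: torch.Size([1, 256])
```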

To get features:

```python
image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
```

These features can later be used to produce joint multimodal encodings faster, as the first layers of the transformer can be skipped:

```python
joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask']
)
```
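
To gauge the saving, one can time both paths on the same inputs. A rough sketch, using nothing but wall-clock timing of the calls shown above:

```python
import time

# Full joint encoding: runs the unimodal layers first.
start = time.perf_counter()
model.encode_multimodal(image=image_data, text=text_data)
print(f'from raw inputs:      {time.perf_counter() - start:.3f}s')

# Joint encoding from cached features: the unimodal layers are skipped.
start = time.perf_counter()
model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask']
)
print(f'from cached features: {time.perf_counter() - start:.3f}s')
```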

There are two options to calculate semantic compatibility between an image and a text: [Cosine Similarity](#cosine-similarity) and [Matching Score](#matching-score).

### Cosine Similarity

```python
import torch.nn.functional as F

similarity = F.cosine_similarity(image_embedding, text_embedding)
```

The `similarity` will fall in the `[-1, 1]` range, with `1` meaning a perfect match.

__Pros__:

- Computationally cheap.
- Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
- Suitable for retrieval in large collections, as sketched below.

__Cons__:

- Takes into account only coarse-grained features.

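For retrieval, cosine similarity against an entire collection reduces to a single matrix product over L2-normalized embeddings. A minimal sketch, assuming a pre-computed `image_embeddings` matrix (random here, purely for illustration) and the `text_embedding` from above:

```python
import torch
import torch.nn.functional as F

# Hypothetical collection: one pre-computed 256-dim embedding per image.
image_embeddings = torch.randn(10_000, 256)

# After L2 normalization, cosine similarity is a plain dot product,
# so one matrix product scores the whole collection at once.
image_embeddings = F.normalize(image_embeddings, dim=-1)
query = F.normalize(text_embedding, dim=-1)

scores = query @ image_embeddings.T          # shape: (1, 10_000)
top_scores, top_indices = scores.topk(k=10)  # 10 best candidates
```
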
### Matching Score

Unlike cosine similarity, unimodal embeddings are not enough here.
A joint embedding is needed, and the resulting `score` will fall in the `[0, 1]` range, with `1` meaning a perfect match.

```python
score = model.get_matching_scores(joint_embedding)
```

__Pros__:

- Joint embedding captures fine-grained features.
- Suitable for re-ranking, i.e. sorting retrieval results, as sketched below.

__Cons__:

- Resource-intensive.
- Not suitable for retrieval in large collections.
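
The two scores are complementary, so a common pattern is to combine them: retrieve a shortlist with cheap cosine similarity, then re-rank only that shortlist with the matching score. A sketch under the same assumptions as above, plus a hypothetical `all_image_features` cache of per-image unimodal features (both names are illustrative, not part of the API):

```python
import torch.nn.functional as F

# Stage 1: cheap cosine-similarity retrieval over the whole collection.
scores = F.normalize(text_embedding, dim=-1) @ image_embeddings.T
_, candidate_indices = scores.topk(k=100)

# Stage 2: expensive joint encoding, but only for the 100 candidates.
reranked = []
for idx in candidate_indices[0].tolist():
    joint = model.encode_multimodal(
        image_features=all_image_features[idx:idx + 1],  # hypothetical cache
        text_features=text_features,
        attention_mask=text_data['attention_mask'],
    )
    reranked.append((idx, model.get_matching_scores(joint).item()))

# Highest matching score first.
reranked.sort(key=lambda pair: pair[1], reverse=True)
```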