nielsr and merve committed
Commit daf2a0a
1 Parent(s): 72fdc69

Add model card (#1)

- Add model card (293925cbbc2a43154faca05c8dd27be56b7ffae5)

Co-authored-by: Merve Noyan <merve@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +131 -3
README.md CHANGED
@@ -6,12 +6,140 @@ tags:
  license: cc-by-nc-sa-4.0
  ---

- ## ImageBind-Huge model

- Here's how to use it:

  ```python
  from imagebind.models.imagebind_model import ImageBindModel

- reloaded_model = ImageBindModel.from_pretrained("nielsr/imagebind-huge")
  ```

+ # ImageBind: One Embedding Space To Bind Them All
+
+ **[FAIR, Meta AI](https://ai.facebook.com/research/)**
+
+ To appear at CVPR 2023 (*Highlighted paper*)
+
+ [[`Paper`](https://facebookresearch.github.io/ImageBind/paper)] [[`Blog`](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/)] [[`Demo`](https://imagebind.metademolab.com/)] [[`Supplementary Video`](https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4)] [[`BibTex`](#citation)]
+
+ PyTorch implementation and pretrained models for ImageBind. For details, see the paper: **[ImageBind: One Embedding Space To Bind Them All](https://facebookresearch.github.io/ImageBind/paper)**.
+
+ ImageBind learns a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. It enables novel emergent applications ‘out-of-the-box’, including cross-modal retrieval, composing modalities with arithmetic, and cross-modal detection and generation.
+
+ ![ImageBind](https://user-images.githubusercontent.com/8495451/236859695-ffa13364-3e39-4d99-a8da-fbfab17f9a6b.gif)
+
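For example, composing modalities with arithmetic amounts to adding embeddings from two modalities and retrieving with the result. Below is a minimal sketch with placeholder tensors standing in for ImageBind outputs; the names `image_emb`, `audio_emb`, `gallery_embs` and the embedding size are illustrative assumptions, not part of the original card:

```python
import torch
import torch.nn.functional as F

# Placeholder embeddings standing in for ImageBind outputs (illustrative 1024-dim size).
image_emb = F.normalize(torch.randn(1, 1024), dim=-1)       # e.g. a photo of a beach
audio_emb = F.normalize(torch.randn(1, 1024), dim=-1)       # e.g. the sound of rain
gallery_embs = F.normalize(torch.randn(100, 1024), dim=-1)  # a gallery of candidate images

# Compose the two queries by adding their embeddings and re-normalizing ...
query = F.normalize(image_emb + audio_emb, dim=-1)

# ... then retrieve the gallery item most similar to the composed query.
scores = query @ gallery_embs.T  # cosine similarities, shape (1, 100)
best_idx = scores.argmax(dim=-1).item()
print(best_idx, scores[0, best_idx].item())
```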
+ ## ImageBind model
+
+ Emergent zero-shot classification performance.
+
+ <table style="margin: auto">
+ <tr>
+ <th>Model</th>
+ <th><span style="color:blue">IN1k</span></th>
+ <th><span style="color:purple">K400</span></th>
+ <th><span style="color:green">NYU-D</span></th>
+ <th><span style="color:LightBlue">ESC</span></th>
+ <th><span style="color:orange">LLVIP</span></th>
+ <th><span style="color:purple">Ego4D</span></th>
+ </tr>
+ <tr>
+ <td>imagebind_huge</td>
+ <td align="right">77.7</td>
+ <td align="right">50.0</td>
+ <td align="right">54.0</td>
+ <td align="right">66.9</td>
+ <td align="right">63.4</td>
+ <td align="right">25.0</td>
+ </tr>
+ </table>
+
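These are "emergent" zero-shot results: ImageBind is trained only on image-paired data, yet class-name text embeddings can still be matched against embeddings from the other modalities. A rough sketch of the scoring recipe, using placeholder tensors rather than evaluation code from this repository (the usage example further down performs the same comparison for three classes):

```python
import torch
import torch.nn.functional as F

def zero_shot_scores(sample_emb: torch.Tensor, class_text_embs: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between one sample embedding and one text embedding per class."""
    sample_emb = F.normalize(sample_emb, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    return sample_emb @ class_text_embs.T

# Placeholder tensors; in practice these would come from ImageBind's text tower
# (e.g. prompts such as "a photo of a dog") and from the vision/audio/depth tower.
class_text_embs = torch.randn(1000, 1024)
sample_emb = torch.randn(1, 1024)

predicted_class = zero_shot_scores(sample_emb, class_text_embs).argmax(dim=-1).item()
print(predicted_class)
```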
+ ## Usage
+
+ Install PyTorch 1.13+ and the other third-party dependencies.
+
+ ```shell
+ conda create --name imagebind python=3.8 -y
+ conda activate imagebind
+
+ pip install .
+ ```
+
+ For Windows users, you might need to install `soundfile` for reading/writing audio files. (Thanks @congyue1977)
+
+ ```
+ pip install soundfile
+ ```
+
+ Extract and compare features across modalities (e.g. image, text and audio).
 
  ```python
+ from imagebind import data
+ import torch
+ from imagebind.models import imagebind_model
+ from imagebind.models.imagebind_model import ModalityType
  from imagebind.models.imagebind_model import ImageBindModel

+ text_list=["A dog.", "A car", "A bird"]
+ image_paths=[".assets/dog_image.jpg", ".assets/car_image.jpg", ".assets/bird_image.jpg"]
+ audio_paths=[".assets/dog_audio.wav", ".assets/car_audio.wav", ".assets/bird_audio.wav"]
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+
+ model = ImageBindModel.from_pretrained("nielsr/imagebind-huge")
+ model.eval()
+ model.to(device)
+
+ # Load data
+ inputs = {
+     ModalityType.TEXT: data.load_and_transform_text(text_list, device),
+     ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
+     ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device),
+ }
+
+ with torch.no_grad():
+     embeddings = model(inputs)
+
+ print(
+     "Vision x Text: ",
+     torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1),
+ )
+ print(
+     "Audio x Text: ",
+     torch.softmax(embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.TEXT].T, dim=-1),
+ )
+ print(
+     "Vision x Audio: ",
+     torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.AUDIO].T, dim=-1),
+ )
+
+ # Expected output:
+ #
+ # Vision x Text:
+ # tensor([[9.9761e-01, 2.3694e-03, 1.8612e-05],
+ #         [3.3836e-05, 9.9994e-01, 2.4118e-05],
+ #         [4.7997e-05, 1.3496e-02, 9.8646e-01]])
+ #
+ # Audio x Text:
+ # tensor([[1., 0., 0.],
+ #         [0., 1., 0.],
+ #         [0., 0., 1.]])
+ #
+ # Vision x Audio:
+ # tensor([[0.8070, 0.1088, 0.0842],
+ #         [0.1036, 0.7884, 0.1079],
+ #         [0.0018, 0.0022, 0.9960]])
+
+ ```
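The same `embeddings` dictionary can be reused for cross-modal retrieval, e.g. ranking the example images for each audio clip. A small follow-up sketch, assuming the variables from the snippet above are still in scope:

```python
# Rank the example images for each audio clip, mirroring the dot-product
# comparisons in the snippet above.
sims = embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.VISION].T
ranking = sims.argsort(dim=-1, descending=True)
for i, audio_path in enumerate(audio_paths):
    ranked_images = [image_paths[j] for j in ranking[i].tolist()]
    print(audio_path, "->", ranked_images)
```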
+
+ ## License
+
+ ImageBind code and model weights are released under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for additional details.
+
+ ## Citation
+
+ ```
+ @inproceedings{girdhar2023imagebind,
+   title={ImageBind: One Embedding Space To Bind Them All},
+   author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang
+     and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan},
+   booktitle={CVPR},
+   year={2023}
+ }
  ```