---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-2B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 2B images that were filtered from a pool of 12.8B uncurated image-text pairs
(CommonPool-12.8B).

These weights are directly usable in OpenCLIP (image + text).
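
As a rough illustration of the filtering idea, a DFN scores each uncurated image-text pair by image-text similarity, and only the highest-scoring pairs are kept for training. The sketch below is a minimal, hypothetical version of that selection step, assuming a small CLIP-style OpenCLIP model as the filtering network; `dfn_filter` and `keep_fraction` are illustrative names, not part of the released pipeline.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dfn_filter(dfn_model, images, texts, keep_fraction=0.15):
    """Keep the image-text pairs the DFN scores highest (hypothetical sketch).

    images: preprocessed image batch (N, 3, H, W); texts: tokenized text (N, L).
    keep_fraction is illustrative; the actual threshold is tuned to the data pool.
    """
    img_feats = F.normalize(dfn_model.encode_image(images), dim=-1)
    txt_feats = F.normalize(dfn_model.encode_text(texts), dim=-1)
    scores = (img_feats * txt_feats).sum(dim=-1)     # per-pair cosine similarity
    k = max(1, int(keep_fraction * scores.numel()))  # number of pairs to keep
    return scores.topk(k).indices                    # indices into the candidate pool
```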

## Model Details

- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-2B
- **Papers:**
  - Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Examples Seen:** 12.8B

## Model Metrics
| Dataset                | Metric   |
|:-----------------------|---------:|
| ImageNet 1k            | 0.76236  |
| Caltech-101            | 0.942894 |
| CIFAR-10               | 0.9672   |
| CIFAR-100              | 0.8347   |
| CLEVR Counts           | 0.232333 |
| CLEVR Distance         | 0.245267 |
| Country211             | 0.19545  |
| Describable Textures   | 0.575532 |
| EuroSAT                | 0.54     |
| FGVC Aircraft          | 0.248503 |
| Food-101               | 0.91303  |
| GTSRB                  | 0.469913 |
| ImageNet Sketch        | 0.620684 |
| ImageNet v2            | 0.682    |
| ImageNet-A             | 0.482133 |
| ImageNet-O             | 0.493    |
| ImageNet-R             | 0.830967 |
| KITTI Vehicle Distance | 0.192686 |
| MNIST                  | 0.782    |
| ObjectNet              | 0.631851 |
| Oxford Flowers-102     | 0.819895 |
| Oxford-IIIT Pet        | 0.936907 |
| Pascal VOC 2007        | 0.788528 |
| PatchCamelyon          | 0.521545 |
| Rendered SST2          | 0.486546 |
| RESISC45               | 0.61381  |
| Stanford Cars          | 0.90735  |
| STL-10                 | 0.97525  |
| SUN397                 | 0.714162 |
| SVHN                   | 0.598955 |
| Flickr                 | 0.7728   |
| MSCOCO                 | 0.518773 |
| WinoGAViL              | 0.541748 |
| iWildCam               | 0.155574 |
| Camelyon17             | 0.499283 |
| FMoW                   | 0.141149 |
| Dollar Street          | 0.625    |
| GeoDE                  | 0.891023 |
| **Average**            | **0.609232** |

## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model and its preprocessing transform from the Hugging Face Hub.
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-B-16')
tokenizer = get_tokenizer('ViT-B-16')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Standard CLIP readout: softmax over temperature-scaled cosine similarities.
    # (This is a contrastive CLIP model, not SigLIP; it has no logit bias, so a
    # sigmoid readout with `model.logit_bias` does not apply here.)
    text_probs = (image_features @ text_features.T * model.logit_scale.exp()).softmax(dim=-1)

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
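
Zero-shot benchmark numbers like those above are typically obtained with prompt templates rather than bare class names. A minimal, hypothetical variant of the snippet that wraps each label in a single "a photo of ..." template (reusing `model`, `tokenizer`, `image_features`, and `labels_list` from above):

```python
# Hypothetical variant: template the labels before tokenizing, as is common
# practice for CLIP zero-shot classification.
templated = [f"a photo of {label}" for label in labels_list]
text = tokenizer(templated, context_length=model.context_length)

with torch.no_grad():
    text_features = F.normalize(model.encode_text(text), dim=-1)
    text_probs = (image_features @ text_features.T * model.logit_scale.exp()).softmax(dim=-1)

print(list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]])))
```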

## Citation
```bibtex
@article{fang2023data,
  title={Data Filtering Networks},
  author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
  journal={arXiv preprint arXiv:2309.17425},
  year={2023}
}
```