---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
task_categories:
- text-to-image
- image-to-text
language:
- en
---

# Dataset Card for DataComp-12M

<!-- Provide a quick summary of the dataset. -->

This dataset contains the UIDs of DataComp-12M, a 12M subset of [DataComp-1B-BestPool](https://huggingface.co/datasets/mlfoundations/datacomp_1b).
Image-text models trained on DataComp-12M are significantly better than those trained on CC-12M/YFCC-15M, as well as on DataComp-Small/Medium.
For details on this dataset and the improved [DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M),
please see our [MobileCLIP paper](https://arxiv.org/abs/2311.17049).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

DataCompDR is an image-text dataset and an enhancement to the DataComp dataset.
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy.
In particular, we create DataCompDR-1B and DataCompDR-12M by reinforcing DataComp-1B (BestPool filtering) and a uniform 12.8M-sample subset of it, respectively.
The generation process is run once, and its cost is amortized over multiple architectures and extensive ablations.
We generate 5 synthetic captions per image using the `coca_ViT-L-14` model in OpenCLIP, and apply strong random image augmentations (10 per image for DataCompDR-1B and 30 for DataCompDR-12M).
We compute embeddings of an ensemble of two strong teachers (`ViT-L-14` with pretrained weights `datacomp_xl_s13b_b90k` and `openai` in OpenCLIP) on the augmented images as well as on the real and synthetic captions.
Embeddings are 1536-D concatenations of two 768-D vectors.
One seen sample for DataCompDR is a triplet of one randomly augmented image, one ground-truth caption, and one randomly picked synthetic caption.
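
The snippet below is a minimal, illustrative sketch (not our generation pipeline) of how such an ensemble teacher embedding can be computed with OpenCLIP. The image path and caption are placeholders, and batching, the random augmentations, and storage of the reinforced metadata are omitted.

```python
import torch
import open_clip
from PIL import Image

# Load the two teacher models named above; each is a ViT-L-14 with a 768-D embedding.
teachers = []
for tag in ("datacomp_xl_s13b_b90k", "openai"):
    model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained=tag)
    model.eval()
    teachers.append((model, preprocess))
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = Image.open("example.jpg")          # placeholder image
caption = "a photo of a dog on the beach"  # placeholder ground-truth or synthetic caption

with torch.no_grad():
    # Concatenate the per-teacher embeddings: 2 x 768-D -> 1536-D.
    image_emb = torch.cat(
        [m.encode_image(p(image).unsqueeze(0)) for m, p in teachers], dim=-1
    )
    text_emb = torch.cat(
        [m.encode_text(tokenizer([caption])) for m, _ in teachers], dim=-1
    )

print(image_emb.shape, text_emb.shape)  # both torch.Size([1, 1536])
```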

- **Curated by:** Original data by [DataComp](https://www.datacomp.ai/) and metadata by Apple.
- **License:** We distribute our metadata under our [license](https://github.com/apple/ml-mobileclip/blob/main/LICENSE). The original image URL-text samples and metadata were released by [DataComp](https://www.datacomp.ai/) under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
- **Repository:** [ml-mobileclip GitHub](https://github.com/apple/ml-mobileclip)
- **Paper:** [MobileCLIP paper](https://arxiv.org/abs/2311.17049)
- **Demo:** Coming Soon

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

Training with DataCompDR shows a significant improvement in learning efficiency compared to standard CLIP training.
For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16-based CLIP from scratch on DataCompDR-12M.
Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2 of the paper) while still using a fraction of the training compute budget compared to previous works.
Using DataCompDR, we demonstrate 10x-1000x learning efficiency in comparison to DataComp.
50
+
51
+ ## Dataset Structure
52
+
53
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
54
+
55
+ ```
56
+ - uids.txt: List of 12779520 (65536*195) UIDs, one UID per line.
57
+ - uids.npy: List of 12779520 (65536*195) UIDs as a NumPy array of type `numpy.dtype("u8,u8")`.
58
+ ```
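
As a reference, here is a small, hypothetical snippet for reading the UID list with NumPy. It assumes the two `uint64` fields of each record hold the first and second halves of the 32-character hexadecimal UID, following the convention used by the DataComp tooling; verify against that tooling before relying on the ordering.

```python
import numpy as np

# Load the structured array of UIDs; each entry is a pair of unsigned 64-bit integers.
uids = np.load("uids.npy")
assert uids.dtype == np.dtype("u8,u8")
assert len(uids) == 65536 * 195  # 12779520 UIDs

# Reassemble one (high, low) pair into a 32-character hex UID string,
# e.g. for matching against DataComp-1B metadata.
def to_hex(uid):
    return f"{int(uid[0]):016x}{int(uid[1]):016x}"

print(to_hex(uids[0]))
```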

## Citation

**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf). (CVPR 2024)**
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.*

```bibtex
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}
```