penfever committed
Commit b22ce1b
1 Parent(s): 86c2401

Update README.md

Files changed (1): README.md (+43 -1)

README.md CHANGED

JANuS is designed to be used for controlled experiments with vision-language models.
## What is in JANuS?

JANuS provides metadata and image links for four new training datasets; all of these datasets are designed for evaluation on a subset of 100 classes chosen from ImageNet-1000.

Each dataset in JANuS is either a subset or a superset of an existing dataset, and each is fully captioned and fully labeled, using either annotated or synthetic labels.

For additional details on our methodology for gathering JANuS, as well as explanations of terms like "subset matching", please refer to our paper.

1. **ImageNet-100:** A superset of ImageNet with over 50,000 newly annotated samples, including Flickr captions and BLIP captions.

2. **OpenImages-100:** A subset of OpenImages with new mappings from OpenImages classes to ImageNet classes, restored original Flickr captions, and new BLIP captions.

3. **LAION-100:** A subset of LAION-15m with samples selected via subset matching.

4. **YFCC-100:** A subset of YFCC-15m with samples selected via subset matching.

## Training on JANuS

JANuS is designed to allow researchers to easily compare the effects of different labeling strategies on model performance. As such, every dataset in JANuS includes at least two labeling sources:

* **idx** labels are integer indices mapping to [ImageNet-1k class labels](https://deeplearning.cms.waikato.ac.nz/user-guide/class-maps/IMAGENET/).
* **caption** labels are natural-language captions (usually in English), suitable for training VL-loss models like [CLIP](https://openai.com/blog/clip/).
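
As a rough illustration of how these two labeling sources can be consumed, the sketch below loads one metadata spreadsheet and builds (image, label) pairs for each training style. The file name and column names (`url`, `idx`, `caption`) are illustrative assumptions rather than a guaranteed schema; see the column key in our paper.

```python
import pandas as pd

# Minimal sketch: read one JANuS metadata spreadsheet and pull out the two
# labeling sources. File and column names are illustrative placeholders;
# consult the column key in the paper for the actual schema.
df = pd.read_csv("imagenet-100.csv")

# idx labels: integer ImageNet-1k class indices, usable as targets for a
# standard cross-entropy classifier over the 100-class subset.
classification_pairs = list(zip(df["url"], df["idx"]))

# caption labels: free-form text, usable as the text side of a CLIP-style
# (VL-loss) image-text training pair.
contrastive_pairs = list(zip(df["url"], df["caption"]))

print(classification_pairs[:3])
print(contrastive_pairs[:3])
```
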
For YFCC-100 and LAION-100, the idx labels are synthetic, generated via a simple subset matching strategy. For ImageNet-100 and OpenImages-100, the idx labels are annotated by humans.
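
The exact matching rules are defined in the paper; the sketch below shows one simple form of subset matching, in which a synthetic idx label is assigned when a class synonym appears as a substring of the caption. The tiny synonym table is a stand-in for the real one.

```python
from typing import Optional

# Toy synonym table: ImageNet-1k class index -> strings that count as a match.
# The real synonym sets and matching rules are those described in the paper.
CLASS_SYNONYMS = {
    207: ["golden retriever"],
    281: ["tabby cat", "tabby"],
}

def subset_match(caption: str) -> Optional[int]:
    """Return a synthetic idx label when exactly one class matches the caption."""
    caption = caption.lower()
    hits = [idx for idx, names in CLASS_SYNONYMS.items()
            if any(name in caption for name in names)]
    # Keep the sample only when the match is unambiguous.
    return hits[0] if len(hits) == 1 else None

print(subset_match("a golden retriever playing in the snow"))  # 207
print(subset_match("a photo of my garden"))                    # None
```
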
YFCC-100, ImageNet-100, and OpenImages-100 contain captions sourced from Flickr; LAION-100 contains captions sourced from alt-text descriptions.

Additional labeling sources are available for some of the datasets in JANuS; please consult our paper for a reference key to all of the columns in the spreadsheets.

[VL Hub](https://github.com/penfever/vlhub/), a framework for vision-language model training, can be used to reproduce the experiments in our paper.

## Evaluation on JANuS

Evaluation methods for JANuS models are the same as those for ImageNet models, except that we evaluate only on a subset of all ImageNet classes.

For details on which classes are included in JANuS, please see `metadata/in100_classes.txt` in this repo.
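
A minimal sketch of this style of evaluation, assuming a standard 1000-way ImageNet classifier and that the subset's class indices have already been parsed from `metadata/in100_classes.txt` (the exact file format is documented in this repo):

```python
import torch

def subset_accuracy(logits: torch.Tensor, targets: torch.Tensor,
                    class_subset: list) -> float:
    """Top-1 accuracy of a 1000-way classifier, restricted to a class subset.

    logits:       (N, 1000) scores from an ImageNet-1k model
    targets:      (N,) ground-truth ImageNet-1k class indices
    class_subset: the ImageNet indices used by JANuS, e.g. parsed from
                  metadata/in100_classes.txt (format per this repo)
    """
    keep = torch.tensor(class_subset)
    # Score only the subset classes, then map the argmax back to ImageNet indices.
    preds = keep[logits[:, keep].argmax(dim=1)]
    return (preds == targets).float().mean().item()
```

Masking the logits this way lets a stock ImageNet-1k model be compared directly against models trained only on the JANuS classes.
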
## Citations

If you find our dataset useful, please cite our paper --