|
---
task_categories:
- zero-shot-classification
size_categories:
- 1M<n<10M
---
|
|
|
This repository hosts the JANuS (Joint Annotations and Names) dataset, introduced in the 2023 paper [Distributionally Robust Classification on a Data Budget](https://openreview.net/forum?id=D5Z2E8CNsD).
|
|
|
As of this writing, JANuS is the only public dataset that is both fully annotated with ground-truth labels and fully captioned with web-scraped captions.
|
|
|
It is designed to be used for controlled experiments with vision-language models. |
|
|
|
## What is in JANuS?
|
|
|
JANuS provides metadata and image links for four new training datasets, all of which are designed for evaluation on a 100-class subset of ImageNet-1k.
|
|
|
Each dataset in JANuS is either a subset or a superset of an existing dataset, and each is fully captioned and fully labeled, using either annotated or synthetic labels.
|
|
|
For additional details on our methodology for gathering JANuS, as well as explanations of terms like "subset matching", please refer to our paper. |
|
|
|
1. **ImageNet-100:** A superset of ImageNet with over 50,000 newly annotated samples, including Flickr captions and BLIP captions.
|
|
|
2. **OpenImages-100:** A subset of OpenImages with new mappings from OpenImages to ImageNet classes, restored original Flickr captions, and new BLIP captions.
|
|
|
3. **LAION-100:** A subset of LAION-15m with samples selected via subset matching.
|
|
|
4. **YFCC-100:** A subset of YFCC-15m with samples selected via subset matching.
|
|
|
## Training on JANuS
|
|
|
JANuS is designed to let researchers easily compare the effects of different labeling strategies on model performance. As such, every dataset in JANuS includes at least two labeling sources.
|
|
|
* **idx** labels are integers mapping to [ImageNet-1k class labels](https://deeplearning.cms.waikato.ac.nz/user-guide/class-maps/IMAGENET/)

* **caption** labels are natural language captions (usually in English), suitable for training VL-loss models like [CLIP](https://openai.com/blog/clip/); the sketch below shows both label types in use
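
As a rough illustration, here is how one of the JANuS metadata spreadsheets might be loaded and both labeling sources pulled out. The file name and column names (`idx`, `caption`, `url`) are assumptions for illustration; consult the column key in our paper for the actual schema.

```python
import pandas as pd

# Hypothetical file and column names -- consult the paper's column key
# for the actual schema of the JANuS metadata spreadsheets.
df = pd.read_csv("metadata/imagenet-100.csv")

# "idx" labels: integers mapping to ImageNet-1k classes, suitable for
# standard cross-entropy classification training.
idx_labels = df["idx"].astype(int)

# "caption" labels: natural-language captions, suitable for contrastive
# VL-loss training (e.g. CLIP).
captions = df["caption"].astype(str)

# Each row also carries an image link; pair it with either label type
# depending on the training objective under study.
for url, idx, caption in zip(df["url"], idx_labels, captions):
    pass  # fetch `url`, then train on (image, idx) or (image, caption)
```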
|
|
|
For YFCC-100 and LAION-100, the idx labels are synthetic, generated via a simple subset-matching strategy. For ImageNet-100 and OpenImages-100, the idx labels are human-annotated.
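
To give a flavor of what a subset-matching rule can look like, here is a minimal sketch: a caption receives an idx label only if exactly one class name occurs in it as a substring. This is an illustration under assumed rules, not our exact procedure; the precise matching strategy is described in the paper.

```python
def subset_match(caption, class_names):
    """Assign an ImageNet class index to a caption if exactly one class
    name from `class_names` (dict: index -> name) occurs in it as a
    substring; otherwise return None (the sample gets no idx label).

    Illustrative only -- the exact matching rules are in the paper.
    """
    text = caption.lower()
    hits = [idx for idx, name in class_names.items() if name.lower() in text]
    return hits[0] if len(hits) == 1 else None

# Toy class map; real names come from the ImageNet-1k class list.
classes = {0: "tench", 1: "goldfish"}
print(subset_match("a goldfish swimming in a bowl", classes))  # 1
print(subset_match("a tench beside a goldfish", classes))      # None (ambiguous)
```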
|
|
|
YFCC-100, ImageNet-100, and OpenImages-100 contain captions sourced from Flickr; LAION-100 contains captions sourced from alt-text descriptions.
|
|
|
Additional labeling sources are available for some of the datasets in JANuS; please see our paper for a key to all of the columns in the metadata spreadsheets.
|
|
|
[VL Hub](https://github.com/penfever/vlhub/), a framework for vision-language model training, can be used to reproduce the experiments in our paper.
|
|
|
## Evaluation on JANuS
|
|
|
Evaluation methods for JANuS models are the same as those for ImageNet models, except that we evaluate only on a 100-class subset of the ImageNet classes.
|
|
|
For details on which classes are included in JANuS, please see `metadata/in100_classes.txt` in this repository.
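
Concretely, a model that emits standard 1000-way ImageNet logits can be scored on JANuS by masking its logits down to the 100 JANuS classes before taking the argmax. The sketch below assumes the class file lists one class name per line and that the caller supplies a name-to-index map; check the file in this repo for its actual format.

```python
import torch

def load_in100_indices(name_to_idx, path="metadata/in100_classes.txt"):
    """Resolve the JANuS classes to ImageNet-1k indices.

    Assumes one class name per line and a caller-supplied
    name -> ImageNet-1k index mapping; the file's actual format
    should be checked against this repo.
    """
    with open(path) as f:
        names = [line.strip() for line in f if line.strip()]
    return torch.tensor(sorted(name_to_idx[name] for name in names))

@torch.no_grad()
def top1_accuracy(model, loader, in100_indices):
    """Top-1 accuracy with 1000-way logits restricted to the JANuS subset.

    `loader` yields (images, labels), where each label is a position
    (0..99) within `in100_indices`.
    """
    correct = total = 0
    for images, labels in loader:
        logits = model(images)[:, in100_indices]  # (batch, 100)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total
```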
|
|
|
## Citations |
|
|
|
If you find our dataset useful, please cite our paper:
|
|
|
```
@article{feuer2023distributionally,
  title={Distributionally Robust Classification on a Data Budget},
  author={Benjamin Feuer and Ameya Joshi and Minh Pham and Chinmay Hegde},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2023},
  url={https://openreview.net/forum?id=D5Z2E8CNsD},
  note={}
}
```