---
license: cc-by-4.0
dataset_info:
  features:
    - name: SAMPLE_ID
      dtype: int64
    - name: URL
      dtype: string
    - name: TEXT
      dtype: string
    - name: HEIGHT
      dtype: float64
    - name: WIDTH
      dtype: float64
    - name: LICENSE
      dtype: string
    - name: NSFW
      dtype: string
    - name: similarity
      dtype: float64
    - name: ase_scores
      dtype: float64
    - name: kmeans
      dtype: int64
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 28506248899
      num_examples: 107166507
  download_size: 16353125308
  dataset_size: 28506248899
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# 100M Text-Debiased Subset from LAION-2B

- Captions in LAION-2B are heavily biased towards describing the visual text embedded in images.
- Released CLIP models show a strong text spotting bias across almost every style of web image, so datasets curated by CLIP-score filtering are inherently biased towards visual-text-dominant data.
- CLIP models readily learn text spotting from such parrot captions while failing to connect vision-language semantics, just like a text-spotting parrot.

For more details, please see our paper, [Parrot Captions Teach CLIP to Spot Text](https://arxiv.org/abs/2312.14232).

## Filtering Details

We provide an alternative by releasing a less biased, filtered 100M (107,166,507 samples) subset of LAION-2B.
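A minimal sketch of loading the subset with the Hugging Face `datasets` library; the repository id below is a placeholder for this dataset's actual path, and streaming avoids downloading all of the parquet shards up front:

```python
from datasets import load_dataset

# "user/laion2b-100m-text-debiased" is a placeholder repo id; substitute
# the actual Hugging Face path of this dataset.
ds = load_dataset("user/laion2b-100m-text-debiased",
                  split="train", streaming=True)

# Each record follows the schema declared in the metadata header above.
sample = next(iter(ds))
print(sample["URL"], sample["TEXT"])
print("CLIP similarity:", sample["similarity"])
```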

The subset is constructed from LAION-2B by keeping only pairs with empty OCR results, a CLIP score > 0.3, and an aesthetics score > 4.5.
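For illustration, a sketch of how such a filter could be applied to LAION-2B-style metadata with pandas; the `ocr_text` and `aesthetic` column names are assumptions for this sketch, not the released schema:

```python
import pandas as pd

# Hypothetical shard of LAION-2B metadata. "similarity" is the CLIP
# image-text score; "ocr_text" and "aesthetic" are assumed column names
# for the OCR results and aesthetics predictions used in filtering.
df = pd.read_parquet("laion2b_metadata_shard_0000.parquet")

mask = (
    (df["ocr_text"].fillna("") == "")  # keep pairs with empty OCR results
    & (df["similarity"] > 0.3)         # CLIP score threshold
    & (df["aesthetic"] > 4.5)          # aesthetics score threshold
)
subset = df[mask]
print(f"kept {len(subset):,} of {len(df):,} pairs")
```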

We also add an aesthetics score (`ase_scores`) and a K-means cluster label (`kmeans`, 4,000 clusters in total) for each image-text pair.
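As a quick sketch of using these extra fields, the snippet below profiles cluster sizes and the mean aesthetics score over a streamed sample (the repo id is again a placeholder):

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("user/laion2b-100m-text-debiased",
                  split="train", streaming=True)

# Profile the K-means cluster labels (4,000 clusters) and aesthetics
# scores over the first 100k records.
records = list(islice(iter(ds), 100_000))
cluster_sizes = Counter(rec["kmeans"] for rec in records)
mean_ase = sum(rec["ase_scores"] for rec in records) / len(records)

print("10 largest clusters:", cluster_sizes.most_common(10))
print(f"mean aesthetics score: {mean_ase:.3f}")
```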

The dataset is also released on [OpenDataLab](https://opendatalab.com).

The pre-trained CLIP model is released on GitHub.
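If the released checkpoint is in OpenCLIP format, loading it might look like the sketch below; the architecture name and checkpoint path are assumptions, so follow the instructions in the GitHub repository for actual usage:

```python
import open_clip

# "ViT-B-32" and the checkpoint filename are illustrative assumptions;
# use the architecture and weights published with the release.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="checkpoints/parrot_debiased_clip.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()
```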

## Reference

```bibtex
@article{lin2023parrot,
  title={Parrot Captions Teach CLIP to Spot Text},
  author={Yiqi Lin and Conghui He and Alex Jinpeng Wang and Bin Wang and Weijia Li and Mike Zheng Shou},
  journal={arXiv preprint arXiv:2312.14232},
  year={2023}
}

@misc{conghui2022opendatalab,
  author={He, Conghui and Li, Wei and Jin, Zhenjiang and Wang, Bin and Xu, Chao and Lin, Dahua},
  title={OpenDataLab: Empowering General Artificial Intelligence with Open Datasets},
  howpublished={\url{https://opendatalab.com}},
  year={2022}
}
```