|
--- |
|
language: |
|
- en |
|
license: cc-by-4.0 |
|
size_categories: |
|
- 100M<n<1B |
|
pretty_name: OBELISC |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
- config_name: opt_out_docs_removed |
|
data_files: |
|
- split: train |
|
path: opt_out_docs_removed/train-* |
|
dataset_info: |
|
- config_name: default |
|
features: |
|
- name: images |
|
sequence: string |
|
- name: metadata |
|
dtype: string |
|
- name: general_metadata |
|
dtype: string |
|
- name: texts |
|
sequence: string |
|
splits: |
|
- name: train |
|
num_bytes: 715724717192 |
|
num_examples: 141047697 |
|
download_size: 71520629655 |
|
dataset_size: 715724717192 |
|
- config_name: opt_out_docs_removed |
|
features: |
|
- name: images |
|
sequence: string |
|
- name: metadata |
|
dtype: string |
|
- name: general_metadata |
|
dtype: string |
|
- name: texts |
|
sequence: string |
|
splits: |
|
- name: train |
|
num_bytes: 684638314215 |
|
num_examples: 134648855 |
|
download_size: 266501092920 |
|
dataset_size: 684638314215 |
|
--- |
|
# Dataset Card for OBELISC |
|
|
|
## Dataset Description |
|
|
|
- **Repository:** https://github.com/huggingface/OBELISC
- **Paper:** OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
- **Point of Contact:** hugo@huggingface.co
|
|
|
### Dataset Summary |
|
|
|
`OBELISC` is an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images. |
|
|
|
This dataset can be used to train large multimodal models, significantly improving their reasoning abilities compared to models trained solely on image/text pairs. Please refer to our paper for further details about the construction of the dataset, quantitative and qualitative analyses of `OBELISC`, and experiments we conducted. |
|
|
|
### Languages |
|
|
|
English |
|
|
|
## Data Fields |
|
|
|
There are 4 fields: `images`, `texts`, `metadata` and `general_metadata`. |
|
|
|
For each example, the data in the columns `images` and `texts` are two lists of the same size, where, for each index, exactly one of the two elements is not `None`.

For example, for the web document `<image_1>text<image_2>`, `images` contains `[image_1, None, image_2]` and `texts` contains `[None, text, None]`.
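For illustration, here is a minimal sketch of walking through one document in its original order (indexing into the first row is an arbitrary choice for the example):

```
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELISC", split="train")

# Walk one example: at each position, exactly one of the two entries is set.
example = ds[0]
for image_url, text in zip(example["images"], example["texts"]):
    if image_url is not None:
        print(f"<image: {image_url}>")  # images are stored as their URLs
    else:
        print(text)
```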
|
|
|
The images are replaced by their URLs, and users have to download them themselves, for example with the library `img2dataset`.
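As a sketch of such a download (the input file name and all parameter values below are illustrative assumptions, not recommendations):

```
from img2dataset import download

# Assumes image_urls.txt contains one URL per line, extracted from the
# "images" column beforehand; tune sizes and worker counts to your setup.
download(
    url_list="image_urls.txt",
    output_folder="obelisc_images",
    output_format="files",
    image_size=256,
    processes_count=16,
    thread_count=64,
)
```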
|
|
|
In `metadata`, there is a string that can be transformed into a list with `json.loads(example["metadata"])`. This list has the same size as the lists of images and texts: it contains a dictionary at each index where there is an image, and `None` at each index where there is a text. Each dictionary contains the metadata of the corresponding image (original source document, unformatted source, alt-text if present, ...).
|
|
|
Finally, in `general_metadata`, there is a string that can be transformed into a dictionary containing the URL of the document and information about its location in the Common Crawl data.
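Putting the two fields together, a short sketch of decoding them for one example (assuming `example` is a row of the `train` split):

```
import json

# Per-position image metadata: same length as `images`/`texts`, with a
# dict at image positions and None at text positions.
metadata = json.loads(example["metadata"])
for position, meta in enumerate(metadata):
    if meta is not None:
        print(position, meta)  # inspect the available keys for this image

# Document-level metadata: the page URL and its Common Crawl location.
general_metadata = json.loads(example["general_metadata"])
print(general_metadata)
```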
|
|
|
## Data Splits |
|
|
|
There is only one split, `train`, that contains 141,047,697 examples. |
|
|
|
## Size |
|
|
|
`OBELISC` with images replaced by their URLs weighs 666.6 GB in Arrow format and 377 GB in the uploaded `parquet` format.
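Given this size, streaming can be more practical than a full download; a minimal sketch using the streaming mode of `datasets`:

```
from datasets import load_dataset

# Iterate over examples on the fly, without downloading all files first.
ds = load_dataset("HuggingFaceM4/OBELISC", split="train", streaming=True)
first_example = next(iter(ds))
```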
|
|
|
## Configs |
|
|
|
The default config, loaded when no config name is specified, corresponds to the original version of the dataset:

```
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELISC")
```
|
|
|
When building the dataset, we sent every image URL to the Spawning AI API and removed all the opted-out images. |
|
However, we noticed afterward that some images might not be opted out themselves, while the whole web page containing them is.

This is why we created another config of the dataset that additionally filters out the opted-out web pages. It can be loaded with:
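```
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELISC", "opt_out_docs_removed")
```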
|
|
|
### Visualization of OBELISC documents |
|
|
|
https://huggingface.co/spaces/HuggingFaceM4/obelisc_visualization |
|
|
|
### Research paper |
|
|
|
https://arxiv.org/abs/2306.16527 |
|
|
|
### GitHub repository |
|
|
|
https://github.com/huggingface/OBELISC |
|
|
|
## Terms of Use |
|
|
|
By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model. |
|
|
|
### Licensing Information |
|
|
|
License: CC-BY-4.0.
|
|
|
### Citation Information |
|
|
|
If you are using this dataset, please cite:
|
``` |
|
@misc{laurencon2023obelisc,
  title={OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
  author={Hugo Lauren{\c{c}}on and Lucile Saulnier and L{\'e}o Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
  year={2023},
  eprint={2306.16527},
  archivePrefix={arXiv}
}
|
``` |