HugoLaurencon committed · Commit 1f41654 · Parent(s): a6a31f1

rename OBELICS
README.md CHANGED
````diff
@@ -4,7 +4,7 @@ language:
 license: cc-by-4.0
 size_categories:
 - 100M<n<1B
-pretty_name:
+pretty_name: OBELICS
 configs:
 - config_name: default
   data_files:
@@ -48,15 +48,15 @@ dataset_info:
 download_size: 266501092920
 dataset_size: 684638314215
 ---
-# Dataset Card for
+# Dataset Card for OBELICS
 
 ## Dataset Description
 
-- **Repository: https://github.com/huggingface/
+- **Repository: https://github.com/huggingface/OBELICS**
-- **Paper:
+- **Paper: OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents**
 - **Point of Contact: hugo@huggingface.co**
 
-`
+`OBELICS` is an open, massive and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).
 
 Interleaved image-text web documents are a succession of text paragraphs interleaved with images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.
 
@@ -97,13 +97,13 @@ The images are replaced by their URLs, and the users need to download the images
 
 There is only one split, `train`, that contains 141,047,697 documents.
 
-`
+`OBELICS` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in the uploaded `parquet` format.
 
 ## Opted-out content
 
 To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
 
-However, due to an error on our side, we did not remove entire documents (i.e. URLs) which are opted out of AI model training. As of July 12, 2023, these documents represent 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/
+However, due to an error on our side, we did not remove entire documents (i.e. URLs) which are opted out of AI model training. As of July 12, 2023, these documents represent 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELICS", "opt_out_docs_removed_2023_07_12")`.
 
 We recommend that users of OBELICS regularly check every document against the API.
 
@@ -123,8 +123,8 @@ License CC-BY-4.0.
 
 If you are using this dataset, please cite
 ```
-@misc{
+@misc{laurencon2023obelics,
-title={
+title={OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
 author={Hugo Laurençon and Lucile Saulnier and Léo Tronchon and Stas Bekman and Amanpreet Singh and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti},
 year={2023},
 eprint={2306.16527},
````
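As a rough back-of-the-envelope check (assuming the 4.25% opted-out figure applies to the full 141,047,697-document `train` split, both numbers taken from the card above), the documents affected by the opt-out error number roughly six million:

```python
# Estimate of documents opted out of AI training but not removed
# from the default config (figures from the dataset card above).
total_docs = 141_047_697     # documents in the `train` split
opted_out_fraction = 0.0425  # 4.25% as of July 12, 2023

opted_out_docs = round(total_docs * opted_out_fraction)
print(f"{opted_out_docs:,}")  # prints 5,994,527
```

This scale is why the `opt_out_docs_removed_2023_07_12` config is worth preferring whenever document-level opt-outs matter.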