VictorSanh committed commit f616da2 (parent: a73938c): typos

README.md CHANGED
- **Repository:** https://github.com/huggingface/OBELICS
- **Point of Contact:** hugo@huggingface.co

`OBELICS` is an open, massive, and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens, and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).

Interleaved image-text web documents are successions of text paragraphs interleaved with images, such as web pages that contain images. Models trained on these web documents outperform vision-and-language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.

## Data Fields

An example of a sample looks as follows:

```
# The example has been cropped
...
}
```

Each sample is composed of the same 4 fields: `images`, `texts`, `metadata`, and `general_metadata`. `images` and `texts` are two lists of the same size, where, at each index, exactly one of the two elements is not `None`. For example, for the interleaved web document `<image_1>text<image_2>`, we would find `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.
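
As a minimal sketch, the two parallel lists can be merged back into one ordered sequence; the field names come from the card above, while the helper, its name, and the toy sample are purely illustrative:

```python
def interleave(sample):
    """Merge the parallel `images` and `texts` lists back into one
    ordered sequence of (kind, value) pairs."""
    sequence = []
    for image, text in zip(sample["images"], sample["texts"]):
        # At each index, exactly one of the two entries is not None.
        if image is not None:
            sequence.append(("image", image))
        else:
            sequence.append(("text", text))
    return sequence

# Toy sample mirroring the <image_1>text<image_2> example above.
sample = {
    "images": ["https://example.com/image_1.jpg", None, "https://example.com/image_2.jpg"],
    "texts": [None, "text", None],
}
print(interleave(sample))
```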

The images are replaced by their URLs, and users need to download them, for instance with the library [img2dataset](https://github.com/rom1504/img2dataset).
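
For instance, the image URLs of a few samples can be collected into a plain-text list, one URL per line, which `img2dataset` can then consume; the helper below is a hypothetical sketch over toy samples, not part of the dataset tooling:

```python
def collect_image_urls(samples):
    """Gather every non-None image URL from an iterable of OBELICS samples."""
    urls = []
    for sample in samples:
        urls.extend(url for url in sample["images"] if url is not None)
    return urls

# Toy samples; real ones come from loading the dataset.
samples = [
    {"images": ["https://example.com/a.jpg", None], "texts": [None, "text"]},
    {"images": [None, "https://example.com/b.jpg"], "texts": ["text", None]},
]

# Write one URL per line for a downstream downloader such as img2dataset.
with open("urls.txt", "w") as f:
    f.write("\n".join(collect_image_urls(samples)))
```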

`metadata` is the string representation of a list containing information about each of the images. It has the same length as `texts` and `images`, and logs, for each image, relevant information such as the original source document, the unformatted source, and the alternative text if present.
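
Since `metadata` arrives as the string representation of a list, it must be parsed before use. Assuming JSON serialization (`ast.literal_eval` would apply instead if the strings are Python reprs), and with a purely illustrative metadata string:

```python
import json

# Hypothetical metadata string; real entries carry per-image information
# such as the source document and alt text.
raw_metadata = '[null, {"alt_text": "a cat"}, null]'

parsed = json.loads(raw_metadata)
# The parsed list has the same length as `images` and `texts`,
# with non-null entries only at image positions.
print(parsed[1]["alt_text"])
```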

`general_metadata` is the string representation of a dictionary containing the URL of the document and information regarding the extraction from Common Crawl snapshots.

There is only one split, `train`, that contains 141,047,697 documents.

`OBELICS` with images replaced by their URLs weighs 666.6 GB (😈) in Arrow format and 377 GB in the uploaded `parquet` format.

## Opted-out content

To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.

However, due to an error on our side, we did not remove entire documents (i.e., URLs) that are opted out of AI model training. As of July 12, 2023, these represent 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web-document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELICS", "opt_out_docs_removed_2023_07_12")`.

We recommend that users of OBELICS regularly check every document against the API.

## Content warnings

Despite our filtering efforts, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, while navigating the interactive map, you might find the cluster named "Sex", which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters contain advertising for sex workers or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.

## Terms of Use