- Formats: parquet
- Sub-tasks: language-modeling
- Languages: English
- Size: 10M - 100M examples
- Tags: text-search
Dataset Card for "wiki_snippets"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://dumps.wikimedia.org
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 35001.08 MB
- Total amount of disk used: 35001.08 MB
Dataset Summary
A version of Wikipedia split into plain-text snippets for dense semantic indexing.
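A minimal sketch of loading one configuration by name with the Hugging Face datasets library (depending on your datasets version, trust_remote_code=True may also be required):

```python
# Minimal sketch: load one configuration of wiki_snippets with the
# Hugging Face `datasets` library. The snippets are generated locally
# from the source Wikipedia dataset, so the first call can take a while.
from datasets import load_dataset

snippets = load_dataset("wiki_snippets", "wiki40b_en_100_0", split="train")
print(snippets[0]["passage_text"])
```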
Supported Tasks and Leaderboards
Languages
Dataset Structure
We show detailed information for the two configurations of the dataset: wiki40b_en_100_0 and wikipedia_en_100_0.
Data Instances
wiki40b_en_100_0
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 12268.10 MB
- Total amount of disk used: 12268.10 MB
An example of 'train' looks as follows.
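A sketch of a single record's layout; the field values below are placeholders for illustration, not actual dataset content:

```python
# Illustrative record only; every value here is a placeholder.
example = {
    "_id": "wiki40b_en_100_0_0",      # snippet identifier (placeholder format)
    "datasets_id": 0,
    "wiki_id": "Q42",                 # ID of the source article (placeholder)
    "start_paragraph": 2,
    "start_character": 0,
    "end_paragraph": 6,
    "end_character": 500,
    "article_title": "Example Article",
    "section_title": "Example Section",
    "passage_text": "Plain text of the snippet ...",
}
```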
wikipedia_en_100_0
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 22732.97 MB
- Total amount of disk used: 22732.97 MB
Examples in 'train' have the same structure as for wiki40b_en_100_0.
Data Fields
The data fields are the same among all splits.
wiki40b_en_100_0
- _id: a string feature.
- datasets_id: an int32 feature.
- wiki_id: a string feature.
- start_paragraph: an int32 feature.
- start_character: an int32 feature.
- end_paragraph: an int32 feature.
- end_character: an int32 feature.
- article_title: a string feature.
- section_title: a string feature.
- passage_text: a string feature.
wikipedia_en_100_0
- _id: a string feature.
- datasets_id: an int32 feature.
- wiki_id: a string feature.
- start_paragraph: an int32 feature.
- start_character: an int32 feature.
- end_paragraph: an int32 feature.
- end_character: an int32 feature.
- article_title: a string feature.
- section_title: a string feature.
- passage_text: a string feature.
Data Splits
| name | train |
|---|---|
| wiki40b_en_100_0 | 17553713 |
| wikipedia_en_100_0 | 30820408 |
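Since the snippets are intended for dense semantic indexing, here is a minimal sketch of building and querying an index over passage_text, assuming the sentence-transformers and faiss-cpu packages are installed and using an arbitrarily chosen encoder model:

```python
# Sketch of dense semantic indexing over passage_text.
# Assumes sentence-transformers and faiss-cpu are installed;
# the encoder model is an arbitrary choice, not prescribed by this dataset.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
snippets = load_dataset("wiki_snippets", "wiki40b_en_100_0", split="train")

# Index only a small slice here; the full split has ~17.5M snippets.
subset = snippets.select(range(10_000))
subset = subset.map(
    lambda batch: {"embedding": encoder.encode(batch["passage_text"])},
    batched=True,
    batch_size=64,
)
subset.add_faiss_index(column="embedding")

query = encoder.encode("history of the printing press")
scores, results = subset.get_nearest_examples("embedding", query, k=5)
print(results["article_title"])
```

Indexing a full split in memory is resource-intensive at this scale; sharding the data or using an on-disk FAISS index is a common alternative.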
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@ONLINE {wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
Contributions
Thanks to @thomwolf, @lhoestq, @mariamabarham, @yjernite for adding this dataset.