system (HF staff) committed
Commit: 3a4a873
1 parent: e042eb0

Update files from the datasets library (from 1.16.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.16.0
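Once this commit is live, the updated card and loading script are what the `datasets` library resolves when the dataset is requested. A minimal sketch of loading one of the WikiText configs (assuming `datasets>=1.16.0` is installed and the Hub is reachable):

```python
from datasets import load_dataset

# Load the word-level WikiText-2 config; the name matches one of the
# BUILDER_CONFIGS defined in wikitext.py below.
wikitext = load_dataset("wikitext", "wikitext-2-v1")

print(wikitext)                       # DatasetDict with train/validation/test splits
print(wikitext["train"][10]["text"])  # each example exposes a single "text" field
```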

Files changed (3):
  1. README.md +33 -7
  2. dataset_infos.json +1 -1
  3. wikitext.py +25 -21
README.md CHANGED
@@ -1,7 +1,25 @@
  ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - crowdsourced
  languages:
  - en
+ licenses:
+ - cc-by-sa-3-0
+ - gfdl-1-3-or-later
+ multilinguality:
+ - monolingual
  paperswithcode_id: wikitext-2
+ pretty_name: WikiText
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
  ---
  
  # Dataset Card for "wikitext"
@@ -34,8 +52,8 @@ paperswithcode_id: wikitext-2
  
  - **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
+ - **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com)
  - **Size of downloaded dataset files:** 373.28 MB
  - **Size of the generated dataset:** 1072.25 MB
  - **Total amount of disk used:** 1445.53 MB
@@ -45,6 +63,11 @@ paperswithcode_id: wikitext-2
  The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
  Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
  
+ Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
+ 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
+ and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
+ that can take advantage of long term dependencies.
+ 
  ### Supported Tasks and Leaderboards
  
  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -196,16 +219,19 @@ The data fields are the same among all splits.
  
  ### Licensing Information
  
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
  
  ### Citation Information
  
  ```
- @InProceedings{wikitext,
-     author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}
-     year=2016
+ @misc{merity2016pointer,
+       title={Pointer Sentinel Mixture Models},
+       author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
+       year={2016},
+       eprint={1609.07843},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
  }
- 
  ```
 
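The tags added at the top of the card form YAML front matter, which the Hub parses for search and filtering. A small sketch of reading it back from a local copy of the card (assumes PyYAML is installed; the `README.md` path is illustrative):

```python
import yaml  # PyYAML, assumed available

# The front matter sits between the first two "---" fences of the card.
with open("README.md", encoding="utf-8") as f:
    _, front_matter, _ = f.read().split("---", 2)

metadata = yaml.safe_load(front_matter)
print(metadata["licenses"])         # ['cc-by-sa-3-0', 'gfdl-1-3-or-later']
print(metadata["task_categories"])  # ['sequence-modeling']
```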
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"wikitext-103-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified \n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.\n", "citation": "@InProceedings{wikitext,\n author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}\n year=2016\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wikitext", "config_name": "wikitext-103-raw-v1", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1306182, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 546951363, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1160232, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip": {"num_bytes": 191984949, "checksum": "91c00ae287f0d699e18605c84afc9e45c192bc6b7797ff8837e5474655a33794"}}, "download_size": 191984949, "dataset_size": 549417777, "size_in_bytes": 741402726}, "wikitext-2-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified \n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.\n", "citation": "@InProceedings{wikitext,\n author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}\n year=2016\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wikitext", "config_name": "wikitext-2-raw-v1", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1306182, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 11070901, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1160232, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip": {"num_bytes": 4721645, "checksum": "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"}}, "download_size": 4721645, "dataset_size": 13537315, "size_in_bytes": 18258960}, "wikitext-103-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified \n Good and Featured articles on Wikipedia. 
The dataset is available under the Creative Commons Attribution-ShareAlike License.\n", "citation": "@InProceedings{wikitext,\n author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}\n year=2016\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wikitext", "config_name": "wikitext-103-v1", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1296669, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 545592329, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1155695, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip": {"num_bytes": 190229076, "checksum": "242ba0f20b329cfdf1ccc61e9e9e5b59becf189db7f7a81cd2a0e2fc31539590"}}, "download_size": 190229076, "dataset_size": 548044693, "size_in_bytes": 738273769}, "wikitext-2-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified \n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.\n", "citation": "@InProceedings{wikitext,\n author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}\n year=2016\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wikitext", "config_name": "wikitext-2-v1", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1272041, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 10927302, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1135067, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip": {"num_bytes": 4475746, "checksum": "92675f1d63015c1c8b51f1656a52d5bdbc33aafa60cc47a218a66e7ee817488c"}}, "download_size": 4475746, "dataset_size": 13334410, "size_in_bytes": 17810156}}
 
+ {"wikitext-103-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-103-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1295579, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 545142639, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1154755, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip": {"num_bytes": 190229076, "checksum": "242ba0f20b329cfdf1ccc61e9e9e5b59becf189db7f7a81cd2a0e2fc31539590"}}, "download_size": 190229076, "post_processing_size": null, "dataset_size": 547592973, "size_in_bytes": 737822049}, "wikitext-2-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-2-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1270951, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 10918134, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1134127, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip": {"num_bytes": 4475746, "checksum": "92675f1d63015c1c8b51f1656a52d5bdbc33aafa60cc47a218a66e7ee817488c"}}, "download_size": 4475746, "post_processing_size": null, "dataset_size": 13323212, "size_in_bytes": 17798958}, "wikitext-103-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. 
The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-103-raw-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1305092, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 546501673, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1159292, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip": {"num_bytes": 191984949, "checksum": "91c00ae287f0d699e18605c84afc9e45c192bc6b7797ff8837e5474655a33794"}}, "download_size": 191984949, "post_processing_size": null, "dataset_size": 548966057, "size_in_bytes": 740951006}, "wikitext-2-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-2-raw-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1305092, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 11061733, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1159292, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip": {"num_bytes": 4721645, "checksum": "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"}}, "download_size": 4721645, "post_processing_size": null, "dataset_size": 13526117, "size_in_bytes": 18247762}}
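Each config entry in `dataset_infos.json` records the download URL, byte size, and SHA-256 checksum of the source archive, which the library uses to verify downloads. A standalone sketch of the same check (expected values copied from the JSON above; the local path is illustrative):

```python
import hashlib
import os

# Expected values for wikitext-2-raw-v1.zip, taken from dataset_infos.json
EXPECTED_SHA256 = "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"
EXPECTED_SIZE = 4721645

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "wikitext-2-raw-v1.zip"  # assumed local download location
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256_of(path) == EXPECTED_SHA256, "checksum mismatch"
```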
wikitext.py CHANGED
@@ -6,20 +6,24 @@ import os
  import datasets
  
  
- # TODO(wikitext): BibTeX citation
  _CITATION = """\
- @InProceedings{wikitext,
-     author={Stephen, Merity and Caiming ,Xiong and James, Bradbury and Richard Socher}
-     year={2016}
+ @misc{merity2016pointer,
+       title={Pointer Sentinel Mixture Models},
+       author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
+       year={2016},
+       eprint={1609.07843},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
  }
  """
  
- # TODO(wikitext):
  _DESCRIPTION = """\
  The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
- Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
+ Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
+ License.
  """
- _URL = "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/"
+ _HOMEPAGE = "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/"
+ _LICENSE = "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)"
  _DATA_URL = "https://s3.amazonaws.com/research.metamind.io/wikitext"
@@ -49,25 +53,25 @@ class Wikitext(datasets.GeneratorBasedBuilder):
      VERSION = datasets.Version("0.1.0")
      BUILDER_CONFIGS = [
          WikitextConfig(
-             name="wikitext-103-raw-v1",
-             data_url=_DATA_URL + "/" + "wikitext-103-raw-v1.zip",
-             description="word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
+             name="wikitext-103-v1",
+             data_url=_DATA_URL + "/" + "wikitext-103-v1.zip",
+             description="Word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
          ),
          WikitextConfig(
-             name="wikitext-2-raw-v1",
-             data_url=_DATA_URL + "/" + "wikitext-2-raw-v1.zip",
-             description="word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
+             name="wikitext-2-v1",
+             data_url=_DATA_URL + "/" + "wikitext-2-v1.zip",
+             description="Word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
          ),
          WikitextConfig(
-             name="wikitext-103-v1",
-             data_url=_DATA_URL + "/" + "wikitext-103-v1.zip",
-             description="raw level dataset. The raw tokens before the addition of <unk> tokens. "
+             name="wikitext-103-raw-v1",
+             data_url=_DATA_URL + "/" + "wikitext-103-raw-v1.zip",
+             description="Raw level dataset: the raw tokens before the addition of <unk> tokens. "
              "They should only be used for character level work or for creating newly derived datasets.",
          ),
          WikitextConfig(
-             name="wikitext-2-v1",
-             data_url=_DATA_URL + "/" + "wikitext-2-v1.zip",
-             description="raw level dataset. The raw tokens before the addition of <unk> tokens. "
+             name="wikitext-2-raw-v1",
+             data_url=_DATA_URL + "/" + "wikitext-2-raw-v1.zip",
+             description="Raw level dataset: the raw tokens before the addition of <unk> tokens. "
              "They should only be used for character level work or for creating newly derived datasets.",
          ),
      ]
@@ -88,8 +92,8 @@ class Wikitext(datasets.GeneratorBasedBuilder):
              # specify them here. They'll be used if as_supervised=True in
              # builder.as_dataset.
              supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_URL,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
              citation=_CITATION,
          )
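The reordering above only changes the order in which the four configs are registered; their names are unchanged, so existing `load_dataset` calls keep working. A quick sketch of enumerating the configs and reading the newly exposed license (assuming a recent version of the `datasets` library that provides `get_dataset_config_names`):

```python
from datasets import get_dataset_config_names, load_dataset

# The config names correspond one-to-one to BUILDER_CONFIGS above;
# the order follows their definition order in wikitext.py.
print(get_dataset_config_names("wikitext"))
# ['wikitext-103-v1', 'wikitext-2-v1', 'wikitext-103-raw-v1', 'wikitext-2-raw-v1']

# The "raw" configs keep the tokens before <unk> substitution and are meant
# for character-level work, per the config descriptions.
raw_val = load_dataset("wikitext", "wikitext-2-raw-v1", split="validation")
print(raw_val.info.license)  # surfaced from the new _LICENSE constant via DatasetInfo
```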