system (HF staff) committed
Commit 28b05c7 · 1 Parent(s): 572d0fd

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md (+21 −21)
  2. wiki_snippets.py (+4 −2)
README.md CHANGED
@@ -27,7 +27,7 @@
   - [Citation Information](#citation-information)
   - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,23 +37,23 @@
 - **Size of the generated dataset:** 35001.08 MB
 - **Total amount of disk used:** 35001.08 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 Wikipedia version split into plain text snippets for dense semantic indexing.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### wiki40b_en_100_0
@@ -77,7 +77,7 @@ An example of 'train' looks as follows.
 
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
@@ -105,56 +105,56 @@ The data fields are the same among all splits.
 - `section_title`: a `string` feature.
 - `passage_text`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name | train |
 |------------------|-------:|
 |wiki40b_en_100_0 |17553713|
 |wikipedia_en_100_0|30820408|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @ONLINE {wikidump,
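For orientation, the card above lists two configurations (`wiki40b_en_100_0`, `wikipedia_en_100_0`) and the `section_title` / `passage_text` fields. A minimal sketch of loading one of them with the `datasets` library, assuming the script resolves under the `wiki_snippets` name and keeping in mind the ~35 GB footprint reported in the card:

```python
from datasets import load_dataset

# Illustrative quick check of one configuration named in the card above.
# The generated dataset is ~35 GB, so expect a long download on first use.
snippets = load_dataset("wiki_snippets", "wiki40b_en_100_0", split="train")

first = snippets[0]
print(first["section_title"])       # heading of the section the snippet came from
print(first["passage_text"][:200])  # first 200 characters of the plain-text snippet
```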
wiki_snippets.py CHANGED
@@ -1,10 +1,12 @@
 import json
-import logging
 import math
 
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """\
 @ONLINE {wikidump,
     author = {Wikimedia Foundation},
@@ -194,7 +196,7 @@ class WikiSnippets(datasets.GeneratorBasedBuilder):
         ]
 
     def _generate_examples(self, wikipedia):
-        logging.info(
+        logger.info(
             "generating examples from = {} {}".format(self.config.wikipedia_name, self.config.wikipedia_version_name)
         )
         for split in wikipedia:
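The script change swaps the stdlib `logging` module for the library's own logger, so the builder's messages follow whatever verbosity is configured through `datasets` rather than the root logger. A minimal sketch of the pattern; the verbosity call and the argument values are illustrative:

```python
import datasets

# Module-level logger, mirroring the updated script: it attaches to the
# datasets library's logging tree instead of the stdlib root logger.
logger = datasets.logging.get_logger(__name__)

# Callers opt in to info-level output through the library, e.g.:
datasets.logging.set_verbosity_info()

# Illustrative values standing in for the config's wikipedia_name and
# wikipedia_version_name attributes used in _generate_examples.
logger.info("generating examples from = %s %s", "wikipedia", "20200501.en")
```

With this pattern, the `logger.info` call in `_generate_examples` inherits the user's chosen verbosity instead of depending on how the root logger happens to be configured.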