system (HF staff) committed on
Commit
b534c8d
1 Parent(s): 62d5c81

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2)
  1. README.md +21 -21
  2. wiki40b.py +4 -3
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://research.google/pubs/pub49029/](https://research.google/pubs/pub49029/)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +37,7 @@
 - **Size of the generated dataset:** 9988.05 MB
 - **Total amount of disk used:** 9988.05 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 Clean-up text for 40+ Wikipedia languages editions of pages
 correspond to entities. The datasets have train/dev/test splits per language.
@@ -46,19 +46,19 @@ redirect pages, deleted pages, and non-entity pages. Each example contains the
 wikidata id of the entity, and the full Wikipedia article after page processing
 that removes non-content sections and structured objects.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### en
 
@@ -71,7 +71,7 @@ An example of 'train' looks as follows.
 
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -80,55 +80,55 @@ The data fields are the same among all splits.
 - `text`: a `string` feature.
 - `version_id`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 |name| train |validation| test |
 |----|------:|---------:|-----:|
 |en |2926536| 163597|162274|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 
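The `en` row in the data-splits table above amounts to roughly a 90/5/5 train/validation/test partition. A quick sanity check in plain Python (the split counts are copied verbatim from the table; nothing else is assumed):

```python
# Split sizes for the "en" configuration, copied from the data-splits table.
splits = {"train": 2926536, "validation": 163597, "test": 162274}

total = sum(splits.values())

# Each split's share of the full en configuration (~0.90 / 0.05 / 0.05).
shares = {name: count / total for name, count in splits.items()}
```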
wiki40b.py CHANGED
@@ -17,11 +17,12 @@
 
 from __future__ import absolute_import, division, print_function
 
-import logging
-
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """
 """
 
@@ -160,7 +161,7 @@ class Wiki40b(datasets.BeamBasedBuilder):
 import apache_beam as beam
 import tensorflow as tf
 
-logging.info("generating examples from = %s", filepaths)
+logger.info("generating examples from = %s", filepaths)
 
 def _extract_content(example):
     """Extracts content from a TFExample."""
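The second hunk above swaps direct calls to the stdlib `logging` module for a module-level logger obtained from `datasets.logging.get_logger(__name__)`. That helper returns an ordinary `logging.Logger` namespaced under the module's dotted path, so the mechanics can be sketched with the standard library alone (`log_generation` is a hypothetical name used here for illustration):

```python
import logging

# A module-level, namespaced logger: the stdlib analogue of the
# datasets.logging.get_logger(__name__) call added in the commit above.
logger = logging.getLogger(__name__)

def log_generation(filepaths):
    # %-style lazy formatting, as in the diff: the message string is only
    # interpolated if a handler actually emits the record.
    logger.info("generating examples from = %s", filepaths)
```

The benefit of the named-logger pattern is central control: the library can raise or lower verbosity for every logger under its namespace in one place (in recent `datasets` versions, e.g. via `datasets.logging.set_verbosity_info()`), instead of each module writing to the root logger.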