system HF staff committed on
Commit a73b2ed
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +199 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. wiki_bio.py +177 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - explanation-generation
+ - table-to-text
+ ---
+
+ # Dataset Card for WikiBio
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** https://github.com/DavidGrangier/wikipedia-biography-dataset
+ - **Paper:** https://arxiv.org/pdf/1603.07771.pdf
+ - **GoogleDrive:** https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil
+
+ ### Dataset Summary
+
+ This dataset contains 728,321 biographies extracted from Wikipedia, each consisting of the first paragraph of the biography and its tabular infobox.
+ ### Supported Tasks and Leaderboards
+
+ The main purpose of this dataset is to support the development of text generation models.
+
+ ### Languages
+
+ English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ The structure of a single sample is the following:
+ ```json
+ {
+   "input_text": {
+     "context": "pope michael iii of alexandria\n",
+     "table": {
+       "column_header": [
+         "type",
+         "ended",
+         "death_date",
+         "title",
+         "enthroned",
+         "name",
+         "buried",
+         "religion",
+         "predecessor",
+         "nationality",
+         "article_title",
+         "feast_day",
+         "birth_place",
+         "residence",
+         "successor"
+       ],
+       "content": [
+         "pope",
+         "16 march 907",
+         "16 march 907",
+         "56th of st. mark pope of alexandria & patriarch of the see",
+         "25 april 880",
+         "michael iii of alexandria",
+         "monastery of saint macarius the great",
+         "coptic orthodox christian",
+         "shenouda i",
+         "egyptian",
+         "pope michael iii of alexandria\n",
+         "16 -rrb- march -lrb- 20 baramhat in the coptic calendar",
+         "egypt",
+         "saint mark 's church",
+         "gabriel i"
+       ],
+       "row_number": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
+     }
+   },
+   "target_text": "pope michael iii of alexandria -lrb- also known as khail iii -rrb- was the coptic pope of alexandria and patriarch of the see of st. mark -lrb- 880 -- 907 -rrb- .\nin 882 , the governor of egypt , ahmad ibn tulun , forced khail to pay heavy contributions , forcing him to sell a church and some attached properties to the local jewish community .\nthis building was at one time believed to have later become the site of the cairo geniza .\n"
+ }
+ ```
+ where the `"table"` field stores all the information from the Wikipedia infobox: the infobox headers are kept in `"column_header"` and the corresponding values in `"content"`.
+ ### Data Splits
+
+ - Train: 582,659 samples.
+ - Test: 72,831 samples.
+ - Validation: 72,831 samples.
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+ This dataset was introduced in the paper <em>Neural Text Generation from Structured Data with Application to the Biography Domain</em> [(arXiv link)](https://arxiv.org/pdf/1603.07771.pdf) and is stored both in [this repository](https://github.com/DavidGrangier/wikipedia-biography-dataset) (owned by DavidGrangier) and on [Google Drive](https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil) (zipped and maintained by the TensorFlow team).
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This dataset is distributed under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) license.
+ ### Citation Information
+
+ To cite the original paper, use the following BibTeX entry:
+
+ ```
+ @article{DBLP:journals/corr/LebretGA16,
+   author    = {R{\'{e}}mi Lebret and
+                David Grangier and
+                Michael Auli},
+   title     = {Generating Text from Structured Data with Application to the Biography
+                Domain},
+   journal   = {CoRR},
+   volume    = {abs/1603.07771},
+   year      = {2016},
+   url       = {http://arxiv.org/abs/1603.07771},
+   archivePrefix = {arXiv},
+   eprint    = {1603.07771},
+   timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/LebretGA16.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
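
For quick orientation, here is a minimal usage sketch for the card above, assuming the `datasets` library from this release (1.2.0 or later) is installed. It loads the dataset, checks the split sizes listed under "Data Splits", and rebuilds the infobox of one sample from the fields described under "Data Fields"; note that the validation split is exposed as `val`, following the loading script below.

```python
# A minimal sketch: load wiki_bio with the `datasets` library and inspect one example.
from datasets import load_dataset

dataset = load_dataset("wiki_bio")

# Split sizes as listed under "Data Splits"; the validation split is named "val".
print({split: dataset[split].num_rows for split in ("train", "test", "val")})

sample = dataset["train"][0]

# "table" is a Sequence feature, so it comes back as a dict of aligned lists;
# zipping headers with contents rebuilds the infobox as field -> value pairs.
table = sample["input_text"]["table"]
infobox = dict(zip(table["column_header"], table["content"]))

print(sample["input_text"]["context"])  # article title, e.g. "pope michael iii of alexandria\n"
print(infobox)                          # infobox fields mapped to their tokenized values
print(sample["target_text"])            # first paragraph of the biography
```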
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset gathers 728,321 biographies from wikipedia. It aims at evaluating text generation\nalgorithms. For each article, we provide the first paragraph and the infobox (both tokenized).\nFor each article, we extracted the first paragraph (text), the infobox (structured data). Each\ninfobox is encoded as a list of (field name, field value) pairs. We used Stanford CoreNLP\n(http://stanfordnlp.github.io/CoreNLP/) to preprocess the data, i.e. we broke the text into\nsentences and tokenized both the text and the field values. The dataset was randomly split in\nthree subsets train (80%), valid (10%), test (10%).\n", "citation": "@article{DBLP:journals/corr/LebretGA16,\n author = {R{'{e}}mi Lebret and\n David Grangier and\n Michael Auli},\n title = {Generating Text from Structured Data with Application to the Biography\n Domain},\n journal = {CoRR},\n volume = {abs/1603.07771},\n year = {2016},\n url = {http://arxiv.org/abs/1603.07771},\n archivePrefix = {arXiv},\n eprint = {1603.07771},\n timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},\n biburl = {https://dblp.org/rec/journals/corr/LebretGA16.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/DavidGrangier/wikipedia-biography-dataset", "license": "CC BY-SA 3.0", "features": {"input_text": {"table": {"feature": {"column_header": {"dtype": "string", "id": null, "_type": "Value"}, "row_number": {"dtype": "int16", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "context": {"dtype": "string", "id": null, "_type": "Value"}}, "target_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "input_text", "output": "target_text"}, "builder_name": "wiki_bio", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 618362475, "num_examples": 582659, "dataset_name": "wiki_bio"}, "test": {"name": "test", "num_bytes": 77151324, "num_examples": 72831, "dataset_name": "wiki_bio"}, "val": {"name": "val", "num_bytes": 77221530, "num_examples": 72831, "dataset_name": "wiki_bio"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil": {"num_bytes": 333998704, "checksum": "0de0fef4cc6c9182138939134b81b6ac33ffbc989b6d23a2d9ef1e50c49b8032"}}, "download_size": 333998704, "post_processing_size": null, "dataset_size": 772735329, "size_in_bytes": 1106734033}}
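
The `supervised_keys` entry above pairs `input_text` with `target_text`. One common way to turn that pair into training data for a generic sequence-to-sequence model is to linearize the table into a flat string; the sketch below only illustrates that idea (the `" ; "` separator and field layout are arbitrary choices for this example, not the method of the original paper).

```python
# Illustrative only: build a (source, target) text pair from one wiki_bio example
# for a generic seq2seq setup. The separator and layout are arbitrary choices.
def linearize_example(example):
    table = example["input_text"]["table"]
    fields = " ; ".join(
        f"{header} : {content}"
        for header, content in zip(table["column_header"], table["content"])
    )
    source = example["input_text"]["context"].strip() + " ; " + fields
    target = example["target_text"].strip()
    return source, target

# e.g. source, target = linearize_example(dataset["train"][0]),
# assuming `dataset` was loaded as in the earlier sketch.
```

Applied to the example shown in the README, this yields a source string such as `pope michael iii of alexandria ; type : pope ; ended : 16 march 907 ; ...`, paired with the first paragraph as the target.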
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d32518d755ccc45802a799dec27896c33fca9fbdffcf3952e94ebf29ca1badf
+ size 6579
wiki_bio.py ADDED
@@ -0,0 +1,177 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """\
+ This dataset gathers 728,321 biographies from Wikipedia. It aims at evaluating text generation
+ algorithms. For each article, we provide the first paragraph and the infobox.
+ """
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{DBLP:journals/corr/LebretGA16,
+   author    = {R{\'{e}}mi Lebret and
+                David Grangier and
+                Michael Auli},
+   title     = {Generating Text from Structured Data with Application to the Biography
+                Domain},
+   journal   = {CoRR},
+   volume    = {abs/1603.07771},
+   year      = {2016},
+   url       = {http://arxiv.org/abs/1603.07771},
+   archivePrefix = {arXiv},
+   eprint    = {1603.07771},
+   timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/LebretGA16.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset gathers 728,321 biographies from wikipedia. It aims at evaluating text generation
+ algorithms. For each article, we provide the first paragraph and the infobox (both tokenized).
+ For each article, we extracted the first paragraph (text), the infobox (structured data). Each
+ infobox is encoded as a list of (field name, field value) pairs. We used Stanford CoreNLP
+ (http://stanfordnlp.github.io/CoreNLP/) to preprocess the data, i.e. we broke the text into
+ sentences and tokenized both the text and the field values. The dataset was randomly split in
+ three subsets train (80%), valid (10%), test (10%).
+ """
+
+ _HOMEPAGE = "https://github.com/DavidGrangier/wikipedia-biography-dataset"
+
+ _LICENSE = "CC BY-SA 3.0"
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URL = "https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil"
+
+
+ def _get_table(infobox_line):
+     """Converts the infobox into a one row table."""
+     cells = infobox_line.split("\t")
+     # remove empty cells
+     cells = list(filter(lambda x: x.find("<none>") == -1, cells))
+     columns = set([cell[0 : cell.split(":")[0].rfind("_")] for cell in cells])
+     table = {col: dict() for col in columns}
+     for cell in cells:
+         delimiter_position_value = cell.find(":")
+         column_index = cell[0:delimiter_position_value]
+         value = cell[delimiter_position_value + 1 :]
+         delimiter_column_index = column_index.rfind("_")
+         column = column_index[0:delimiter_column_index]
+         index = column_index[delimiter_column_index + 1 :]
+         table[column][index] = value
+     infobox_line_as_table = []
+     for column in table.keys():
+         row_value = " ".join([table[column][index] for index in sorted(table[column].keys())])
+         infobox_line_as_table.append(
+             {
+                 "column_header": column,
+                 "row_number": 1,
+                 "content": row_value,
+             }
+         )
+     return infobox_line_as_table
+
+
+ class WikiBio(datasets.GeneratorBasedBuilder):
+     """Infoboxes and first paragraph from Wikipedia biography pages."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "input_text": {
+                     "table": datasets.Sequence(
+                         {
+                             "column_header": datasets.Value("string"),
+                             "row_number": datasets.Value("int16"),
+                             "content": datasets.Value("string"),
+                         }
+                     ),
+                     "context": datasets.Value("string"),
+                 },
+                 "target_text": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=("input_text", "target_text"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URL
+         data_dir = dl_manager.download_and_extract(my_urls)
+         data_path = os.path.join(data_dir, "wikipedia-biography-dataset")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split("train"),
+                 gen_kwargs={
+                     "id_file": os.path.join(data_path, "train", "train.id"),
+                     "infobox_file": os.path.join(data_path, "train", "train.box"),
+                     "nb_lines_file": os.path.join(data_path, "train", "train.nb"),
+                     "sentences_file": os.path.join(data_path, "train", "train.sent"),
+                     "article_title_file": os.path.join(data_path, "train", "train.title"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("test"),
+                 gen_kwargs={
+                     "id_file": os.path.join(data_path, "test", "test.id"),
+                     "infobox_file": os.path.join(data_path, "test", "test.box"),
+                     "nb_lines_file": os.path.join(data_path, "test", "test.nb"),
+                     "sentences_file": os.path.join(data_path, "test", "test.sent"),
+                     "article_title_file": os.path.join(data_path, "test", "test.title"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("val"),
+                 gen_kwargs={
+                     "id_file": os.path.join(data_path, "valid", "valid.id"),
+                     "infobox_file": os.path.join(data_path, "valid", "valid.box"),
+                     "nb_lines_file": os.path.join(data_path, "valid", "valid.nb"),
+                     "sentences_file": os.path.join(data_path, "valid", "valid.sent"),
+                     "article_title_file": os.path.join(data_path, "valid", "valid.title"),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, id_file, infobox_file, nb_lines_file, sentences_file, article_title_file):
+         """Yields examples."""
+         with open(id_file, "r", encoding="utf-8") as id_src, open(
+             infobox_file, "r", encoding="utf-8"
+         ) as infobox_src, open(nb_lines_file, "r", encoding="utf-8") as nb_lines_src, open(
+             sentences_file, "r", encoding="utf-8"
+         ) as sentences_src, open(
+             article_title_file, "r", encoding="utf-8"
+         ) as article_title_src:
+             for id_, infobox, nb_lines, article_title in zip(id_src, infobox_src, nb_lines_src, article_title_src):
+                 target_text = []
+                 for _ in range(int(nb_lines)):
+                     target_text.append(sentences_src.readline())
+                 yield id_, {
+                     "input_text": {"table": _get_table(infobox), "context": article_title},
+                     "target_text": "".join(target_text),
+                 }