---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- de
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-web-nlg
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: null
pretty_name: Enriched WebNLG
dataset_info:
- config_name: en
  features:
  - name: category
    dtype: string
  - name: size
    dtype: int32
  - name: eid
    dtype: string
  - name: original_triple_sets
    sequence:
    - name: otriple_set
      sequence: string
  - name: modified_triple_sets
    sequence:
    - name: mtriple_set
      sequence: string
  - name: shape
    dtype: string
  - name: shape_type
    dtype: string
  - name: lex
    sequence:
    - name: comment
      dtype: string
    - name: lid
      dtype: string
    - name: text
      dtype: string
    - name: template
      dtype: string
    - name: sorted_triple_sets
      sequence: string
    - name: lexicalization
      dtype: string
  splits:
  - name: train
    num_bytes: 14665155
    num_examples: 6940
  - name: dev
    num_bytes: 1843787
    num_examples: 872
  - name: test
    num_bytes: 3931381
    num_examples: 1862
  download_size: 44284508
  dataset_size: 20440323
- config_name: de
  features:
  - name: category
    dtype: string
  - name: size
    dtype: int32
  - name: eid
    dtype: string
  - name: original_triple_sets
    sequence:
    - name: otriple_set
      sequence: string
  - name: modified_triple_sets
    sequence:
    - name: mtriple_set
      sequence: string
  - name: shape
    dtype: string
  - name: shape_type
    dtype: string
  - name: lex
    sequence:
    - name: comment
      dtype: string
    - name: lid
      dtype: string
    - name: text
      dtype: string
    - name: template
      dtype: string
    - name: sorted_triple_sets
      sequence: string
  splits:
  - name: train
    num_bytes: 9748193
    num_examples: 6940
  - name: dev
    num_bytes: 1238609
    num_examples: 872
  download_size: 44284508
  dataset_size: 10986802
config_names:
- de
- en
---

# Dataset Card for Enriched WebNLG

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
- **Repository:** [Enriched WebNLG Github repository](https://github.com/ThiagoCF05/webnlg)
- **Paper:** [Enriching the WebNLG corpus](https://www.aclweb.org/anthology/W18-6521/)

### Dataset Summary

The WebNLG challenge consists of mapping data to text: the training data consists of data/text pairs where the data is a
set of triples extracted from DBpedia and the text is a verbalisation of these triples. WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This enriched version provides intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.

### Supported Tasks and Leaderboards

The dataset supports an `other-rdf-to-text` task, which requires a model to take a set of RDF (Resource Description
Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural
language sentence expressing the information contained in the triples.

### Languages

The dataset is presented in two versions: English (config `en`) and German (config `de`).

## Dataset Structure

### Data Instances

A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers,
and a set of possible verbalizations for this set of triples:

```
{ 'category': 'Politician',
 'eid': 'Id10',
 'lex': {'comment': ['good', 'good', 'good'],
         'lid': ['Id1', 'Id2', 'Id3'],
         'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
                  'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
                  'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
 'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
                                           'World_War_II | commander | Chiang_Kai-shek',
                                           'Abner_W._Sibal | militaryBranch | United_States_Army']]},
 'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'],
                                          ['Abner_W._Sibal | militaryBranch | United_States_Army',
                                           'Abner_W._Sibal | battles | World_War_II',
                                           'World_War_II | commander | Chiang_Kai-shek']]},
 'shape': '(X (X) (X (X)))',
 'shape_type': 'mixed',
 'size': 3}
```
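Each entry in `original_triple_sets` and `modified_triple_sets` stores a triple as a single `subject | property | object` string. A minimal sketch of splitting such a string into its three components (the helper name is ours, not part of the dataset):

```python
def parse_triple(triple: str) -> tuple[str, str, str]:
    """Split a 'subject | property | object' string into its parts."""
    subject, prop, obj = (part.strip() for part in triple.split(" | "))
    return subject, prop, obj

triples = [
    "Abner_W._Sibal | battle | World_War_II",
    "World_War_II | commander | Chiang_Kai-shek",
    "Abner_W._Sibal | militaryBranch | United_States_Army",
]
parsed = [parse_triple(t) for t in triples]
# parsed[0] == ("Abner_W._Sibal", "battle", "World_War_II")
```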

### Data Fields

The following fields can be found in the instances:

- `category`: the category of the DBpedia entities present in the RDF triples.
- `eid`: an example ID, only unique per split per category.
- `size`: number of RDF triples in the set.
- `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape`
  is a string representation of the tree with nested parentheses where X is a node (
  see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the
  subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training
  set or not.
- `lex`: the lexicalizations, with:
    - `text`: the text to be predicted.
    - `lid`: a lexicalization ID, unique per example.
    - `comment`: whether crowd workers rated the lexicalization `good` or `bad`.
    - `template`: a delexicalized version of the text, with entities replaced by placeholders.
    - `sorted_triple_sets`: the triple set ordered as it is realized in the text.
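As an illustration of how the `shape_type` labels relate to the triples, here is a rough heuristic classifier (our own sketch, not the official labelling code):

```python
def shape_type(triples):
    """Rough heuristic for the shape_type labels.

    'sibling' if triples share a subject, 'chain' if some object is the
    subject of another triple, 'mixed' if both patterns occur.
    """
    subjects = [s for s, _, _ in triples]
    objects = {o for _, _, o in triples}
    has_sibling = len(subjects) != len(set(subjects))
    has_chain = bool(objects & set(subjects))
    if has_chain and has_sibling:
        return "mixed"
    if has_chain:
        return "chain"
    return "sibling"

example = [
    ("Abner_W._Sibal", "battle", "World_War_II"),
    ("World_War_II", "commander", "Chiang_Kai-shek"),
    ("Abner_W._Sibal", "militaryBranch", "United_States_Army"),
]
# shape_type(example) == "mixed", matching the instance shown above
```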

### Data Splits

The `en` version has `train` (6,940 examples), `dev` (872 examples) and `test` (1,862 examples) splits; the `de` version has only `train` (6,940 examples) and `dev` (872 examples).

## Dataset Creation

### Curation Rationale

Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources, such as the E2E (Novikova et al., 2017), ROTOWIRE (Wiseman et al., 2017) and WebNLG (Gardent et al., 2017a,b) corpora. Although these recent releases are highly valuable resources for the NLG community in general, all of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available, so researchers cannot straightforwardly use them to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more generally, are only available in English, limiting the development of NLG applications for other languages.


### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1`
licenses.

### Citation Information

- If you use the Enriched WebNLG corpus, cite:

```
@InProceedings{ferreiraetal2018,
  author = 	"Castro Ferreira, Thiago
		and Moussallem, Diego
		and Wubben, Sander
		and Krahmer, Emiel",
  title = 	"Enriching the WebNLG corpus",
  booktitle = 	"Proceedings of the 11th International Conference on Natural Language Generation",
  year = 	"2018",
  series = {INLG'18},
  publisher = 	"Association for Computational Linguistics",
  address = 	"Tilburg, The Netherlands",
}

@inproceedings{web_nlg,
  author    = {Claire Gardent and
               Anastasia Shimorina and
               Shashi Narayan and
               Laura Perez{-}Beltrachini},
  editor    = {Regina Barzilay and
               Min{-}Yen Kan},
  title     = {Creating Training Corpora for {NLG} Micro-Planners},
  booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
               Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
               1: Long Papers},
  pages     = {179--188},
  publisher = {Association for Computational Linguistics},
  year      = {2017},
  url       = {https://doi.org/10.18653/v1/P17-1017},
  doi       = {10.18653/v1/P17-1017}
}
```

### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.