Commit 14e81fd (1 parent: 346a684), committed by system (HF staff)

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (3)
  1. README.md +166 -0
  2. common_gen.py +17 -6
  3. dataset_infos.json +1 -1
README.md ADDED
@@ -0,0 +1,166 @@
+ ---
+ ---
+
+ # Dataset Card for "common_gen"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits Sample Size](#data-splits-sample-size)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## [Dataset Description](#dataset-description)
+
+ - **Homepage:** [https://inklab.usc.edu/CommonGen/index.html](https://inklab.usc.edu/CommonGen/index.html)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 1.76 MB
+ - **Size of the generated dataset:** 6.88 MB
+ - **Total amount of disk used:** 8.64 MB
+
+ ### [Dataset Summary](#dataset-summary)
+
+ CommonGen is a constrained text generation task, associated with a benchmark dataset,
+ to explicitly test machines for the ability of generative commonsense reasoning. Given
+ a set of common concepts, the task is to generate a coherent sentence describing an
+ everyday scenario using these concepts.
+
+ CommonGen is challenging because it inherently requires 1) relational reasoning using
+ background commonsense knowledge, and 2) compositional generalization ability to work
+ on unseen concept combinations. Our dataset, constructed through a combination of
+ crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and
+ 50k sentences in total.
+
+ ### [Supported Tasks](#supported-tasks)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Languages](#languages)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Dataset Structure](#dataset-structure)
+
+ We show detailed information for up to 5 configurations of the dataset.
+
+ ### [Data Instances](#data-instances)
+
+ #### default
+
+ - **Size of downloaded dataset files:** 1.76 MB
+ - **Size of the generated dataset:** 6.88 MB
+ - **Total amount of disk used:** 8.64 MB
+
+ An example from the 'train' split looks as follows:
+ ```
+ {
+     "concept_set_idx": 0,
+     "concepts": ["ski", "mountain", "skier"],
+     "target": "Three skiers are skiing on a snowy mountain."
+ }
+ ```
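A minimal sketch of reading such instances with the `datasets` library (the exact printed values depend on the published ordering of the data):

```python
from datasets import load_dataset

# Load the default (and only) configuration of common_gen from the Hub.
dataset = load_dataset("common_gen")

# Each record pairs an indexed concept set with one human-written target sentence.
example = dataset["train"][0]
print(example["concept_set_idx"], example["concepts"], example["target"])
```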
+
+ ### [Data Fields](#data-fields)
+
+ The data fields are the same among all splits.
+
+ #### default
+ - `concept_set_idx`: an `int32` feature.
+ - `concepts`: a `list` of `string` features.
+ - `target`: a `string` feature.
+
+ ### [Data Splits Sample Size](#data-splits-sample-size)
+
+ | name  |train|validation|test|
+ |-------|----:|---------:|---:|
+ |default|67389|      4018|1497|
+
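In the same spirit, a short check (again only a sketch) that the loaded feature types and split sizes line up with the field list and the table above:

```python
from datasets import load_dataset

dataset = load_dataset("common_gen")

# Feature types should match the field list: concept_set_idx -> int32,
# concepts -> sequence of strings, target -> string.
print(dataset["train"].features)

# Split sizes should match the table above: 67389 / 4018 / 1497 examples.
for split in ("train", "validation", "test"):
    print(split, dataset[split].num_rows)
```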
+
+ ## [Dataset Creation](#dataset-creation)
+
+ ### [Curation Rationale](#curation-rationale)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Source Data](#source-data)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Annotations](#annotations)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Considerations for Using the Data](#considerations-for-using-the-data)
+
+ ### [Social Impact of Dataset](#social-impact-of-dataset)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Discussion of Biases](#discussion-of-biases)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Other Known Limitations](#other-known-limitations)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## [Additional Information](#additional-information)
+
+ ### [Dataset Curators](#dataset-curators)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Licensing Information](#licensing-information)
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### [Citation Information](#citation-information)
+
+ ```bib
+ @inproceedings{lin-etal-2020-commongen,
+     title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
+     author = "Lin, Bill Yuchen and
+       Zhou, Wangchunshu and
+       Shen, Ming and
+       Zhou, Pei and
+       Bhagavatula, Chandra and
+       Choi, Yejin and
+       Ren, Xiang",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
+     doi = "10.18653/v1/2020.findings-emnlp.165",
+     pages = "1823--1840"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@JetRunner](https://github.com/JetRunner), [@yuchenlin](https://github.com/yuchenlin), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
common_gen.py CHANGED
@@ -10,12 +10,23 @@ import datasets
 random.seed(42) # This is important, to ensure the same order for concept sets as the official script.
 
 _CITATION = """\
- @article{lin2019comgen,
-     author = {Bill Yuchen Lin and Ming Shen and Wangchunshu Zhou and Pei Zhou and Chandra Bhagavatula and Yejin Choi and Xiang Ren},
-     title = {CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning},
-     journal = {CoRR},
-     volume = {abs/1911.03705},
-     year = {2019}
+ @inproceedings{lin-etal-2020-commongen,
+     title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
+     author = "Lin, Bill Yuchen and
+       Zhou, Wangchunshu and
+       Shen, Ming and
+       Zhou, Pei and
+       Bhagavatula, Chandra and
+       Choi, Yejin and
+       Ren, Xiang",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
+     doi = "10.18653/v1/2020.findings-emnlp.165",
+     pages = "1823--1840"
 }
 """
 
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"default": {"description": "CommonGen is a constrained text generation task, associated with a benchmark dataset, \nto explicitly test machines for the ability of generative commonsense reasoning. Given \na set of common concepts; the task is to generate a coherent sentence describing an \neveryday scenario using these concepts.\n\nCommonGen is challenging because it inherently requires 1) relational reasoning using \nbackground commonsense knowledge, and 2) compositional generalization ability to work \non unseen concept combinations. Our dataset, constructed through a combination of \ncrowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and \n50k sentences in total.\n", "citation": "@article{lin2019comgen,\n author = {Bill Yuchen Lin and Ming Shen and Wangchunshu Zhou and Pei Zhou and Chandra Bhagavatula and Yejin Choi and Xiang Ren},\n title = {CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning},\n journal = {CoRR},\n volume = {abs/1911.03705},\n year = {2019}\n}\n", "homepage": "https://inklab.usc.edu/CommonGen/index.html", "license": "", "features": {"concept_set_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "concepts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "target": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "concepts", "output": "target"}, "builder_name": "common_gen", "config_name": "default", "version": {"version_str": "2020.5.30", "description": null, "datasets_version_to_prepare": null, "major": 2020, "minor": 5, "patch": 30}, "splits": {"train": {"name": "train", "num_bytes": 6724250, "num_examples": 67389, "dataset_name": "common_gen"}, "validation": {"name": "validation", "num_bytes": 408752, "num_examples": 4018, "dataset_name": "common_gen"}, "test": {"name": "test", "num_bytes": 77530, "num_examples": 1497, "dataset_name": "common_gen"}}, "download_checksums": {"https://storage.googleapis.com/huggingface-nlp/datasets/common_gen/commongen_data.zip": {"num_bytes": 1845699, "checksum": "a3f19ca607da4e874fc5f2dd1f53c13a6788a497f883d74cc3f9a1fcda44c594"}}, "download_size": 1845699, "post_processing_size": null, "dataset_size": 7210532, "size_in_bytes": 9056231}}
 
+ {"default": {"description": "CommonGen is a constrained text generation task, associated with a benchmark dataset, \nto explicitly test machines for the ability of generative commonsense reasoning. Given \na set of common concepts; the task is to generate a coherent sentence describing an \neveryday scenario using these concepts.\n\nCommonGen is challenging because it inherently requires 1) relational reasoning using \nbackground commonsense knowledge, and 2) compositional generalization ability to work \non unseen concept combinations. Our dataset, constructed through a combination of \ncrowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and \n50k sentences in total.\n", "citation": "@inproceedings{lin-etal-2020-commongen,\n author = {Bill Yuchen Lin and Wangchunshu Zhou and Ming Shen and Pei Zhou and Chandra Bhagavatula and Yejin Choi and Xiang Ren},\n title = {{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning},\n booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},\n year = {2020}\n}\n", "homepage": "https://inklab.usc.edu/CommonGen/index.html", "license": "", "features": {"concept_set_idx": {"dtype": "int32", "id": null, "_type": "Value"}, "concepts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "target": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "concepts", "output": "target"}, "builder_name": "common_gen", "config_name": "default", "version": {"version_str": "2020.5.30", "description": null, "datasets_version_to_prepare": null, "major": 2020, "minor": 5, "patch": 30}, "splits": {"train": {"name": "train", "num_bytes": 6724250, "num_examples": 67389, "dataset_name": "common_gen"}, "validation": {"name": "validation", "num_bytes": 408752, "num_examples": 4018, "dataset_name": "common_gen"}, "test": {"name": "test", "num_bytes": 77530, "num_examples": 1497, "dataset_name": "common_gen"}}, "download_checksums": {"https://storage.googleapis.com/huggingface-nlp/datasets/common_gen/commongen_data.zip": {"num_bytes": 1845699, "checksum": "a3f19ca607da4e874fc5f2dd1f53c13a6788a497f883d74cc3f9a1fcda44c594"}}, "download_size": 1845699, "post_processing_size": null, "dataset_size": 7210532, "size_in_bytes": 9056231}}