Commit 355e19b (0 parents)
Committed by system (HF staff)

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +170 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. jfleg.py +147 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - monolingual
+ - other-language-learner
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|other-GUG-grammaticality-judgements
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - conditional-text-generation-other-grammatical-error-correction
+ ---
+
+ # Dataset Card for JFLEG
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Github](https://github.com/keisks/jfleg)
+ - **Repository:** [Github](https://github.com/keisks/jfleg)
+ - **Paper:** [Napoles et al., 2017](https://www.aclweb.org/anthology/E17-2037/)
+ - **Leaderboard:** [Leaderboard](https://github.com/keisks/jfleg#leader-board-published-results)
+ - **Point of Contact:** Courtney Napoles, Keisuke Sakaguchi
+
+ ### Dataset Summary
+ JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus. It is a gold standard benchmark for developing and evaluating GEC systems with respect to fluency (extent to which a text is native-sounding) as well as grammaticality. For each source document, there are four human-written corrections.
+
+ ### Supported Tasks and Leaderboards
+ Grammatical error correction.
+
+ ### Languages
+ English (native as well as L2 writers)
+
+ ## Dataset Structure
+
+ ### Data Instances
+ Each instance contains a source sentence and four corrections. For example:
+ ```python
+ {
+     'sentence': "They are moved by solar energy .",
+     'corrections': [
+         "They are moving by solar energy .",
+         "They are moved by solar energy .",
+         "They are moved by solar energy .",
+         "They are propelled by solar energy ."
+     ]
+ }
+ ```
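+
+ The same structure can be inspected with the `datasets` library. A minimal usage sketch (it assumes the Hub identifier `jfleg`; the `validation` and `test` split names follow the loading script in this commit):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load both splits and look at one validation example.
+ dataset = load_dataset("jfleg")
+ example = dataset["validation"][0]
+ print(example["sentence"])
+ print(example["corrections"])
+ ```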
+
+ ### Data Fields
+ - sentence: original sentence written by an English learner
+ - corrections: corrected versions by human annotators. The order of the annotations is consistent (e.g., the first correction is always written by annotator "ref0").
+
+ ### Data Splits
+ - This dataset contains 1501 examples in total and comprises a dev split and a test split.
+ - There are 754 and 747 source sentences for dev and test, respectively.
+ - Each sentence has 4 corresponding corrected versions.
+
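+ Systems on the JFLEG leaderboard are scored against all four references with the GLEU metric. Purely as an illustration of how a split and its references fit together (not a GEC metric), the sketch below scores a "leave the sentence unchanged" baseline with a standard-library similarity:
+
+ ```python
+ import difflib
+
+ from datasets import load_dataset
+
+ validation = load_dataset("jfleg", split="validation")
+
+ # "Do nothing" baseline: keep each source sentence and take its best
+ # character-level similarity against the four human references.
+ scores = []
+ for example in validation:
+     if not example["sentence"].strip():
+         continue  # skip an empty trailing entry, if the split contains one
+     best = max(
+         difflib.SequenceMatcher(None, example["sentence"], reference).ratio()
+         for reference in example["corrections"]
+     )
+     scores.append(best)
+
+ print("mean similarity:", sum(scores) / len(scores))
+ ```
+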
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+ This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
+
+ ### Citation Information
+ This benchmark was proposed by [Napoles et al., 2017](https://www.aclweb.org/anthology/E17-2037/).
+
+ ```
+ @InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
+   author    = {Napoles, Courtney and Sakaguchi, Keisuke and Tetreault, Joel},
+   title     = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
+   booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
+   month     = {April},
+   year      = {2017},
+   address   = {Valencia, Spain},
+   publisher = {Association for Computational Linguistics},
+   pages     = {229--234},
+   url       = {http://www.aclweb.org/anthology/E17-2037}
+ }
+
+ @InProceedings{heilman-EtAl:2014:P14-2,
+   author    = {Heilman, Michael and Cahill, Aoife and Madnani, Nitin and Lopez, Melissa and Mulholland, Matthew and Tetreault, Joel},
+   title     = {Predicting Grammaticality on an Ordinal Scale},
+   booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
+   month     = {June},
+   year      = {2014},
+   address   = {Baltimore, Maryland},
+   publisher = {Association for Computational Linguistics},
+   pages     = {174--180},
+   url       = {http://www.aclweb.org/anthology/P14-2029}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus.\nIt is a gold standard benchmark for developing and evaluating GEC systems with respect to\nfluency (extent to which a text is native-sounding) as well as grammaticality.\n\nFor each source document, there are four human-written corrections (ref0 to ref3).\n", "citation": "@InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,\n author = {Napoles, Courtney\n and Sakaguchi, Keisuke\n and Tetreault, Joel},\n title = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},\n booktitle = {Proceedings of the 15th Conference of the European Chapter of the\n Association for Computational Linguistics: Volume 2, Short Papers},\n month = {April},\n year = {2017},\n address = {Valencia, Spain},\n publisher = {Association for Computational Linguistics},\n pages = {229--234},\n url = {http://www.aclweb.org/anthology/E17-2037}\n}\n@InProceedings{heilman-EtAl:2014:P14-2,\n author = {Heilman, Michael\n and Cahill, Aoife\n and Madnani, Nitin\n and Lopez, Melissa\n and Mulholland, Matthew\n and Tetreault, Joel},\n title = {Predicting Grammaticality on an Ordinal Scale},\n booktitle = {Proceedings of the 52nd Annual Meeting of the\n Association for Computational Linguistics (Volume 2: Short Papers)},\n month = {June},\n year = {2014},\n address = {Baltimore, Maryland},\n publisher = {Association for Computational Linguistics},\n pages = {174--180},\n url = {http://www.aclweb.org/anthology/P14-2029}\n}\n", "homepage": "https://github.com/keisks/jfleg", "license": "CC BY-NC-SA 4.0", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}, "corrections": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "jfleg", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 379991, "num_examples": 755, "dataset_name": "jfleg"}, "test": {"name": "test", "num_bytes": 379711, "num_examples": 748, "dataset_name": "jfleg"}}, "download_checksums": {"https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.src": {"num_bytes": 72726, "checksum": "4a0e8b86d18a1058460ff0a592dac1ba68986d135256efbd27e997ac43f295f8"}, "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref0": {"num_bytes": 73216, "checksum": "adea6287c6e2240b7777e63cd56f8e228e742bbfb42c5152bc0bd2bc91f4e53e"}, "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref1": {"num_bytes": 73129, "checksum": "d40d56ec7468ddab03fdcca97065ab3f9d391d749dbc7097b7c777a19ce4242e"}, "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref2": {"num_bytes": 73394, "checksum": "b070691d633e0c4143d96ba21299ae71cb126086517d2970df47420842067793"}, "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref3": {"num_bytes": 73164, "checksum": "9187fd834693fa77d07957991282d32d61ff84a207c25cbfab318c871bacdbc4"}, "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.src": {"num_bytes": 72684, "checksum": "893db119162487aa7f956b65978453576919e6797cd6c1955f93b7a8b9f4bbd8"}, "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref0": {"num_bytes": 73090, "checksum": "875953280a3ea1dea2827337b1778c0105f0c0aa79f2517a6e0e42db5e5e170c"}, "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref1": {"num_bytes": 
73325, "checksum": "190d3398f2765f54a39b5489d1e96c483412a656086c731f8712ad0591087d80"}, "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref2": {"num_bytes": 73018, "checksum": "0e3c6abe934ccd16c9dffb2fd889d6f55afc3ad13a63c1e148c720bb4e99046b"}, "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref3": {"num_bytes": 73365, "checksum": "19f49de6eff813b26505ecf756c20dc301aeb80696696b01ca950298f6e58441"}}, "download_size": 731111, "post_processing_size": null, "dataset_size": 759702, "size_in_bytes": 1490813}}
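
The split sizes and checksums recorded above can be read back with the standard library. A small sketch (it assumes a local copy of `dataset_infos.json` in the working directory):

```python
import json

# Print the number of examples recorded for each split of the "default" config.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for split_name, split_info in infos["default"]["splits"].items():
    print(split_name, split_info["num_examples"])
```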
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e68f341e896df513b6f5df1c388b2d8348674e68e41c1e4d0f35a6bc64c9a1a7
+ size 4859
jfleg.py ADDED
@@ -0,0 +1,147 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """JFLEG dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
+   author    = {Napoles, Courtney
+                and Sakaguchi, Keisuke
+                and Tetreault, Joel},
+   title     = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
+   booktitle = {Proceedings of the 15th Conference of the European Chapter of the
+                Association for Computational Linguistics: Volume 2, Short Papers},
+   month     = {April},
+   year      = {2017},
+   address   = {Valencia, Spain},
+   publisher = {Association for Computational Linguistics},
+   pages     = {229--234},
+   url       = {http://www.aclweb.org/anthology/E17-2037}
+ }
+ @InProceedings{heilman-EtAl:2014:P14-2,
+   author    = {Heilman, Michael
+                and Cahill, Aoife
+                and Madnani, Nitin
+                and Lopez, Melissa
+                and Mulholland, Matthew
+                and Tetreault, Joel},
+   title     = {Predicting Grammaticality on an Ordinal Scale},
+   booktitle = {Proceedings of the 52nd Annual Meeting of the
+                Association for Computational Linguistics (Volume 2: Short Papers)},
+   month     = {June},
+   year      = {2014},
+   address   = {Baltimore, Maryland},
+   publisher = {Association for Computational Linguistics},
+   pages     = {174--180},
+   url       = {http://www.aclweb.org/anthology/P14-2029}
+ }
+ """
+
+ _DESCRIPTION = """\
+ JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus.
+ It is a gold standard benchmark for developing and evaluating GEC systems with respect to
+ fluency (extent to which a text is native-sounding) as well as grammaticality.
+
+ For each source document, there are four human-written corrections (ref0 to ref3).
+ """
+
+ _HOMEPAGE = "https://github.com/keisks/jfleg"
+
+ _LICENSE = "CC BY-NC-SA 4.0"
+
+ _URLs = {
+     "dev": {
+         "src": "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.src",
+         "ref0": "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref0",
+         "ref1": "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref1",
+         "ref2": "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref2",
+         "ref3": "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev.ref3",
+     },
+     "test": {
+         "src": "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.src",
+         "ref0": "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref0",
+         "ref1": "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref1",
+         "ref2": "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref2",
+         "ref3": "https://raw.githubusercontent.com/keisks/jfleg/master/test/test.ref3",
+     },
+ }
+
+
+ class Jfleg(datasets.GeneratorBasedBuilder):
+     """JFLEG (JHU FLuency-Extended GUG) grammatical error correction dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {"sentence": datasets.Value("string"), "corrections": datasets.Sequence(datasets.Value("string"))}
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         downloaded_dev = dl_manager.download_and_extract(_URLs["dev"])
+         downloaded_test = dl_manager.download_and_extract(_URLs["test"])
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": downloaded_dev,
+                     "split": "dev",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": downloaded_test, "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+
+         source_file = filepath["src"]
+         with open(source_file, encoding="utf-8") as f:
+             source_sentences = f.read().split("\n")
+             num_source = len(source_sentences)
+
+         corrections = []
+         for n in range(0, 4):
+             correction_file = filepath["ref{n}".format(n=n)]
+             with open(correction_file, encoding="utf-8") as f:
+                 correction_sentences = f.read().split("\n")
+                 num_correction = len(correction_sentences)
+
+                 assert len(correction_sentences) == len(
+                     source_sentences
+                 ), "Sizes do not match: {ns} vs {nr} for {sf} vs {cf}.".format(
+                     ns=num_source, nr=num_correction, sf=source_file, cf=correction_file
+                 )
+                 corrections.append(correction_sentences)
+
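+         # Transpose the four reference lists: corrected_sentences[i] holds the four corrections of source_sentences[i].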
+         corrected_sentences = list(zip(*corrections))
+         for id_, source_sentence in enumerate(source_sentences):
+             yield id_, {"sentence": source_sentence, "corrections": corrected_sentences[id_]}
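
The pairing done in `_generate_examples` can also be reproduced outside the builder. A small illustrative sketch using only the standard library and the raw dev files listed in `_URLs` (not part of the loading script):

```python
import urllib.request

BASE = "https://raw.githubusercontent.com/keisks/jfleg/master/dev/dev"


def fetch(suffix):
    # Download one raw JFLEG dev file and split it into sentences (one per line).
    with urllib.request.urlopen("{base}.{suffix}".format(base=BASE, suffix=suffix)) as response:
        return response.read().decode("utf-8").split("\n")


sources = fetch("src")
references = [fetch("ref{n}".format(n=n)) for n in range(4)]

# Same transposition as the builder: corrections[i] holds the four corrections of sources[i].
corrections = list(zip(*references))
print(sources[0])
print(corrections[0])
```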