system HF staff committed on
Commit
7713d85
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,178 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - apache-2-0
+ multilinguality:
+ - monolingual
+ size_categories:
+   raw:
+   - 100K<n<1M
+   simplified:
+   - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ - multi-label-classification
+ - text-classification-other-emotion
+ ---
+
+ # Dataset Card for GoEmotions
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/google-research/google-research/tree/master/goemotions
+ - **Repository:** https://github.com/google-research/google-research/tree/master/goemotions
+ - **Paper:** https://arxiv.org/abs/2005.00547
+ - **Leaderboard:**
+ - **Point of Contact:** [Dora Demszky](https://nlp.stanford.edu/~ddemszky/index.html)
+
+ ### Dataset Summary
+
+ The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
+ Both the raw data and a smaller, simplified version with predefined train/validation/test splits are included.
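+
+ Both configurations can be loaded by name with the `datasets` library. The snippet below is a minimal sketch
+ (assuming a `datasets>=1.2.0` installation; it relies only on the configuration names defined for this dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # Simplified configuration: predefined train/validation/test splits,
+ # with labels stored as lists of class ids.
+ simplified = load_dataset("go_emotions", "simplified")
+ print(simplified["train"][0])
+
+ # Raw configuration: a single train split with one row per rater annotation
+ # plus additional Reddit metadata columns.
+ raw = load_dataset("go_emotions", "raw")
+ print(raw["train"][0])
+ ```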
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset is intended for multi-class, multi-label emotion classification.
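+
+ For the multi-label setting, the `labels` lists in the simplified configuration can be expanded into fixed-size
+ binary vectors. A rough sketch (the `label_vec` column name is illustrative, not part of the dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("go_emotions", "simplified")
+ num_classes = ds["train"].features["labels"].feature.num_classes  # 28 = 27 emotions + neutral
+
+ def to_multi_hot(example):
+     # One binary indicator per class, usable as a multi-label target.
+     vec = [0] * num_classes
+     for label_id in example["labels"]:
+         vec[label_id] = 1
+     example["label_vec"] = vec
+     return example
+
+ ds = ds.map(to_multi_hot)
+ ```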
+
+ ### Languages
+
+ The data is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance is a Reddit comment with a corresponding ID and one or more emotion annotations (or Neutral).
+
+ ### Data Fields
+
+ The simplified configuration includes:
+ - `text`: the Reddit comment
+ - `labels`: the emotion annotations
+ - `id`: unique identifier of the comment (can be used to look up the entry in the raw dataset)
+
+ In addition to the above, the raw data includes:
+ - `author`: The Reddit username of the comment's author.
+ - `subreddit`: The subreddit that the comment belongs to.
+ - `link_id`: The link ID of the comment.
+ - `parent_id`: The parent ID of the comment.
+ - `created_utc`: The timestamp of the comment.
+ - `rater_id`: The unique ID of the annotator.
+ - `example_very_unclear`: Whether the annotator marked the example as being very unclear or difficult to label (in this
+ case they did not choose any emotion labels).
+
+ In the raw data, labels are listed as their own columns with binary 0/1 entries rather than as a list of ids as in
+ the simplified data.
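+
+ The integer ids in `labels` can be mapped back to emotion names through the `ClassLabel` feature attached to the
+ simplified configuration; a short sketch:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("go_emotions", "simplified")
+ class_label = ds["train"].features["labels"].feature  # ClassLabel over the 28 class names
+
+ example = ds["train"][0]
+ print(example["text"])
+ print([class_label.int2str(i) for i in example["labels"]])
+ ```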
+
+ ### Data Splits
+
+ The simplified data includes a set of train/val/test splits with 43,410, 5,426, and 5,427 examples respectively.
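+
+ These splits correspond to the `DatasetDict` returned for the simplified configuration, so the sizes can be
+ verified directly after loading:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("go_emotions", "simplified")
+ print({split: ds[split].num_rows for split in ds})
+ # Expected: {'train': 43410, 'validation': 5426, 'test': 5427}
+ ```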
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the paper abstract:
+
+ > Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
+ detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
+ fine-grained typology, adaptable to multiple downstream tasks.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Data was collected from Reddit comments via a variety of automated methods discussed in Section 3.1 of the paper.
+
+ #### Who are the source language producers?
+
+ English-speaking Reddit users.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ Annotations were produced by 3 English-speaking crowdworkers in India.
+
+ ### Personal and Sensitive Information
+
+ This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
+ are typically dissociated from personal real-world identities, this is not always the case. It may therefore be
+ possible to discover the identities of the individuals who created this content in some cases.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ Emotion detection is a worthwhile problem that can potentially lead to improvements such as better human/computer
+ interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
+ to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
+ pricing, and student attentiveness (see
+ [this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
+
+ ### Discussion of Biases
+
+ From the authors' GitHub page:
+
+ > Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
+
+ ### Licensing Information
+
+ The GitHub repository which houses this dataset has an
+ [Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
+
+ ### Citation Information
+
+ @inproceedings{demszky2020goemotions,
+  author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
+  booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
+  title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
+  year = {2020}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"raw": {"description": "The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.\nThe emotion categories are admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire,\ndisappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness,\noptimism, pride, realization, relief, remorse, sadness, surprise.\n", "citation": "@inproceedings{demszky2020goemotions,\n author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},\n booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},\n title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},\n year = {2020}\n}\n", "homepage": "https://github.com/google-research/google-research/tree/master/goemotions", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}, "author": {"dtype": "string", "id": null, "_type": "Value"}, "subreddit": {"dtype": "string", "id": null, "_type": "Value"}, "link_id": {"dtype": "string", "id": null, "_type": "Value"}, "parent_id": {"dtype": "string", "id": null, "_type": "Value"}, "created_utc": {"dtype": "float32", "id": null, "_type": "Value"}, "rater_id": {"dtype": "int32", "id": null, "_type": "Value"}, "example_very_unclear": {"dtype": "bool", "id": null, "_type": "Value"}, "admiration": {"dtype": "int32", "id": null, "_type": "Value"}, "amusement": {"dtype": "int32", "id": null, "_type": "Value"}, "anger": {"dtype": "int32", "id": null, "_type": "Value"}, "annoyance": {"dtype": "int32", "id": null, "_type": "Value"}, "approval": {"dtype": "int32", "id": null, "_type": "Value"}, "caring": {"dtype": "int32", "id": null, "_type": "Value"}, "confusion": {"dtype": "int32", "id": null, "_type": "Value"}, "curiosity": {"dtype": "int32", "id": null, "_type": "Value"}, "desire": {"dtype": "int32", "id": null, "_type": "Value"}, "disappointment": {"dtype": "int32", "id": null, "_type": "Value"}, "disapproval": {"dtype": "int32", "id": null, "_type": "Value"}, "disgust": {"dtype": "int32", "id": null, "_type": "Value"}, "embarrassment": {"dtype": "int32", "id": null, "_type": "Value"}, "excitement": {"dtype": "int32", "id": null, "_type": "Value"}, "fear": {"dtype": "int32", "id": null, "_type": "Value"}, "gratitude": {"dtype": "int32", "id": null, "_type": "Value"}, "grief": {"dtype": "int32", "id": null, "_type": "Value"}, "joy": {"dtype": "int32", "id": null, "_type": "Value"}, "love": {"dtype": "int32", "id": null, "_type": "Value"}, "nervousness": {"dtype": "int32", "id": null, "_type": "Value"}, "optimism": {"dtype": "int32", "id": null, "_type": "Value"}, "pride": {"dtype": "int32", "id": null, "_type": "Value"}, "realization": {"dtype": "int32", "id": null, "_type": "Value"}, "relief": {"dtype": "int32", "id": null, "_type": "Value"}, "remorse": {"dtype": "int32", "id": null, "_type": "Value"}, "sadness": {"dtype": "int32", "id": null, "_type": "Value"}, "surprise": {"dtype": "int32", "id": null, "_type": "Value"}, "neutral": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "go_emotions", "config_name": "raw", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 55343630, "num_examples": 211225, "dataset_name": "go_emotions"}}, 
"download_checksums": {"https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_1.csv": {"num_bytes": 14174600, "checksum": "cac049036bad5d68d1081f72b65f2cc51e4df82af05e3e22cfa747051cac1af3"}, "https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_2.csv": {"num_bytes": 14173154, "checksum": "f699ecc5aa425c1720c1d02475f1e41815244b680bd75b282eb770d2c76cd84d"}, "https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_3.csv": {"num_bytes": 14395164, "checksum": "467f1e7191af00f2e76cc7f425885c2dc304bea8aff284b10e8c460d22f2e1af"}}, "download_size": 42742918, "post_processing_size": null, "dataset_size": 55343630, "size_in_bytes": 98086548}, "simplified": {"description": "The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.\nThe emotion categories are admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire,\ndisappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness,\noptimism, pride, realization, relief, remorse, sadness, surprise.\n", "citation": "@inproceedings{demszky2020goemotions,\n author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},\n booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},\n title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},\n year = {2020}\n}\n", "homepage": "https://github.com/google-research/google-research/tree/master/goemotions", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "labels": {"feature": {"num_classes": 28, "names": ["admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "optimism", "pride", "realization", "relief", "remorse", "sadness", "surprise", "neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "go_emotions", "config_name": "simplified", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4224198, "num_examples": 43410, "dataset_name": "go_emotions"}, "validation": {"name": "validation", "num_bytes": 527131, "num_examples": 5426, "dataset_name": "go_emotions"}, "test": {"name": "test", "num_bytes": 524455, "num_examples": 5427, "dataset_name": "go_emotions"}}, "download_checksums": {"https://github.com/google-research/google-research/raw/master/goemotions/data/train.tsv": {"num_bytes": 3519053, "checksum": "1c254a142be5c00e80d819b9ae1bbd36d94b2eeb8f4b1271846508d57e57d9c5"}, "https://github.com/google-research/google-research/raw/master/goemotions/data/dev.tsv": {"num_bytes": 439059, "checksum": "575489c079c9de1097062a01738f998590d6b7ead66dd1c9fd1d2ba01fd8bc62"}, "https://github.com/google-research/google-research/raw/master/goemotions/data/test.tsv": {"num_bytes": 436706, "checksum": "0587b2dd8b27b97352adbfc3fb083d46005c8946657fdc2b1ca8b1cc7f1f8be4"}}, "download_size": 4394818, "post_processing_size": null, "dataset_size": 5275784, "size_in_bytes": 9670602}}
dummy/raw/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e80dbfe4f76930c93a8e3bdba550993ced7fbf4e60b896dee33abe3312a0863b
+ size 2228
dummy/simplified/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d77811d567cb698d30832d4277a8851baa12e35c76491cb45841af2219df389c
+ size 1363
go_emotions.py ADDED
@@ -0,0 +1,159 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """GoEmotions dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
+ The emotion categories are admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire,
+ disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness,
+ optimism, pride, realization, relief, remorse, sadness, surprise.
+ """
+
+ _CITATION = """\
+ @inproceedings{demszky2020goemotions,
+  author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
+  booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
+  title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
+  year = {2020}
+ }
+ """
+
+ _CLASS_NAMES = [
+     "admiration",
+     "amusement",
+     "anger",
+     "annoyance",
+     "approval",
+     "caring",
+     "confusion",
+     "curiosity",
+     "desire",
+     "disappointment",
+     "disapproval",
+     "disgust",
+     "embarrassment",
+     "excitement",
+     "fear",
+     "gratitude",
+     "grief",
+     "joy",
+     "love",
+     "nervousness",
+     "optimism",
+     "pride",
+     "realization",
+     "relief",
+     "remorse",
+     "sadness",
+     "surprise",
+     "neutral",
+ ]
+
+ _BASE_DOWNLOAD_URL = "https://github.com/google-research/google-research/raw/master/goemotions/data/"
+ _RAW_DOWNLOAD_URLS = [
+     "https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_1.csv",
+     "https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_2.csv",
+     "https://storage.googleapis.com/gresearch/goemotions/data/full_dataset/goemotions_3.csv",
+ ]
+ _HOMEPAGE = "https://github.com/google-research/google-research/tree/master/goemotions"
+
+
+ class GoEmotionsConfig(datasets.BuilderConfig):
+     @property
+     def features(self):
+         if self.name == "simplified":
+             return {
+                 "text": datasets.Value("string"),
+                 "labels": datasets.Sequence(datasets.ClassLabel(names=_CLASS_NAMES)),
+                 "id": datasets.Value("string"),
+             }
+         elif self.name == "raw":
+             d = {
+                 "text": datasets.Value("string"),
+                 "id": datasets.Value("string"),
+                 "author": datasets.Value("string"),
+                 "subreddit": datasets.Value("string"),
+                 "link_id": datasets.Value("string"),
+                 "parent_id": datasets.Value("string"),
+                 "created_utc": datasets.Value("float"),
+                 "rater_id": datasets.Value("int32"),
+                 "example_very_unclear": datasets.Value("bool"),
+             }
+             d.update({label: datasets.Value("int32") for label in _CLASS_NAMES})
+             return d
+
+
+ class GoEmotions(datasets.GeneratorBasedBuilder):
+     """GoEmotions dataset"""
+
+     BUILDER_CONFIGS = [
+         GoEmotionsConfig(
+             name="raw",
+         ),
+         GoEmotionsConfig(
+             name="simplified",
+         ),
+     ]
+     BUILDER_CONFIG_CLASS = GoEmotionsConfig
+     DEFAULT_CONFIG_NAME = "simplified"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(self.config.features),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         if self.config.name == "raw":
+             paths = dl_manager.download_and_extract(_RAW_DOWNLOAD_URLS)
+             return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": paths, "raw": True})]
+         if self.config.name == "simplified":
+             train_path = dl_manager.download_and_extract(os.path.join(_BASE_DOWNLOAD_URL, "train.tsv"))
+             dev_path = dl_manager.download_and_extract(os.path.join(_BASE_DOWNLOAD_URL, "dev.tsv"))
+             test_path = dl_manager.download_and_extract(os.path.join(_BASE_DOWNLOAD_URL, "test.tsv"))
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": [train_path]}),
+                 datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepaths": [dev_path]}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": [test_path]}),
+             ]
+
+     def _generate_examples(self, filepaths, raw=False):
+         """Generate GoEmotions examples."""
+         for filepath in filepaths:
+             with open(filepath, "r", encoding="utf-8") as f:
+                 if raw:
+                     # The raw CSVs ship with a header row, so column names come from the file.
+                     reader = csv.DictReader(f)
+                 else:
+                     # The simplified TSVs have no header; reuse the feature names as column names.
+                     reader = csv.DictReader(f, delimiter="\t", fieldnames=list(self.config.features.keys()))
+
+                 for irow, row in enumerate(reader):
+                     if raw:
+                         row["example_very_unclear"] = row["example_very_unclear"] == "TRUE"
+                     else:
+                         row["labels"] = [int(ind) for ind in row["labels"].split(",")]
+
+                     yield irow, row
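
A builder script of this shape is normally consumed through `datasets.load_dataset`, either by dataset name or by pointing at a local copy of the file while developing. A minimal sketch, assuming a datasets 1.2-era installation (the local path is illustrative):

```python
from datasets import load_dataset

# By name, using the configurations defined above.
simplified = load_dataset("go_emotions", "simplified")

# Or from a local copy of the script; the raw configuration exposes a single train split.
raw_train = load_dataset("./go_emotions.py", "raw", split="train")
```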