albertvillanova (HF staff) committed
Commit: a084417
Parent: f5dc258
README.md CHANGED
@@ -7,7 +7,7 @@ language_creators:
 language:
 - en
 license:
-- unknown
+- other
 multilinguality:
 - monolingual
 size_categories:
@@ -71,6 +71,7 @@ train-eval-index:
 tags:
 - emotion-classification
 dataset_info:
+- config_name: split
   features:
   - name: text
     dtype: string
@@ -78,24 +79,44 @@ dataset_info:
     dtype:
       class_label:
        names:
-          0: sadness
-          1: joy
-          2: love
-          3: anger
-          4: fear
-          5: surprise
+          '0': sadness
+          '1': joy
+          '2': love
+          '3': anger
+          '4': fear
+          '5': surprise
   splits:
   - name: train
-    num_bytes: 1741541
+    num_bytes: 1741597
     num_examples: 16000
   - name: validation
-    num_bytes: 214699
+    num_bytes: 214703
     num_examples: 2000
   - name: test
-    num_bytes: 217177
+    num_bytes: 217181
     num_examples: 2000
-  download_size: 2069616
-  dataset_size: 2173417
+  download_size: 740883
+  dataset_size: 2173481
+- config_name: unsplit
+  features:
+  - name: text
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': sadness
+          '1': joy
+          '2': love
+          '3': anger
+          '4': fear
+          '5': surprise
+  splits:
+  - name: train
+    num_bytes: 45445685
+    num_examples: 416809
+  download_size: 15388281
+  dataset_size: 45445685
 ---
 
 # Dataset Card for "emotion"
@@ -150,49 +171,30 @@ Emotion is a dataset of English Twitter messages with six basic emotions: anger,
 
 ### Data Instances
 
-#### default
-
-- **Size of downloaded dataset files:** 1.97 MB
-- **Size of the generated dataset:** 2.07 MB
-- **Total amount of disk used:** 4.05 MB
-
-An example of 'train' looks as follows.
+An example looks as follows.
 ```
 {
-    "label": 0,
-    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon"
+    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
+    "label": 0
 }
 ```
 
-#### emotion
-
-- **Size of downloaded dataset files:** 1.97 MB
-- **Size of the generated dataset:** 2.09 MB
-- **Total amount of disk used:** 4.06 MB
-
-An example of 'validation' looks as follows.
-```
-
-```
-
 ### Data Fields
 
-The data fields are the same among all splits.
-
-#### default
+The data fields are:
 - `text`: a `string` feature.
 - `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
 
-#### emotion
-- `text`: a `string` feature.
-- `label`: a `string` feature.
-
 ### Data Splits
 
-| name    | train | validation | test |
-| ------- | ----: | ---------: | ---: |
-| default | 16000 |       2000 | 2000 |
-| emotion | 16000 |       2000 | 2000 |
+The dataset has 2 configurations:
+- split: with a total of 20,000 examples split into train, validation and test splits
+- unsplit: with a total of 416,809 examples in a single train split
+
+| name    |  train | validation | test |
+|---------|-------:|-----------:|-----:|
+| split   |  16000 |       2000 | 2000 |
+| unsplit | 416809 |        n/a |  n/a |
 
 ## Dataset Creation
 
@@ -246,10 +248,11 @@ The data fields are the same among all splits.
 
 ### Licensing Information
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+The dataset should be used for educational and research purposes only.
 
 ### Citation Information
 
+If you use this dataset, please cite:
 ```
 @inproceedings{saravia-etal-2018-carer,
     title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
@@ -268,10 +271,8 @@ The data fields are the same among all splits.
     pages = "3687--3697",
     abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
 }
-
 ```
 
-
 ### Contributions
 
-Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
+Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
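As a quick orientation for readers of the updated card: a minimal usage sketch of the two configurations described under Data Splits. It assumes the dataset is available on the Hugging Face Hub under the id `emotion` and that the `datasets` library is installed; the config names and split sizes come from the diff above.

```python
from datasets import load_dataset

# "split" is the default configuration: train / validation / test
emotions = load_dataset("emotion", "split")
print({name: ds.num_rows for name, ds in emotions.items()})
# expected: {'train': 16000, 'validation': 2000, 'test': 2000}

# "unsplit" keeps all 416,809 examples in a single train split
unsplit = load_dataset("emotion", "unsplit")
print(unsplit["train"].num_rows)  # expected: 416809
```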
data/data.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8944e6b35cb42294769ac30cf17bd006231545b2eeecfa59324246e192564d1f
+size 15388281
data/test.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4524468d0b7ee8eab07a088216cde7f9278f1c574669504a805ed172df6dad75
+size 74935
data/train.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:757a0a73f1483f4b3f94783b774cdbf0831722a2b2c9abb5b820b4614ff6882a
+size 591930
data/validation.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50783464882f450f88e61ece964a200e492495eed1472ed520d013bbcd3049be
+size 74018
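The four `*.jsonl.gz` files added above are stored as Git LFS pointers; the actual payload is gzipped JSON Lines, one `{"text": ..., "label": ...}` object per line, which the updated loading script below consumes. A small sketch of peeking at one of these files locally, assuming it has already been fetched (for example with `git lfs pull`) to the hypothetical path used here:

```python
import gzip
import json

# Hypothetical local path to one of the fetched data files from this repo
path = "data/train.jsonl.gz"

with gzip.open(path, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)  # e.g. {"text": "...", "label": 0}
        print(record["text"], record["label"])
        if i == 2:  # only peek at the first few records
            break
```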
emotion.py CHANGED
@@ -1,4 +1,4 @@
-import csv
+import json
 
 import datasets
 from datasets.tasks import TextClassification
@@ -27,14 +27,33 @@ _CITATION = """\
 _DESCRIPTION = """\
 Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
 """
-_URL = "https://github.com/dair-ai/emotion_dataset"
-# use dl=1 to force browser to download data instead of displaying it
-_TRAIN_DOWNLOAD_URL = "https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1"
-_VALIDATION_DOWNLOAD_URL = "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1"
-_TEST_DOWNLOAD_URL = "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1"
+
+_HOMEPAGE = "https://github.com/dair-ai/emotion_dataset"
+
+_LICENSE = "The dataset should be used for educational and research purposes only"
+
+_URLS = {
+    "split": {
+        "train": "data/train.jsonl.gz",
+        "validation": "data/validation.jsonl.gz",
+        "test": "data/test.jsonl.gz",
+    },
+    "unsplit": {
+        "train": "data/data.jsonl.gz",
+    },
+}
 
 
 class Emotion(datasets.GeneratorBasedBuilder):
+    VERSION = datasets.Version("1.0.0")
+    BUILDER_CONFIGS = [
+        datasets.BuilderConfig(
+            name="split", version=VERSION, description="Dataset split in train, validation and test"
+        ),
+        datasets.BuilderConfig(name="unsplit", version=VERSION, description="Unsplit dataset"),
+    ]
+    DEFAULT_CONFIG_NAME = "split"
+
     def _info(self):
         class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
         return datasets.DatasetInfo(
@@ -43,26 +62,27 @@ class Emotion(datasets.GeneratorBasedBuilder):
                 {"text": datasets.Value("string"), "label": datasets.ClassLabel(names=class_names)}
             ),
             supervised_keys=("text", "label"),
-            homepage=_URL,
+            homepage=_HOMEPAGE,
             citation=_CITATION,
+            license=_LICENSE,
             task_templates=[TextClassification(text_column="text", label_column="label")],
         )
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
-        valid_path = dl_manager.download_and_extract(_VALIDATION_DOWNLOAD_URL)
-        test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
-        return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": valid_path}),
-            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
-        ]
+        paths = dl_manager.download_and_extract(_URLS[self.config.name])
+        if self.config.name == "split":
+            return [
+                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
+                datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["validation"]}),
+                datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
+            ]
+        else:
+            return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]})]
 
     def _generate_examples(self, filepath):
         """Generate examples."""
-        with open(filepath, encoding="utf-8") as csv_file:
-            csv_reader = csv.reader(csv_file, delimiter=";")
-            for id_, row in enumerate(csv_reader):
-                text, label = row
-                yield id_, {"text": text, "label": label}
+        with open(filepath, encoding="utf-8") as f:
+            for idx, line in enumerate(f):
+                example = json.loads(line)
+                yield idx, example
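The integer labels yielded by `_generate_examples` are decoded through the `ClassLabel` feature declared in `_info`. A short sketch (again assuming the Hub id `emotion`) of mapping them back to emotion names after loading:

```python
from datasets import load_dataset

# Load the default "split" configuration and inspect the label feature
train = load_dataset("emotion", "split", split="train")
label_feature = train.features["label"]  # a datasets.ClassLabel

print(label_feature.names)  # ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
example = train[0]
print(example["text"], "->", label_feature.int2str(example["label"]))
```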