plaguss (HF staff) committed on
Commit e766167
1 Parent(s): a556fb1

Add information for the dataset

Files changed (1)
  1. README.md +117 -0

README.md CHANGED
@@ -30,6 +30,10 @@ This dataset contains:
 
  * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
 
+ It contains the raw version of [go_emotions](https://huggingface.co/datasets/go_emotions) as a `FeedbackDataset`. Each of the original examples is defined as a single
+ `FeedbackRecord` and contains the `responses` from each annotator. The final labels from the *simplified* version of the dataset have been used as `suggestions`, so that we
+ can use this dataset to showcase the metrics related to the agreement between annotators, as well as the `responses` vs `suggestions` metrics.
+
  ### Load with Argilla
 
  To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
@@ -265,6 +269,119 @@ The dataset contains a single split, which is `train`.
 
  ## Dataset Creation
 
+ ### Script used for the generation
+
+ ```python
+ import uuid
+
+ import argilla as rg
+ from datasets import concatenate_datasets, load_dataset
+
+ ds = load_dataset("go_emotions", "raw", split="train")
+ ds_prepared = load_dataset("go_emotions")
+
+ _CLASS_NAMES = [
+     "admiration",
+     "amusement",
+     "anger",
+     "annoyance",
+     "approval",
+     "caring",
+     "confusion",
+     "curiosity",
+     "desire",
+     "disappointment",
+     "disapproval",
+     "disgust",
+     "embarrassment",
+     "excitement",
+     "fear",
+     "gratitude",
+     "grief",
+     "joy",
+     "love",
+     "nervousness",
+     "optimism",
+     "pride",
+     "realization",
+     "relief",
+     "remorse",
+     "sadness",
+     "surprise",
+     "neutral",
+ ]
+ label_to_id = {label: i for i, label in enumerate(_CLASS_NAMES)}
+ id_to_label = {i: label for i, label in enumerate(_CLASS_NAMES)}
+
+ # Concatenate the splits and transform to a pd.DataFrame
+ ds_prepared = concatenate_datasets([ds_prepared["train"], ds_prepared["validation"], ds_prepared["test"]])
+ df_prepared = ds_prepared.to_pandas()
+
+ # Obtain the final labels as a dict, to later include these as suggestions
+ labels_prepared = {}
+ for idx in df_prepared.index:
+     labels = [id_to_label[label_id] for label_id in df_prepared["labels"][idx]]
+     labels_prepared[df_prepared["id"][idx]] = labels
+
+ # Add labels to the dataset and keep only the relevant columns
+ def add_labels(ex):
+     labels = []
+     for label in _CLASS_NAMES:
+         if ex[label] == 1:
+             labels.append(label)
+     ex["labels"] = labels
+     return ex
+
+ ds = ds.map(add_labels)
+ df = ds.select_columns(["text", "labels", "rater_id", "id"]).to_pandas()
+
+ # Create a FeedbackDataset for text classification
+ feedback_dataset = rg.FeedbackDataset.for_text_classification(labels=_CLASS_NAMES, multi_label=True)
+
+ # Create the records with the original responses, and use as suggestions
+ # the final labels from the "simplified" go_emotions dataset.
+ records = []
+ for text, df_text in df.groupby("text"):
+     responses = []
+     for rater_id, df_raters in df_text.groupby("rater_id"):
+         responses.append(
+             {
+                 "values": {"label": {"value": df_raters["labels"].iloc[0].tolist()}},
+                 "status": "submitted",
+                 "user_id": uuid.UUID(int=rater_id),
+             }
+         )
+     suggested_labels = labels_prepared.get(df_raters["id"].iloc[0], None)
+     if not suggested_labels:
+         continue
+     suggestion = [
+         {
+             "question_name": "label",
+             "value": suggested_labels,
+             "type": "human",
+         }
+     ]
+     records.append(
+         rg.FeedbackRecord(
+             fields={"text": df_raters["text"].iloc[0]},
+             responses=responses,
+             suggestions=suggestion,
+         )
+     )
+
+ feedback_dataset.add_records(records)
+
+ # Push to the Hub
+ feedback_dataset.push_to_huggingface("plaguss/go_emotions_raw")
+ ```
+
 ### Curation Rationale
 
 [More Information Needed]
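
---

The core transformation in the committed script is the grouping step: the raw go_emotions split has one row per (text, rater) pair, and the script folds those into one `FeedbackRecord` per unique text, with one response per rater and the simplified split's final labels attached as a suggestion. That step can be sketched in plain Python, without Argilla or pandas; the toy rows, the `build_records` helper, and the example labels below are all hypothetical, just to illustrate the shape of the data:

```python
import uuid
from collections import defaultdict

# Toy stand-in for the raw go_emotions rows: one row per (text, rater) pair.
rows = [
    {"id": "a1", "text": "Thanks a lot!", "rater_id": 1, "labels": ["gratitude"]},
    {"id": "a1", "text": "Thanks a lot!", "rater_id": 2, "labels": ["gratitude", "joy"]},
    {"id": "b2", "text": "This is awful.", "rater_id": 1, "labels": ["disgust"]},
]

# Final labels from the "simplified" split, keyed by example id (the suggestions).
labels_prepared = {"a1": ["gratitude"], "b2": ["disgust"]}


def build_records(rows, labels_prepared):
    # Group the per-rater rows by text, as df.groupby("text") does in the script.
    by_text = defaultdict(list)
    for row in rows:
        by_text[row["text"]].append(row)

    records = []
    for text, group in by_text.items():
        suggested = labels_prepared.get(group[0]["id"])
        if not suggested:
            continue  # skip texts without a final label, as the script does
        # One response per rater; uuid.UUID(int=rater_id) gives each rater a
        # deterministic user id, the same trick the script uses.
        responses = [
            {
                "values": {"label": {"value": row["labels"]}},
                "status": "submitted",
                "user_id": str(uuid.UUID(int=row["rater_id"])),
            }
            for row in group
        ]
        records.append(
            {
                "fields": {"text": text},
                "responses": responses,
                "suggestions": [
                    {"question_name": "label", "value": suggested, "type": "human"}
                ],
            }
        )
    return records


records = build_records(rows, labels_prepared)
print(len(records))                  # one record per unique text
print(len(records[0]["responses"]))  # one response per rater of that text
```

The resulting dataset published by the script can then be pulled back down with `rg.FeedbackDataset.from_huggingface("plaguss/go_emotions_raw")`, as shown in the "Load with Argilla" section above.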