---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: EmpatheticDialogues
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conversational
- question-answering
task_ids:
- dialogue-generation
- open-domain-qa
paperswithcode_id: empatheticdialogues
dataset_info:
  features:
  - name: conv_id
    dtype: string
  - name: utterance_idx
    dtype: int32
  - name: context
    dtype: string
  - name: prompt
    dtype: string
  - name: speaker_idx
    dtype: int32
  - name: utterance
    dtype: string
  - name: selfeval
    dtype: string
  - name: tags
    dtype: string
  splits:
  - name: test
    num_bytes: 3011332
    num_examples: 10943
  - name: train
    num_bytes: 19040509
    num_examples: 76673
  - name: validation
    num_bytes: 3077481
    num_examples: 12030
  download_size: 28022709
  dataset_size: 25129322
---
# Dataset Card for "empathetic_dialogues"
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description

- **Homepage:** https://github.com/facebookresearch/EmpatheticDialogues
- **Repository:** https://github.com/facebookresearch/EmpatheticDialogues
- **Paper:** [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://aclanthology.org/P19-1534)
- **Point of Contact:** More Information Needed
- **Size of downloaded dataset files:** 28.02 MB
- **Size of the generated dataset:** 25.13 MB
- **Total amount of disk used:** 53.15 MB
### Dataset Summary

EmpatheticDialogues is a crowdsourced dataset of conversations grounded in emotional situations, introduced in [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://aclanthology.org/P19-1534). The original PyTorch implementation accompanying the paper is available in the linked repository.
### Supported Tasks and Leaderboards
### Languages

The dialogues are in English (`en`).
## Dataset Structure

### Data Instances

#### default
- Size of downloaded dataset files: 28.02 MB
- Size of the generated dataset: 25.13 MB
- Total amount of disk used: 53.15 MB
An example of 'train' looks as follows.
```
{
  "context": "sentimental",
  "conv_id": "hit:0_conv:1",
  "prompt": "I remember going to the fireworks with my best friend. There was a lot of people_comma_ but it only felt like us in the world.",
  "selfeval": "5|5|5_2|2|5",
  "speaker_idx": 1,
  "tags": "",
  "utterance": "I remember going to see the fireworks with my best friend. It was the first time we ever spent time alone together. Although there was a lot of people_comma_ we felt like the only people in the world.",
  "utterance_idx": 1
}
```
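Two field conventions are visible in the sample record: commas inside `prompt` and `utterance` are escaped as the literal token `_comma_`, and `selfeval` appears to pack groups of pipe-separated 1–5 ratings joined by underscores. A minimal decoding sketch, assuming only the conventions visible in that record:

```python
def decode_utterance(text: str) -> str:
    # Commas are escaped as the literal token "_comma_" in this dataset.
    return text.replace("_comma_", ",")

def parse_selfeval(selfeval: str) -> list[list[int]]:
    # A selfeval like "5|5|5_2|2|5" appears to be underscore-separated
    # groups of pipe-separated integer ratings.
    return [[int(r) for r in group.split("|")] for group in selfeval.split("_") if group]

print(decode_utterance("There was a lot of people_comma_ but it only felt like us in the world."))
print(parse_selfeval("5|5|5_2|2|5"))  # [[5, 5, 5], [2, 2, 5]]
```

The exact semantics of the rating groups are not documented here, so treat `parse_selfeval` as a structural helper rather than an authoritative interpretation.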
### Data Fields
The data fields are the same among all splits.
#### default

- `conv_id`: a `string` feature.
- `utterance_idx`: an `int32` feature.
- `context`: a `string` feature.
- `prompt`: a `string` feature.
- `speaker_idx`: an `int32` feature.
- `utterance`: a `string` feature.
- `selfeval`: a `string` feature.
- `tags`: a `string` feature.
### Data Splits

| name    | train | validation | test  |
|---------|------:|-----------:|------:|
| default | 76673 | 12030      | 10943 |
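Each split stores one utterance per row, so a full conversation has to be reassembled by grouping rows on `conv_id` and ordering them by `utterance_idx`. A sketch using hypothetical in-memory rows (in practice the rows come from `datasets.load_dataset("empathetic_dialogues")`); the utterance texts are illustrative, not quoted from the dataset:

```python
from collections import defaultdict

# Hypothetical rows in the dataset's flat one-utterance-per-row layout,
# deliberately out of order to show the sort.
rows = [
    {"conv_id": "hit:0_conv:1", "utterance_idx": 2, "speaker_idx": 2,
     "utterance": "That sounds like a special memory."},
    {"conv_id": "hit:0_conv:1", "utterance_idx": 1, "speaker_idx": 1,
     "utterance": "I remember going to see the fireworks with my best friend."},
]

def group_conversations(rows):
    # Bucket utterances by conversation id, then sort each bucket by turn index.
    convs = defaultdict(list)
    for row in rows:
        convs[row["conv_id"]].append(row)
    for turns in convs.values():
        turns.sort(key=lambda r: r["utterance_idx"])
    return dict(convs)

for turn in group_conversations(rows)["hit:0_conv:1"]:
    print(turn["utterance_idx"], turn["utterance"])
```

Grouping in memory like this is fine at this dataset's scale (under 100k rows across all splits).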
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information
Creative Commons Attribution-NonCommercial 4.0 International.
### Citation Information

```
@inproceedings{rashkin-etal-2019-towards,
    title = "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset",
    author = "Rashkin, Hannah  and
      Smith, Eric Michael  and
      Li, Margaret  and
      Boureau, Y-Lan",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P19-1534",
    doi = "10.18653/v1/P19-1534",
    pages = "5370--5381",
}
```
### Contributions
Thanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.