---
annotations_creators:
  - crowdsourced
language_creators:
  - found
language:
  - en
license:
  - unknown
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - sentiment-classification
pretty_name: MovieRationales
dataset_info:
  features:
    - name: review
      dtype: string
    - name: label
      dtype:
        class_label:
          names:
            '0': NEG
            '1': POS
    - name: evidences
      sequence: string
  splits:
    - name: test
      num_bytes: 1046377
      num_examples: 199
    - name: train
      num_bytes: 6853624
      num_examples: 1600
    - name: validation
      num_bytes: 830417
      num_examples: 200
  download_size: 3899487
  dataset_size: 8730418
---

# Dataset Card for "movie_rationales"

## Table of Contents

## Dataset Description

### Dataset Summary

The Movie Rationales dataset contains human-annotated rationales for movie reviews: each review carries a binary sentiment label (NEG or POS) together with the text snippets annotators marked as evidence for that label.

### Supported Tasks and Leaderboards

More Information Needed

### Languages

The reviews and annotated rationales are in English (`en`).

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 3.90 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 12.63 MB

An example of 'validation' looks as follows.

```
{
    "evidences": ["Fun movie"],
    "label": 1,
    "review": "Fun movie\n"
}
```
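
Since the label is stored as a class index, mapping it back to a class name only needs the name order declared in the metadata (`'0': NEG`, `'1': POS`). A minimal sketch, with plain dicts standing in for dataset records and an illustrative helper name:

```python
# Class names in the order declared in the dataset metadata ('0': NEG, '1': POS).
LABEL_NAMES = ["NEG", "POS"]

def label_name(example):
    """Return the class name for one record's integer label."""
    return LABEL_NAMES[example["label"]]

# The validation example shown above.
example = {
    "evidences": ["Fun movie"],
    "label": 1,
    "review": "Fun movie\n",
}

print(label_name(example))  # POS
```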

### Data Fields

The data fields are the same among all splits.

#### default

- `review`: a `string` feature.
- `label`: a classification label, with possible values `NEG` (0) and `POS` (1).
- `evidences`: a list of `string` features.

### Data Splits

| name    | train | validation | test |
| ------- | ----: | ---------: | ---: |
| default |  1600 |        200 |  199 |
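
The per-split figures in the metadata block are internally consistent: the byte counts sum exactly to the reported `dataset_size`, and the example counts sum to 1,999 (within the `1K<n<10K` size category). A quick check:

```python
# Per-split figures copied from the metadata block above.
splits = {
    "train": {"num_examples": 1600, "num_bytes": 6853624},
    "validation": {"num_examples": 200, "num_bytes": 830417},
    "test": {"num_examples": 199, "num_bytes": 1046377},
}

total_examples = sum(s["num_examples"] for s in splits.values())
total_bytes = sum(s["num_bytes"] for s in splits.values())

print(total_examples)  # 1999
print(total_bytes)     # 8730418, equal to dataset_size in the metadata
```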

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information

```bibtex
@inproceedings{deyoung-etal-2020-eraser,
    title = "{ERASER}: {A} Benchmark to Evaluate Rationalized {NLP} Models",
    author = "DeYoung, Jay  and
      Jain, Sarthak  and
      Rajani, Nazneen Fatema  and
      Lehman, Eric  and
      Xiong, Caiming  and
      Socher, Richard  and
      Wallace, Byron C.",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.408",
    doi = "10.18653/v1/2020.acl-main.408",
    pages = "4443--4458",
}
@inproceedings{zaidan-eisner-piatko-2008:nips,
    author = {Omar F. Zaidan and Jason Eisner and Christine Piatko},
    title = {Machine Learning with Annotator Rationales to Reduce Annotation Cost},
    booktitle = {Proceedings of the NIPS*2008 Workshop on Cost Sensitive Learning},
    month = {December},
    year = {2008}
}
```

### Contributions

Thanks to @thomwolf, @patrickvonplaten, and @lewtun for adding this dataset.