---
license: cc-by-4.0
dataset_info:
  features:
    - name: original_id
      dtype: int32
    - name: edit_goal
      dtype: string
    - name: edit_type
      dtype: string
    - name: text
      dtype: string
    - name: food
      dtype: string
    - name: ambiance
      dtype: string
    - name: service
      dtype: string
    - name: noise
      dtype: string
    - name: counterfactual
      dtype: bool
    - name: rating
      dtype: int64
  splits:
    - name: validation
      num_bytes: 306529
      num_examples: 1673
    - name: test
      num_bytes: 309751
      num_examples: 1689
    - name: train
      num_bytes: 2282439
      num_examples: 11728
  download_size: 628886
  dataset_size: 2898719
task_categories:
  - text-classification
language:
  - en
---

# Dataset Card for "CEBaB"

This is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from this paper. The most important difference from the original dataset is that the `rating` column holds the median rating provided by the Mechanical Turk annotators, rather than the majority rating. The two agree whenever a majority rating exists, but when there is no majority (e.g. because there were two 1s, two 2s, and one 3), the original dataset used a "no majority" placeholder, whereas the median yields an aggregate rating for every review.
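To make the no-majority case concrete, here is a small self-contained sketch of the median aggregation (the same logic as the `compute_median` function in the processing script below); the rating distribution format is a string-encoded dict mapping rating to count:

```python
from ast import literal_eval


def median_rating(dist_str: str) -> int:
    """Median of a rating multiset encoded as a string-keyed dict,
    e.g. "{'1': 2, '2': 2, '3': 1}" means two 1s, two 2s, and one 3."""
    dist = literal_eval(dist_str)
    ratings = sorted(int(r) for r, count in dist.items() for _ in range(count))
    return ratings[len(ratings) // 2]


# No rating has a strict majority here, but the median is well defined:
# the sorted ratings are [1, 1, 2, 2, 3], so the middle element is 2.
print(median_rating("{'1': 2, '2': 2, '3': 1}"))  # -> 2
```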

The exact code used to process the original dataset is provided below:

```python
from ast import literal_eval
from datasets import DatasetDict, Value, load_dataset


def compute_median(x: str):
    """Compute the median rating given a multiset of ratings."""
    # Decode the dictionary from string format
    dist = literal_eval(x)

    # Should be a dictionary whose keys are string-encoded integer ratings
    # and whose values are the number of times that the rating was observed
    assert isinstance(dist, dict)
    assert sum(dist.values()) % 2 == 1, "Number of ratings should be odd"

    ratings = []
    for rating, count in dist.items():
        ratings.extend([int(rating)] * count)

    ratings.sort()
    return ratings[len(ratings) // 2]


cebab = load_dataset('CEBaB/CEBaB')
assert isinstance(cebab, DatasetDict)

# Remove redundant splits
cebab['train'] = cebab.pop('train_inclusive')
del cebab['train_exclusive']
del cebab['train_observational']

cebab = cebab.cast_column(
    'original_id', Value('int32')
).map(
    lambda x: {
        # New column with inverted label for counterfactuals
        'counterfactual': not x['is_original'],
        # Reduce the rating multiset into a single median rating
        'rating': compute_median(x['review_label_distribution'])
    }
).map(
    # Replace the empty string, 'no majority', and 'None' with Apache Arrow nulls
    lambda x: {
        k: v if v not in ('', 'no majority', 'None') else None
        for k, v in x.items()
    }
)

# Sanity check that all the splits have the same columns
cols = next(iter(cebab.values())).column_names
assert all(split.column_names == cols for split in cebab.values())

# Clean up the names a bit
cebab = cebab.rename_columns({
    col: col.removesuffix('_majority').removesuffix('_aspect')
    for col in cols if col.endswith('_majority')
}).rename_column(
    'description', 'text'
)

# Drop the unimportant columns
cebab = cebab.remove_columns([
    col for col in cols if col.endswith('_distribution') or col.endswith('_workers')
] + [
    'edit_id', 'edit_worker', 'id', 'is_original', 'opentable_metadata', 'review'
]).sort([
    # Make sure counterfactual reviews come immediately after each original review
    'original_id', 'counterfactual'
])
```
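The final sort relies on Python's boolean ordering (`False < True`), so within each `original_id` the original review comes first, followed by its counterfactual edits. A minimal illustration with hypothetical rows (only the two key columns are shown):

```python
# Hypothetical rows mimicking the two sort-key columns of the dataset.
rows = [
    {"original_id": 2, "counterfactual": True},
    {"original_id": 1, "counterfactual": True},
    {"original_id": 2, "counterfactual": False},
    {"original_id": 1, "counterfactual": False},
]

# False sorts before True, so each original review (counterfactual=False)
# is immediately followed by its counterfactual edits.
rows.sort(key=lambda r: (r["original_id"], r["counterfactual"]))

print([(r["original_id"], r["counterfactual"]) for r in rows])
# -> [(1, False), (1, True), (2, False), (2, True)]
```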