---
language:
  - en
license: apache-2.0
size_categories:
  - 100K<n<1M
source_datasets:
  - xsum
task_categories:
  - summarization
pretty_name: 'Stacked XSUM: 1024 tokens max'
tags:
  - stacked summaries
  - xsum
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: document
      dtype: string
    - name: summary
      dtype: string
    - name: id
      dtype: int64
    - name: chapter_length
      dtype: int64
    - name: summary_length
      dtype: int64
    - name: is_stacked
      dtype: bool
  splits:
    - name: train
      num_bytes: 918588672
      num_examples: 320939
    - name: validation
      num_bytes: 51154057
      num_examples: 17935
    - name: test
      num_bytes: 51118088
      num_examples: 17830
  download_size: 653378162
  dataset_size: 1020860817
---

# stacked-xsum-1024

a "stacked" version of xsum

1. **Original Dataset:** a copy of the base dataset.

2. **Stacked Rows:** the original dataset is processed by stacking rows subject to two length criteria:

   - **Maximum Input Length:** input sequences are capped at 1024 tokens, as measured by the longt5 model tokenizer.
   - **Maximum Output Length:** output sequences are likewise capped at 1024 tokens, as measured by the longt5 model tokenizer.

3. **Special Token:** the dataset uses the `[NEXT_CONCEPT]` token to indicate a new topic within the same summary. Explicitly add this special token to your model's tokenizer before training so that it is recognized and processed correctly downstream (see the sketch below).
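A minimal sketch of registering the token, assuming a LongT5 checkpoint (`google/long-t5-tglobal-base` is used purely for illustration; substitute whichever model you are fine-tuning):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint chosen purely for illustration; use the model you plan to fine-tune.
checkpoint = "google/long-t5-tglobal-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Register [NEXT_CONCEPT] as a special token so the tokenizer never splits it.
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})

# Grow the embedding matrix to cover the newly added token.
model.resize_token_embeddings(len(tokenizer))
```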

## updates

- dec 3: upload initial version
- dec 4: upload v2 with basic data quality fixes (i.e., the `is_stacked` column)
- dec 5 0500: upload v3, which has pre-randomised row order and drops rows with duplicate `document`+`summary` pairs
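Continuing from the loading sketch above, the `is_stacked` flag added in v2 makes it easy to separate stacked rows from pass-through originals:

```python
# `is_stacked` is True for stacked rows and False for pass-through originals.
stacked_rows = dataset["train"].filter(lambda row: row["is_stacked"])
original_rows = dataset["train"].filter(lambda row: not row["is_stacked"])
```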

## stats


### dataset details

See the repo `.log` file for more details.

### train input

```
[2022-12-05 01:05:17] INFO:root:INPUTS - basic stats - train
[2022-12-05 01:05:17] INFO:root:{'num_columns': 5,
 'num_rows': 204045,
 'num_unique_target': 203107,
 'num_unique_text': 203846,
 'summary - average chars': 125.46,
 'summary - average tokens': 30.383719277610332,
 'text input - average chars': 2202.42,
 'text input - average tokens': 523.9222230390355}
```
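For reference, figures like these can be approximated as follows (a sketch over the published `document`/`summary` columns, assuming the same longt5 tokenizer; `basic_stats` is an illustrative helper, not the script that produced the log):

```python
from statistics import mean

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")  # illustrative choice

def basic_stats(split):
    """Rough per-split averages; not the exact script behind the log above."""
    docs, summaries = split["document"], split["summary"]
    return {
        "text input - average chars": mean(len(d) for d in docs),
        "text input - average tokens": mean(len(tokenizer(d)["input_ids"]) for d in docs),
        "summary - average chars": mean(len(s) for s in summaries),
        "summary - average tokens": mean(len(tokenizer(s)["input_ids"]) for s in summaries),
    }
```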

### stacked train

```
[2022-12-05 04:47:01] INFO:root:stacked 181719 rows, 22326 rows were ineligible
[2022-12-05 04:47:02] INFO:root:dropped 64825 duplicate rows, 320939 rows remain
[2022-12-05 04:47:02] INFO:root:shuffling output with seed 323
[2022-12-05 04:47:03] INFO:root:STACKED - basic stats - train
[2022-12-05 04:47:04] INFO:root:{'num_columns': 6,
 'num_rows': 320939,
 'num_unique_chapters': 320840,
 'num_unique_summaries': 320101,
 'summary - average chars': 199.89,
 'summary - average tokens': 46.29925001324239,
 'text input - average chars': 2629.19,
 'text input - average tokens': 621.541532814647}
```
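The dedup-and-shuffle step in the log corresponds roughly to the following (a sketch via pandas that mirrors, but is not, the actual build script):

```python
from datasets import Dataset

# Drop exact duplicates on the (document, summary) pair, then shuffle
# with the seed recorded in the log above.
df = dataset["train"].to_pandas()
df = df.drop_duplicates(subset=["document", "summary"])
shuffled = Dataset.from_pandas(df, preserve_index=False).shuffle(seed=323)
```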

## Citation

If you find this useful in your work, please consider citing us:

```bibtex
@misc{stacked_summaries_2023,
    author       = {{Stacked Summaries: Karim Foda and Peter Szemraj}},
    title        = {stacked-xsum-1024 (Revision 2d47220)},
    year         = 2023,
    url          = {https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024},
    doi          = {10.57967/hf/0390},
    publisher    = {Hugging Face}
}
```