---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
source_datasets:
- xsum
task_categories:
- summarization
pretty_name: 'Stacked XSUM: 1024 tokens max'
tags:
- stacked summaries
- xsum
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: int64
- name: chapter_length
dtype: int64
- name: summary_length
dtype: int64
- name: is_stacked
dtype: bool
splits:
- name: train
num_bytes: 918588672
num_examples: 320939
- name: validation
num_bytes: 51154057
num_examples: 17935
- name: test
num_bytes: 51118088
num_examples: 17830
download_size: 653378162
dataset_size: 1020860817
---
# stacked-xsum-1024
a "stacked" version of `xsum`
1. Original Dataset: a copy of the base dataset.
2. Stacked Rows: the original dataset is processed by stacking rows according to the following criteria:
   - Maximum Input Length: input sequences are capped at 1024 tokens, as measured by the longt5 model tokenizer.
   - Maximum Output Length: output sequences are likewise capped at 1024 tokens, as measured by the longt5 model tokenizer.
3. Special Token: the dataset uses the `[NEXT_CONCEPT]` token to mark a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training so that it is recognized and processed correctly during downstream usage (see the sketch after this list).
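A minimal sketch of that tokenizer setup; the checkpoint name here is an assumption (any longt5 variant should behave the same way):

```python
from transformers import AutoTokenizer

# hypothetical checkpoint; substitute whichever longt5 variant you fine-tune
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

# register [NEXT_CONCEPT] so it is kept as a single token rather than split
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})

# inputs and targets were both capped at 1024 tokens when the rows were stacked
document = "First topic text. [NEXT_CONCEPT] Second topic text."
enc = tokenizer(document, truncation=True, max_length=1024)

# after extending the vocabulary, resize the model's embeddings, e.g.:
# model.resize_token_embeddings(len(tokenizer))
```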
## updates
- Dec 3: upload initial version
- Dec 4: upload v2 with basic data-quality fixes (i.e., the `is_stacked` column)
- Dec 5, 0500: upload v3, which has a pre-randomized row order and drops duplicate document+summary rows
## stats
![stats](https://i.imgur.com/TyyDthT.png)
## dataset details
See the repo's `.log` file for more details.

Train input:
```python
[2022-12-05 01:05:17] INFO:root:INPUTS - basic stats - train
[2022-12-05 01:05:17] INFO:root:{'num_columns': 5,
'num_rows': 204045,
'num_unique_target': 203107,
'num_unique_text': 203846,
'summary - average chars': 125.46,
'summary - average tokens': 30.383719277610332,
'text input - average chars': 2202.42,
'text input - average tokens': 523.9222230390355}
```
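These averages describe the base `xsum` train split before stacking; a sketch of how they could be recomputed, assuming the same longt5 tokenizer as above (subsample for speed):

```python
from statistics import mean

from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("xsum", split="train")
tok = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

sample = ds.select(range(1_000))  # subsample; tokenizing the full split is slow
print(mean(len(d) for d in sample["document"]))                 # text input - average chars
print(mean(len(tok(d).input_ids) for d in sample["document"]))  # text input - average tokens
print(mean(len(s) for s in sample["summary"]))                  # summary - average chars
print(mean(len(tok(s).input_ids) for s in sample["summary"]))   # summary - average tokens
```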
Stacked train:
```python
[2022-12-05 04:47:01] INFO:root:stacked 181719 rows, 22326 rows were ineligible
[2022-12-05 04:47:02] INFO:root:dropped 64825 duplicate rows, 320939 rows remain
[2022-12-05 04:47:02] INFO:root:shuffling output with seed 323
[2022-12-05 04:47:03] INFO:root:STACKED - basic stats - train
[2022-12-05 04:47:04] INFO:root:{'num_columns': 6,
'num_rows': 320939,
'num_unique_chapters': 320840,
'num_unique_summaries': 320101,
'summary - average chars': 199.89,
'summary - average tokens': 46.29925001324239,
'text input - average chars': 2629.19,
'text input - average tokens': 621.541532814647}
```
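A minimal sketch of the post-processing the log describes (deduplication on document+summary, then a shuffle with the logged seed); the pandas calls are an assumption about how it could be done, not the exact pipeline:

```python
import pandas as pd

# toy stand-in for the stacked rows; the real data has the columns in dataset_info
df = pd.DataFrame({
    "document": ["a", "a", "b"],
    "summary":  ["x", "x", "y"],
})

# drop exact document+summary duplicates, then shuffle with the logged seed 323
df = df.drop_duplicates(subset=["document", "summary"])
df = df.sample(frac=1, random_state=323).reset_index(drop=True)
```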
## Citation
If you find this useful in your work, please consider citing us.
```
@misc{stacked_summaries_2023,
  author    = {{Stacked Summaries: Karim Foda and Peter Szemraj}},
  title     = {stacked-xsum-1024 (Revision 2d47220)},
  year      = 2023,
  url       = {https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024},
  doi       = {10.57967/hf/0390},
  publisher = {Hugging Face}
}
```