Commit 04553c9 (parent 185057a) by pszemraj: Update README.md (README.md, +14 −0)
a "stacked" version of `xsum`
1. Original Dataset: a copy of the base dataset.
2. Stacked Rows: the original dataset is processed by stacking rows subject to two constraints:
   - Maximum Input Length: input sequences are at most 1024 tokens under the longt5 model tokenizer.
   - Maximum Output Length: output sequences are also at most 1024 tokens under the longt5 model tokenizer.
3. Special Token: the dataset uses the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. Explicitly add this special token to your model's tokenizer before training so that it is recognized and processed correctly during downstream usage.
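The stacking step described above can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual build script: it uses a simple whitespace word count as a stand-in for the longt5 tokenizer's token count, and a greedy policy for when to start a new stack.

```python
MAX_TOKENS = 1024  # budget applied to both inputs and outputs, per the README


def count_tokens(text: str) -> int:
    """Stand-in token count; the real dataset used the longt5 tokenizer."""
    return len(text.split())


def stack_rows(rows, max_tokens=MAX_TOKENS):
    """Greedily stack (document, summary) pairs while both sides fit the budget.

    Stacked summaries are joined with the [NEXT_CONCEPT] separator token.
    """
    stacked, docs, sums = [], [], []
    for doc, summ in rows:
        cand_doc = "\n\n".join(docs + [doc])
        cand_sum = " [NEXT_CONCEPT] ".join(sums + [summ])
        if docs and (count_tokens(cand_doc) > max_tokens
                     or count_tokens(cand_sum) > max_tokens):
            # budget exceeded: emit the current stack and start a new one
            stacked.append(("\n\n".join(docs), " [NEXT_CONCEPT] ".join(sums)))
            docs, sums = [doc], [summ]
        else:
            docs.append(doc)
            sums.append(summ)
    if docs:
        stacked.append(("\n\n".join(docs), " [NEXT_CONCEPT] ".join(sums)))
    return stacked
```

Before fine-tuning, register the separator so it is not split into sub-tokens, e.g. `tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})` with a Hugging Face tokenizer, followed by `model.resize_token_embeddings(len(tokenizer))`.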
## updates

- dec 3: upload initial version
- dec 4: upload v2 with basic data quality fixes (i.e. the `is_stacked` column)
- dec 5 0500: upload v3, which has pre-randomised row order and drops duplicate document+summary rows
## stats

![stats](https://i.imgur.com/TyyDthT.png)
## dataset details

see the repo `.log` file for more details.