---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: Reddit Webis-TLDR-17
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
train-eval-index:
- config: default
  task: summarization
  task_id: summarization
  splits:
    train_split: train
  col_mapping:
    content: text
    summary: target
  metrics:
  - type: rouge
    name: Rouge
tags:
- reddit-posts-summarization
dataset_info:
  features:
  - name: author
    dtype: string
  - name: body
    dtype: string
  - name: normalizedBody
    dtype: string
  - name: subreddit
    dtype: string
  - name: subreddit_id
    dtype: string
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: summary
    dtype: string
  splits:
  - name: train
    num_bytes: 18940542951
    num_examples: 3848330
  download_size: 3141854161
  dataset_size: 18940542951
---

# Dataset Card for Reddit Webis-TLDR-17

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
- **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- **Paper:** [https://aclanthology.org/W17-4508](https://aclanthology.org/W17-4508)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

### Dataset Summary

This corpus contains preprocessed posts from the Reddit dataset (Webis-TLDR-17).
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.

All features are strings: author, body, normalizedBody, subreddit, subreddit_id, id, content, summary.
The content field is used as the source document and the summary field as its reference summary.
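
As a quick orientation, the snippet below sketches loading the corpus with the Hugging Face `datasets` library. The Hub identifier is an assumption; substitute the actual id under which this card is hosted.

```python
from datasets import load_dataset

# The Hub id below is an assumption; substitute the actual identifier
# for this dataset (e.g. "reddit" or "webis/tldr-17").
ds = load_dataset("webis/tldr-17", split="train")

example = ds[0]
print(example["content"][:200])  # the source document
print(example["summary"])        # the author-written TL;DR
```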

### Supported Tasks and Leaderboards

Summarization (abstractive)

Known ROUGE scores achieved on the Webis-TLDR-17 corpus:

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper/Source |
|-------|---------|---------|---------|--------------|
| Transformer + Copy (Gehrmann et al., 2019) | 22 | 6 | 17 | Generating Summaries with Finetuned Language Models |
| Unified VAE + PGN (Choi et al., 2019) | 19 | 4 | 15 | VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization |

(Source: https://github.com/sebastianruder/NLP-progress/blob/master/english/summarization.md)
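
As a rough sketch of how such scores are computed, the snippet below uses the Hugging Face `evaluate` library (which wraps the `rouge_score` package) on a toy prediction/reference pair:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["dogs need regular walks to stay healthy"],
    references=["walk your dog regularly to keep it healthy"],
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum F-measures
```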

### Languages

English

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

An example of 'train' looks as follows.
```
{
    "author": "me",
    "body": "<>",
    "content": "input document.",
    "id": "1",
    "normalizedBody": "",
    "subreddit": "machinelearning",
    "subreddit_id": "2",
    "summary": "output summary."
}
```

### Data Fields

The data fields are the same among all splits.

#### default
- `author`: a `string` feature.
- `body`: a `string` feature.
- `normalizedBody`: a `string` feature.
- `subreddit`: a `string` feature.
- `subreddit_id`: a `string` feature.
- `id`: a `string` feature.
- `content`: a `string` feature.
- `summary`: a `string` feature.

### Data Splits

| name  | train |
|-------|------:|
|default|3848330|

This corpus does not contain a separate test set. It is therefore up to users to divide the corpus into appropriate training, validation, and test sets.
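
Continuing from the loading sketch above, one minimal way to derive held-out splits with `datasets` (the 90/5/5 proportions and seed are illustrative, not prescribed by the corpus):

```python
# Carve validation and test sets out of the single train split.
splits = ds.train_test_split(test_size=0.1, seed=42)
held_out = splits["test"].train_test_split(test_size=0.5, seed=42)

train_ds = splits["train"]   # 90% train
val_ds = held_out["train"]   # 5% validation
test_ds = held_out["test"]   # 5% test
```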

## Dataset Creation

### Curation Rationale

In the scope of abstractive summarization, the creators of Webis-TLDR-17 propose mining social media for author-provided summaries, taking advantage of the common practice of appending a "TL;DR" to long posts. A large Reddit crawl was used to yield the Webis-TLDR-17 corpus. This dataset is intended to complement existing summarization corpora, which come primarily from the news genre.

### Source Data

Reddit posts (submissions & comments) containing "TL;DR", posted from 2006 to 2016. Multiple subreddits are included.

#### Initial Data Collection and Normalization

Initial data: a set of 286 million submissions and 1.6 billion comments posted to Reddit between 2006 and 2016.
A pipeline of five consecutive filtering steps was then applied.
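
The actual corpus-construction code lives in the repository linked above. Purely as a hypothetical illustration of the core mining idea (not the published pipeline), one step might split a post at its TL;DR marker like this:

```python
import re

# Hypothetical illustration only: split a post at the first "TL;DR"
# marker into (content, summary). The real Webis-TLDR-17 pipeline
# applies five consecutive filtering steps beyond this.
TLDR_RE = re.compile(r"\btl\s*;?\s*dr\b[:,\-\s]*", re.IGNORECASE)

def split_tldr(post: str):
    match = TLDR_RE.search(post)
    if match is None:
        return None  # no TL;DR marker: discard the post
    content = post[:match.start()].strip()
    summary = post[match.end():].strip()
    if not content or not summary:
        return None  # one side is empty: discard the post
    return content, summary
```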

#### Who are the source language producers?

The contents of the dataset are produced by human authors. Bot-generated content was eliminated by filtering out all bot accounts with the help of an extensive list provided by the Reddit community, as well as by manual inspection of cases where the user name contained the substring "bot."

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

This dataset was created to serve as a source of large-scale summarization training data. It is primarily geared towards automatic abstractive summarization, which can be considered one of the most challenging variants of automatic summarization. It also aims to address the lack of genre diversity among summarization datasets, most of which are news-related.

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

Reddit users write TL;DRs with various intentions, such as providing a "true" summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, while the first kind of TL;DR is the most important for training summarization models, the latter kinds enable various alternative summarization-related tasks.

Although filtering was performed, abusive language may still be present.

## Additional Information

### Dataset Curators

Michael Völske, Martin Potthast, Shahbaz Syed, Benno Stein

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```

@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael  and
      Potthast, Martin  and
      Syed, Shahbaz  and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}

```


### Contributions

Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.