abhik1505040 committed on
Commit
68f72f3
2 Parent(s): b59185c 6cccd3c

Merge branch 'main' of https://huggingface.co/datasets/csebuetnlp/xlsum into main

Files changed (1)
  1. README.md +150 -85
README.md CHANGED
@@ -1,13 +1,57 @@
 ---
 task_categories:
 - conditional-text-generation
- - text-classification
 task_ids:
 - summarization
 languages:
 - english
 licenses:
- - other-research-only
 multilinguality:
 - multilingual
 size_categories:
@@ -50,164 +94,185 @@ paperswithcode_id: xl-sum
 
 ## Dataset Description
 
- - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
 
 ### Dataset Summary
 
- Briefly summarize the dataset, its intended use and the supported tasks. Give an overview of how and why the dataset was created. The summary should explicitly mention the languages present in the dataset (possibly in broad terms, e.g. *translations between several pairs of European languages*), and describe the domain, topic, or genre covered.
 
- ### Supported Tasks and Leaderboards
 
- For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
 
- - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
 
 ### Languages
 
- Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
 
- When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
 
 ## Dataset Structure
 
 ### Data Instances
 
- Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
-
- ```
- {
-     'example_field': ...,
-     ...
  }
- ```
-
- Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
 
 ### Data Fields
 
- List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
-
- - `example_field`: description of `example_field`
-
- Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
 
 ### Data Splits
 
- Describe and name the splits in the dataset if there are more than one.
-
- Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
-
- Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
-
- |                         | Train | Valid | Test |
- | ----------------------- | ----- | ----- | ---- |
- | Input Sentences         |       |       |      |
- | Average Sentence Length |       |       |      |
 
 ## Dataset Creation
 
 ### Curation Rationale
 
- What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
 
 ### Source Data
 
- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
 
 #### Initial Data Collection and Normalization
 
- Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
-
- If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
 
- If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
 
 #### Who are the source language producers?
 
- State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
 
- If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
-
- Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
-
- Describe other people represented or mentioned in the data. Where possible, link to references for the information.
 
 ### Annotations
 
- If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
 
 #### Annotation process
 
- If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
 
 #### Who are the annotators?
 
- If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
-
- Describe the people or systems who originally created the annotations and their selection criteria if applicable.
-
- If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
-
- Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
 
 ### Personal and Sensitive Information
 
- State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
-
- State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
-
- If efforts were made to anonymize the data, describe the anonymization process.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
- Please discuss some of the ways you believe the use of this dataset will impact society.
-
- The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
-
- Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
 
 ### Discussion of Biases
 
- Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
-
- For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
-
- If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
 
 ### Other Known Limitations
 
- If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
 
 ## Additional Information
 
 ### Dataset Curators
 
- List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
 
 ### Licensing Information
 
- Provide the license and link to the license webpage if available.
-
 ### Citation Information
 
- Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
 ```
- @article{article_id,
-     author = {Author List},
-     title = {Dataset Paper Title},
-     journal = {Publication Venue},
-     year = {2525}
  }
 ```
 
- If the dataset has a [DOI](https://www.doi.org/), please provide it here.
 
 ### Contributions
 
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
 
 ---
 task_categories:
 - conditional-text-generation
 task_ids:
 - summarization
 languages:
+ - amharic
+ - arabic
+ - azerbaijani
+ - bengali
+ - burmese
+ - chinese_simplified
+ - chinese_traditional
 - english
+ - french
+ - gujarati
+ - hausa
+ - hindi
+ - igbo
+ - indonesian
+ - japanese
+ - kirundi
+ - korean
+ - kyrgyz
+ - marathi
+ - nepali
+ - oromo
+ - pashto
+ - persian
+ - pidgin
+ - portuguese
+ - punjabi
+ - russian
+ - scottish_gaelic
+ - serbian_cyrillic
+ - serbian_latin
+ - sinhala
+ - somali
+ - spanish
+ - swahili
+ - tamil
+ - telugu
+ - thai
+ - tigrinya
+ - turkish
+ - ukrainian
+ - urdu
+ - uzbek
+ - vietnamese
+ - welsh
+ - yoruba
+
 licenses:
+ - cc-by-nc-sa-4.0
 multilinguality:
 - multilingual
 size_categories:
 
 
 ## Dataset Description
 
+ - **Repository:** [https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum)
+ - **Paper:** [XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages](https://aclanthology.org/2021.findings-acl.413/)
+ - **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
 
 ### Dataset Summary
 
+ We present XL-Sum, a comprehensive and diverse dataset comprising 1.35 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 45 languages, ranging from low- to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation.
 
+ ### Supported Tasks and Leaderboards
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ### Languages
 
+ `amharic`, `arabic`, `azerbaijani`, `bengali`, `burmese`, `chinese_simplified`, `chinese_traditional`, `english`, `french`, `gujarati`, `hausa`, `hindi`, `igbo`, `indonesian`, `japanese`, `kirundi`, `korean`, `kyrgyz`, `marathi`, `nepali`, `oromo`, `pashto`, `persian`, `pidgin` `**`, `portuguese`, `punjabi`, `russian`, `scottish_gaelic`, `serbian_cyrillic`, `serbian_latin`, `sinhala`, `somali`, `spanish`, `swahili`, `tamil`, `telugu`, `thai`, `tigrinya`, `turkish`, `ukrainian`, `urdu`, `uzbek`, `vietnamese`, `welsh`, `yoruba`
 
+ `**` West African Pidgin English
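
For illustration, the language list above can be used to select a configuration when loading the data. A minimal sketch, assuming each listed name doubles as a `datasets` config name; the `xlsum_config` helper below is hypothetical, not part of the released code:

```python
# The tuple below is copied from the language list on this card.
XLSUM_LANGUAGES = (
    "amharic", "arabic", "azerbaijani", "bengali", "burmese",
    "chinese_simplified", "chinese_traditional", "english", "french",
    "gujarati", "hausa", "hindi", "igbo", "indonesian", "japanese",
    "kirundi", "korean", "kyrgyz", "marathi", "nepali", "oromo", "pashto",
    "persian", "pidgin", "portuguese", "punjabi", "russian",
    "scottish_gaelic", "serbian_cyrillic", "serbian_latin", "sinhala",
    "somali", "spanish", "swahili", "tamil", "telugu", "thai", "tigrinya",
    "turkish", "ukrainian", "urdu", "uzbek", "vietnamese", "welsh", "yoruba",
)

def xlsum_config(language: str) -> str:
    """Normalize a human-readable language name to one of the 45 config names."""
    name = language.strip().lower().replace(" ", "_")
    if name not in XLSUM_LANGUAGES:
        raise ValueError(f"unknown XL-Sum language: {language!r}")
    return name
```

With the `datasets` library installed, something like `load_dataset("csebuetnlp/xlsum", xlsum_config("Bengali"))` would then fetch a single language's splits.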
 
 ## Dataset Structure
 
 ### Data Instances
 
+ One example from the `English` dataset is given below in JSON format.
+ ```
+ {
+ "id": "technology-17657859",
+ "url": "https://www.bbc.com/news/technology-17657859",
+ "title": "Yahoo files e-book advert system patent applications",
+ "summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.",
+ "text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\""
 }
+ ```
 
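
The summary above describes XL-Sum as highly abstractive. As a rough illustration only (not the evaluation used in the paper), a helper like the following, applied to the `text` and `summary` fields of an instance such as the one above, gives a crude abstractiveness proxy for English; the `novel_unigram_ratio` function is hypothetical and its Latin-script tokenizer would not carry over to most of the other languages:

```python
import re

def novel_unigram_ratio(text: str, summary: str) -> float:
    """Fraction of summary word types that never appear in the article text.

    Higher values suggest a more abstractive (less extractive) summary.
    The regex tokenizer below is an English-only simplification.
    """
    tokenize = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    text_vocab, summary_vocab = tokenize(text), tokenize(summary)
    if not summary_vocab:
        return 0.0
    return len(summary_vocab - text_vocab) / len(summary_vocab)
```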
 
 ### Data Fields
 
+ See [Data Instances](#data-instances). The fields are self-explanatory.
 
 ### Data Splits
 
+ We used an 80%-10%-10% split for all languages, with a few exceptions. `English` was split 93%-3.5%-3.5% so that its evaluation set size resembles those of `CNN/DM` and `XSum`; `Scottish Gaelic`, `Kyrgyz` and `Sinhala` had relatively few samples, so their evaluation sets were increased to 500 samples each for more reliable evaluation. The same articles were used for evaluation in the two variants of Chinese and Serbian to prevent data leakage in multilingual training. Per-language BBC subdomains and train-dev-test example counts are given below:
+
+ Language | ISO 639-1 Code | BBC subdomain(s) | Train | Dev | Test | Total |
+ --------------|----------------|------------------|-------|-----|------|-------|
+ Amharic | am | https://www.bbc.com/amharic | 5761 | 719 | 719 | 7199 |
+ Arabic | ar | https://www.bbc.com/arabic | 37519 | 4689 | 4689 | 46897 |
+ Azerbaijani | az | https://www.bbc.com/azeri | 6478 | 809 | 809 | 8096 |
+ Bengali | bn | https://www.bbc.com/bengali | 8102 | 1012 | 1012 | 10126 |
+ Burmese | my | https://www.bbc.com/burmese | 4569 | 570 | 570 | 5709 |
+ Chinese (Simplified) | zh-CN | https://www.bbc.com/ukchina/simp, https://www.bbc.com/zhongwen/simp | 37362 | 4670 | 4670 | 46702 |
+ Chinese (Traditional) | zh-TW | https://www.bbc.com/ukchina/trad, https://www.bbc.com/zhongwen/trad | 37373 | 4670 | 4670 | 46713 |
+ English | en | https://www.bbc.com/english, https://www.bbc.com/sinhala `*` | 306522 | 11535 | 11535 | 329592 |
+ French | fr | https://www.bbc.com/afrique | 8697 | 1086 | 1086 | 10869 |
+ Gujarati | gu | https://www.bbc.com/gujarati | 9119 | 1139 | 1139 | 11397 |
+ Hausa | ha | https://www.bbc.com/hausa | 6418 | 802 | 802 | 8022 |
+ Hindi | hi | https://www.bbc.com/hindi | 70778 | 8847 | 8847 | 88472 |
+ Igbo | ig | https://www.bbc.com/igbo | 4183 | 522 | 522 | 5227 |
+ Indonesian | id | https://www.bbc.com/indonesia | 38242 | 4780 | 4780 | 47802 |
+ Japanese | ja | https://www.bbc.com/japanese | 7113 | 889 | 889 | 8891 |
+ Kirundi | rn | https://www.bbc.com/gahuza | 5746 | 718 | 718 | 7182 |
+ Korean | ko | https://www.bbc.com/korean | 4407 | 550 | 550 | 5507 |
+ Kyrgyz | ky | https://www.bbc.com/kyrgyz | 2266 | 500 | 500 | 3266 |
+ Marathi | mr | https://www.bbc.com/marathi | 10903 | 1362 | 1362 | 13627 |
+ Nepali | np | https://www.bbc.com/nepali | 5808 | 725 | 725 | 7258 |
+ Oromo | om | https://www.bbc.com/afaanoromoo | 6063 | 757 | 757 | 7577 |
+ Pashto | ps | https://www.bbc.com/pashto | 14353 | 1794 | 1794 | 17941 |
+ Persian | fa | https://www.bbc.com/persian | 47251 | 5906 | 5906 | 59063 |
+ Pidgin`**` | n/a | https://www.bbc.com/pidgin | 9208 | 1151 | 1151 | 11510 |
+ Portuguese | pt | https://www.bbc.com/portuguese | 57402 | 7175 | 7175 | 71752 |
+ Punjabi | pa | https://www.bbc.com/punjabi | 8215 | 1026 | 1026 | 10267 |
+ Russian | ru | https://www.bbc.com/russian, https://www.bbc.com/ukrainian `*` | 62243 | 7780 | 7780 | 77803 |
+ Scottish Gaelic | gd | https://www.bbc.com/naidheachdan | 1313 | 500 | 500 | 2313 |
+ Serbian (Cyrillic) | sr | https://www.bbc.com/serbian/cyr | 7275 | 909 | 909 | 9093 |
+ Serbian (Latin) | sr | https://www.bbc.com/serbian/lat | 7276 | 909 | 909 | 9094 |
+ Sinhala | si | https://www.bbc.com/sinhala | 3249 | 500 | 500 | 4249 |
+ Somali | so | https://www.bbc.com/somali | 5962 | 745 | 745 | 7452 |
+ Spanish | es | https://www.bbc.com/mundo | 38110 | 4763 | 4763 | 47636 |
+ Swahili | sw | https://www.bbc.com/swahili | 7898 | 987 | 987 | 9872 |
+ Tamil | ta | https://www.bbc.com/tamil | 16222 | 2027 | 2027 | 20276 |
+ Telugu | te | https://www.bbc.com/telugu | 10421 | 1302 | 1302 | 13025 |
+ Thai | th | https://www.bbc.com/thai | 6616 | 826 | 826 | 8268 |
+ Tigrinya | ti | https://www.bbc.com/tigrinya | 5451 | 681 | 681 | 6813 |
+ Turkish | tr | https://www.bbc.com/turkce | 27176 | 3397 | 3397 | 33970 |
+ Ukrainian | uk | https://www.bbc.com/ukrainian | 43201 | 5399 | 5399 | 53999 |
+ Urdu | ur | https://www.bbc.com/urdu | 67665 | 8458 | 8458 | 84581 |
+ Uzbek | uz | https://www.bbc.com/uzbek | 4728 | 590 | 590 | 5908 |
+ Vietnamese | vi | https://www.bbc.com/vietnamese | 32111 | 4013 | 4013 | 40137 |
+ Welsh | cy | https://www.bbc.com/cymrufyw | 9732 | 1216 | 1216 | 12164 |
+ Yoruba | yo | https://www.bbc.com/yoruba | 6350 | 793 | 793 | 7936 |
+
+ `*` A lot of articles in BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using [Fasttext](https://arxiv.org/abs/1607.01759) and moved accordingly.
+
+ `**` West African Pidgin English
 
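
The default split arithmetic described above can be sketched in a few lines: taking 10% (floored) of a language's examples each for dev and test and leaving the remainder for training is consistent with the standard rows of the table, leaving aside the stated `English`, `Scottish Gaelic`, `Kyrgyz` and `Sinhala` exceptions. The `split_80_10_10` helper is an illustration, not the released preprocessing code:

```python
def split_80_10_10(n_examples: int) -> tuple[int, int, int]:
    """Return (train, dev, test) sizes: floor(n/10) each for dev and test,
    remainder for training (the card's default 80%-10%-10% policy)."""
    dev = test = n_examples // 10
    train = n_examples - dev - test
    return train, dev, test
```

For example, applying it to the Amharic total of 7199 examples reproduces that row's 5761/719/719 breakdown.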
 
 ## Dataset Creation
 
 ### Curation Rationale
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ### Source Data
 
+ [BBC News](https://www.bbc.co.uk/ws/languages)
 
 #### Initial Data Collection and Normalization
 
+ [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
 
 #### Who are the source language producers?
 
+ [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
 
 ### Annotations
 
+ [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
 
 #### Annotation process
 
+ [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
 
 #### Who are the annotators?
 
+ [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
 
 ### Personal and Sensitive Information
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ### Discussion of Biases
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ### Other Known Limitations
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ## Additional Information
 
 ### Dataset Curators
 
+ [More information needed](https://github.com/csebuetnlp/xl-sum)
 
 ### Licensing Information
 
+ Contents of this repository are restricted to non-commercial research purposes only under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
 
 ### Citation Information
 
+ If you use any of the datasets, models or code modules, please cite the following paper:
 ```
+ @inproceedings{hasan-etal-2021-xl,
+     title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
+     author = "Hasan, Tahmid and
+       Bhattacharjee, Abhik and
+       Islam, Md. Saiful and
+       Mubasshir, Kazi and
+       Li, Yuan-Fang and
+       Kang, Yong-Bin and
+       Rahman, M. Sohel and
+       Shahriyar, Rifat",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+     month = aug,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-acl.413",
+     pages = "4693--4703",
  }
 ```
 
 ### Contributions
 
+ Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.