Dataset Card for "scientific_papers"
Dataset Summary
The scientific_papers dataset contains two sets of long, structured documents, obtained from the ArXiv and PubMed OpenAccess repositories.
Both the "arxiv" and "pubmed" configurations have three features:
- article: the body of the document, with paragraphs separated by "\n".
- abstract: the abstract of the document, with paragraphs separated by "\n".
- section_names: the titles of the sections, separated by "\n".
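Since all three features are newline-separated strings, downstream code typically splits them back into lists of paragraphs or section titles. A minimal sketch, using a hypothetical abbreviated record shaped like this dataset's examples:

```python
# Split the newline-separated string features of a scientific_papers
# record back into lists. The record below is a hypothetical,
# abbreviated example, not an actual dataset entry.
record = {
    "article": "first paragraph\nsecond paragraph\nthird paragraph",
    "abstract": "abstract paragraph one\nabstract paragraph two",
    "section_names": "introduction\nmethods\nresults\nconclusion",
}

def split_fields(rec):
    """Return the record with each string feature split on newlines."""
    return {key: value.split("\n") for key, value in rec.items()}

parsed = split_fields(record)
print(len(parsed["article"]))      # number of article paragraphs
print(parsed["section_names"][0])  # first section title
```

The same helper works on records yielded by the `datasets` library, since every split shares this schema.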
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
arxiv
- Size of downloaded dataset files: 4.50 GB
- Size of the generated dataset: 7.58 GB
- Total amount of disk used: 12.09 GB
An example from the 'train' split looks as follows (the example was too long and was cropped):
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
pubmed
- Size of downloaded dataset files: 4.50 GB
- Size of the generated dataset: 2.51 GB
- Total amount of disk used: 7.01 GB
An example from the 'validation' split looks as follows (the example was too long and was cropped):
{
"abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
"article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
"section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
Data Fields
The data fields are the same among all splits.
arxiv
- article: a string feature.
- abstract: a string feature.
- section_names: a string feature.
pubmed
- article: a string feature.
- abstract: a string feature.
- section_names: a string feature.
Data Splits
| name | train | validation | test |
|---|---|---|---|
| arxiv | 203037 | 6436 | 6440 |
| pubmed | 119924 | 6633 | 6658 |
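As a quick sanity check, the split sizes above can be totalled per configuration. A small sketch using the counts from the table:

```python
# Per-split example counts for each configuration, taken from the
# Data Splits table above.
splits = {
    "arxiv":  {"train": 203037, "validation": 6436, "test": 6440},
    "pubmed": {"train": 119924, "validation": 6633, "test": 6658},
}

for name, counts in splits.items():
    total = sum(counts.values())
    print(f"{name}: {total} examples "
          f"({counts['train'] / total:.1%} train)")
```

This shows each configuration reserves roughly 3% of its examples for each of the validation and test splits.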
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@article{Cohan_2018,
  title={A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents},
  url={http://dx.doi.org/10.18653/v1/n18-2097},
  doi={10.18653/v1/n18-2097},
  journal={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
  publisher={Association for Computational Linguistics},
  author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
  year={2018}
}
Contributions
Thanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset.
Homepage:
Repository: github.com
Paper: A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
Size of downloaded dataset files: 9.01 GB