---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificPapers
size_categories:
- 100K<n<1M
source_datasets:
- scientific_papers
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
tags:
- abstractive-summarization
dataset_info:
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 8367611540
num_examples: 203037
- name: validation
num_bytes: 256178362
num_examples: 6440
- name: test
num_bytes: 255771184
num_examples: 6436
download_size: 4718720913
dataset_size: 8879561086
---
Dataset Card for "scientific_papers"
This dataset is derived from https://huggingface.co/datasets/scientific_papers, extended with embeddings computed via the RAG retriever (https://huggingface.co/docs/transformers/model_doc/rag) base model trained on Natural Questions. It was created for Retrieval Augmented Generation (RAG) examples and experiments.
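The sketch below shows one plausible way such embeddings can be produced. It assumes the `facebook/dpr-ctx_encoder-single-nq-base` context encoder that backs the Natural Questions RAG retriever; the exact checkpoint used to build this dataset is not documented in this card.

```python
# Minimal sketch: embed an article with the DPR context encoder used by
# RAG's Natural Questions retriever. The exact checkpoint used to build
# this dataset is an assumption.
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base"
)
encoder = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base"
)

inputs = tokenizer(
    "the leptonic decays of a charged pseudoscalar meson ...",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
# pooler_output is a 768-dimensional vector, matching the
# `embeddings` feature of this dataset.
embedding = encoder(**inputs).pooler_output  # shape: (1, 768)
```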
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage:
- Repository: https://github.com/armancohan/long-summarization
- Paper: A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
- Point of Contact: More Information Needed
Dataset Summary
The scientific papers dataset contains a single set of long and structured documents, obtained from the ArXiv repository.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
arxiv
- Size of downloaded dataset files: 4.72 GB
- Size of the generated dataset: 8.88 GB
- Total amount of disk used: 13.60 GB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
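A record like the one above can be inspected after loading the dataset with the `datasets` library. This is a sketch; the repository id below is a placeholder for this derived dataset's actual id.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute this dataset's actual id.
ds = load_dataset("<namespace>/scientific_papers", split="train")

example = ds[0]
print(example["abstract"][:100])
print(len(example["embeddings"]))  # 768
```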
Data Fields
The data fields are the same among all splits.
arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
- `embeddings`: a sequence of `float64` values forming a 768-dimensional vector.
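Since every row carries a precomputed 768-dimensional vector, a FAISS index can be built directly over the `embeddings` column for retrieval experiments. A minimal sketch, assuming `ds` is the train split loaded as above and that queries are encoded with the Natural Questions DPR question encoder paired with the context encoder assumed earlier:

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizerFast

# Index the precomputed article embeddings (requires the faiss package).
ds.add_faiss_index(column="embeddings")

# Encode a query with the DPR question encoder; which encoder was
# actually paired with these embeddings is an assumption.
q_tokenizer = DPRQuestionEncoderTokenizerFast.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
q_encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
query = q_encoder(
    **q_tokenizer("leptonic decays of charged mesons", return_tensors="pt")
).pooler_output.detach().numpy()[0]

# Retrieve the five nearest articles by inner-product similarity.
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
print(retrieved["abstract"][0][:100])
```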
Data Splits
| name  | train  | validation | test |
|-------|--------|------------|------|
| arxiv | 203037 | 6440       | 6436 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
Contributions
Thanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset.