---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  - name: authors
    dtype: string
  - name: published_date
    dtype: string
  - name: link
    dtype: string
  - name: markdown
    dtype: string
  splits:
  - name: train
    num_bytes: 6952989384
    num_examples: 138380
  download_size: 3232936300
  dataset_size: 6952989384
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-nc-sa-4.0
---
## Arxiver Dataset
Arxiver consists of 138,380 [arXiv](https://arxiv.org/) papers converted to multi-markdown (**.mmd**) format. Our dataset includes each paper's original arXiv ID, title, abstract, authors, publication date, and URL, along with the corresponding markdown conversion, covering papers published between January 2023 and October 2023.

We hope our dataset will be useful for applications such as semantic search, domain-specific language modeling, question answering, and summarization.

## Curation
The Arxiver dataset is created using the neural OCR model [Nougat](https://facebookresearch.github.io/nougat/). After OCR processing, we apply custom text processing steps to refine the data, including extracting author information, removing reference sections, and performing additional cleaning and formatting.
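
As a rough illustration of the kind of cleaning step this involves, the snippet below strips a trailing references section from a converted `.mmd` document with a regex. This is a minimal sketch for intuition only, not the actual Arxiver pipeline; the heading pattern is an assumption:
```py
import re

def strip_references(markdown: str) -> str:
    # Cut the document at the first "References" heading (case-insensitive).
    # A simplified stand-in for the dataset's actual cleaning steps.
    return re.split(r"\n#{1,6}\s*References\s*\n", markdown, flags=re.IGNORECASE)[0]

sample = "# A Paper\nBody text.\n## References\n[1] Some citation."
print(strip_references(sample))  # -> "# A Paper\nBody text."
```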

## Using Arxiver
You can easily download and use the Arxiver dataset with Hugging Face's [datasets](https://huggingface.co/datasets) library.
```py
from datasets import load_dataset

# ~3.2 GB download, ~7 GB on disk
dataset = load_dataset("neuralwork/arxiver")
print(dataset)
```
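
Each example carries the fields listed in the metadata above (`id`, `title`, `abstract`, `authors`, `published_date`, `link`, `markdown`). Continuing from the snippet above, you can inspect a single record like this:
```py
sample = dataset["train"][0]
print(sample["title"])
print(sample["published_date"])
print(sample["markdown"][:500])  # first 500 characters of the converted paper
```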

Alternatively, you can stream the dataset to save disk space or to download only the part you need:
```py
from datasets import load_dataset

dataset = load_dataset("neuralwork/arxiver", streaming=True)
print(dataset)
print(next(iter(dataset['train'])))
```
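
Streaming returns an `IterableDataset`, so you can look at a handful of examples without materializing the full download, for instance with `itertools.islice`:
```py
from itertools import islice

# Print the id and title of the first 5 streamed papers.
for paper in islice(dataset["train"], 5):
    print(paper["id"], paper["title"])
```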

## References
The original articles are maintained by [arXiv](https://arxiv.org/) and copyrighted to their original authors; please refer to the arXiv license information [page](https://info.arxiv.org/help/license/index.html) for details. We release our dataset under a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license. If you use this dataset in your research or project, please cite it as follows:
```
@misc{acar_arxiver2024,
  author = {Alican Acar and Alara Dirik and Muhammet Hatipoglu},
  title = {ArXiver},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/neuralwork/arxiver}}
}
```