tags:
- art
- history
- philosophy
pretty_name: peS2o (Pretraining Efficiently on S2ORC)
size_categories:
- 10B<n<100B
source_datasets:
- allenai/s2orc
---

# peS2o 🌿🎓

*Pretraining Efficiently on [S2ORC][2]!*

The peS2o dataset is a collection of ~40M Creative Commons-licensed academic papers,
cleaned, filtered, and formatted for pre-training of language models. It is derived from
the [Semantic Scholar Open Research Corpus][2] ([Lo et al., 2020][1]), or S2ORC.

We release multiple versions of peS2o, each with different processing and a different knowledge cutoff
date. We recommend using the latest version available.

If you use this dataset, please cite:

    @techreport{pes2o,
        author = {Luca Soldaini and Kyle Lo},
        year = 2023,
        title = {{peS2o (Pretraining Efficiently on S2ORC) Dataset}},
        institution = {{Allen Institute for AI}},
        note = {\url{https://huggingface.co/datasets/allenai/pes2o}}
    }

Each document in the dataset is a dictionary with the following fields:

- `s2orc`: collection of full-text papers
- `s2ag`: collection of titles and abstracts
- `text`: text of the document. Paragraphs are separated by two newlines (`\n\n`).
- `version`: version of peS2o.
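As a quick sketch of this layout, paragraphs can be recovered by splitting `text` on the double newline. The record below is invented for illustration; only the field names and the `\n\n` convention come from the list above.

```python
# A minimal sketch of one peS2o record; field names follow the list above,
# but the values are made up for illustration.
doc = {
    "text": "Deep learning has advanced NLP.\n\nWe pretrain on academic text.",
    "version": "v2",
}

# Paragraphs are separated by two newlines (\n\n).
paragraphs = doc["text"].split("\n\n")
print(len(paragraphs))  # 2
```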

------

## peS2o V1

### Key Facts

### Processing

Processing differs slightly depending on whether a document was derived from the full-text corpus (`s2orc`) or the title-and-abstract corpus (`s2ag`).

#### S2ORC-derived documents

Unfiltered, S2ORC contains 11.3M papers and 46.9B whitespace-separated tokens as of 2023-01-03. To derive peS2o v1, we impose the following constraints:

- The paper must have a title and abstract.
- From each paper, we use [Grobid](https://github.com/kermitt2/grobid) to extract section headers and paragraphs; figures, tables, references, and any other non-textual content are removed. Titles and abstracts are also available, but they come from the Semantic Scholar metadata (obtained through the APIs), not Grobid.
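The first constraint above amounts to a simple predicate over paper metadata. A minimal sketch (the field names `title` and `abstract` are hypothetical, chosen for illustration only):

```python
def has_title_and_abstract(paper: dict) -> bool:
    """V1 metadata constraint: keep papers with a non-empty title and abstract."""
    return bool(paper.get("title")) and bool(paper.get("abstract"))

# Invented example records: the first passes, the second lacks a title.
papers = [
    {"title": "A Study of Filtering", "abstract": "We filter papers."},
    {"title": "", "abstract": "No title here."},
]
kept = [p for p in papers if has_title_and_abstract(p)]
print(len(kept))  # 1
```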

#### S2AG-derived documents

The S2AG corpus contains titles and abstracts of papers in Semantic Scholar.
Unfiltered, the corpus contains 91.1M papers and 15.5B whitespace-separated tokens as of 2023-01-03. To derive peS2o v1, we impose the following constraints:

- The abstract must be in English.
  - To determine the language, we once again use pycld3.

------

## peS2o V2

### Key Facts

### Processing

peS2o V2 is largely the same as V1, but it includes additional heuristics for `s2ag` aimed at filtering out OCR errors from abstracts.

First, we check whether the abstract was obtained from Semantic Scholar sources that are likely to contain OCR'ed content. For any abstract derived from those sources, we count how often the text contains subsequences matching `\b([A-Za-z]\s)([a-z]\s)*[A-Za-z]\b`, i.e. individual alphabetic letters separated by a space. This heuristic matches cases such as `A b stra ct` (2 matching subsequences), where the OCR parser inserted erroneous spaces.
Any abstract with more than 4 matching subsequences is removed.
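The counting heuristic above can be sketched with Python's `re` module. The regex is taken verbatim from the text; the function names and the example strings are ours, and the threshold follows the card:

```python
import re

# Runs of single alphabetic letters separated by spaces, e.g. "A b".
OCR_RUN = re.compile(r"\b([A-Za-z]\s)([a-z]\s)*[A-Za-z]\b")

def count_ocr_runs(text: str) -> int:
    # findall returns one tuple of groups per non-overlapping match,
    # so its length is the number of matching subsequences.
    return len(OCR_RUN.findall(text))

def is_ocr_garbled(abstract: str, max_runs: int = 4) -> bool:
    # The card removes any abstract with more than 4 matching subsequences.
    return count_ocr_runs(abstract) > max_runs

print(count_ocr_runs("A b c. D e f."))  # 2
print(is_ocr_garbled("A clean abstract about language models."))  # False
```

Note that a run of any length ("A b" or "A b s t r a c t") counts as a single subsequence, since the regex matches greedily and non-overlapping.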