ai-forever committed: Update README.md
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: Russian Spellcheck Punctuation Benchmark
language_bcp47:
- ru-RU
tags:
- russian
---

# Dataset Card for Russian Spellcheck Punctuation Benchmark

## Table of Contents
- [Table of Contents](#table-of-contents)

## Dataset Description

- **Repository:** [SAGE](https://github.com/ai-forever/sage)
- **Paper:** [EACL 2024 paper](https://aclanthology.org/2024.findings-eacl.10/)
- **Point of Contact:** nikita.martynov.98@list.ru

### Dataset Summary

The collection is an updated version of the [Russian Spellcheck Benchmark](https://huggingface.co/datasets/ai-forever/spellcheck_benchmark) with punctuation corrected.

The benchmark includes four datasets, each of which consists of pairs of sentences in the Russian language.
Each pair comprises a sentence, which may contain spelling and punctuation errors, and its corresponding correction.
Datasets were gathered from various sources and domains, including social networks, internet blogs, GitHub commits, medical anamnesis, literature, news, reviews and more.

All datasets were passed through a two-stage manual labeling pipeline.
The correction of a sentence is defined by the agreement of at least two human annotators.
The manual labeling scheme accounts for jargon, collocations and common language; hence, in some cases it encourages annotators not to amend a word in favor of preserving the style of the text.

The latter does not apply to punctuation. Punctuation marks are rigorously corrected in accordance with the rules of the Russian punctuation system.
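
As a toy illustration of the agreement rule above (a sketch, not the authors' actual annotation pipeline), a correction would be accepted only when at least two annotators independently produce an identical string:

```python
# Toy sketch of the ">= 2 annotators agree" rule; not the authors' pipeline.
from collections import Counter

def agreed_correction(candidates):
    """Return the correction produced by at least two annotators, else None."""
    text, votes = Counter(candidates).most_common(1)[0]
    return text if votes >= 2 else None

# Two of three hypothetical annotators agree, so their version is accepted.
print(agreed_correction([
    "Довольно милый, и летом, и зимой обогреваемый тёплым солнышком.",
    "Довольно милый, и летом, и зимой обогреваемый тёплым солнышком.",
    "Довольно милый и летом и зимой обогреваемый тёплым солнышком.",
]))
```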

### Supported Tasks and Leaderboards

- **Task:** automatic spelling correction.
- **Metrics:** https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
- **ERRANT:** https://github.com/chrisjbryant/errant.
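
ERRANT extracts and classifies the edits between a source sentence and its correction. Below is a minimal sketch of its edit-extraction API; note that ERRANT ships with English support out of the box, so the English example is purely illustrative, and applying it to Russian would require a language-specific extension.

```python
# Minimal ERRANT sketch (English pipeline, for illustration only; Russian
# support would need a custom language extension). Requires `pip install errant`
# and spaCy's en_core_web_sm model.
import errant

annotator = errant.load("en")
orig = annotator.parse("This are a example sentence")
cor = annotator.parse("This is an example sentence")

# Each edit carries the original span, the corrected span, and a type label.
for e in annotator.annotate(orig, cor):
    print(e.o_str, "->", e.c_str, e.type)
```
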
### Languages

Russian.

## Dataset Structure

### Data Instances

#### RUSpellRU

- **Size of downloaded dataset files:** 3.65 Mb
- **Size of the generated dataset:** 1.31 Mb
- **Total amount of disk used:** 4.96 Mb

An example of "train" / "test" looks as follows:
```
{
    "source": "давольно милый и летом и зимой обогреваемый теплым солнушком",
    "correction": "Довольно милый, и летом, и зимой обогреваемый тёплым солнышком."
}
```
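
For quick inspection, here is a minimal loading sketch with the `datasets` library; the repository id `ai-forever/spellcheck_punctuation_benchmark` and the `RUSpellRU` config name are assumptions based on this card rather than verified identifiers.

```python
# Minimal sketch; the repository id and config name are assumptions
# based on this card.
from datasets import load_dataset

ruspellru = load_dataset("ai-forever/spellcheck_punctuation_benchmark", "RUSpellRU")
for row in ruspellru["test"].select(range(3)):
    print(row["source"], "->", row["correction"])
```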

#### MultidomainGold

- **Size of downloaded dataset files:** 15.03 Mb
- **Size of the generated dataset:** 5.43 Mb
- **Total amount of disk used:** 20.46 Mb

An example of "test" looks as follows:
```
{
    "source": "для меня всё материальное тленно и лишь находясь в гармонии-для начала с собой-можно радовацца чужому счастью искренне",
    "correction": "Для меня всё материальное тленно, и лишь находясь в гармонии - для начала с собой - можно радоваться чужому счастью искренне.",
    "domain": "web"
}
```
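
Because MultidomainGold rows carry a `domain` field, per-domain slices can be taken directly; again a sketch under the same assumed repository and config identifiers as above.

```python
# Sketch; repository id and config name are assumptions based on this card.
from datasets import load_dataset

mdg = load_dataset("ai-forever/spellcheck_punctuation_benchmark", "MultidomainGold")
web_only = mdg["test"].filter(lambda row: row["domain"] == "web")
print(len(web_only), "web-domain test pairs")
```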

#### MedSpellchecker

An example of "test" looks as follows:
```
{
    "source": "Накануне (18.02.2012 г",
    "correction": "Накануне (18.02.2012 г.)."
}
```

#### GitHubTypoCorpusRu

An example of "test" looks as follows:
```
{
    "source": "text: Пожалуйста выберите чат, чтобы начать общение",
    "correction": "text: Пожалуйста, выберите чат, чтобы начать общение."
}
```

### Data Splits

#### MultidomainGold

|domain|train|test|
|---|---:|---:|
|web|385|756|
|news|361|245|
|social_media|430|200|
|reviews|583|585|
|subtitles|1810|1810|
|strategic_documents|-|250|
|literature|-|260|
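
The per-domain counts above can be sanity-checked directly from the loaded data; a sketch under the same assumed identifiers as in the earlier examples.

```python
# Sketch; repository id and config name are assumptions based on this card.
from collections import Counter

from datasets import load_dataset

mdg = load_dataset("ai-forever/spellcheck_punctuation_benchmark", "MultidomainGold")
print(Counter(mdg["test"]["domain"]))  # e.g. expect 756 "web" test pairs
```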

The datasets were chosen in accordance with the following criteria.
First, domain variation: half of the datasets are drawn from different domains to ensure diversity, while the remaining half come from a single domain.
Another criterion is the presence of spelling (orthographic) and punctuation mistakes:
the datasets comprise exclusively misspellings and typos, omitting grammatical or more complex errors made by non-native speakers.
- **RUSpellRU**: texts collected from [LiveJournal](https://www.livejournal.com/media), with manually corrected typos and errors;
- **MultidomainGold**: examples collected from several text sources, including the open web, news, social media, reviews, subtitles, policy documents and literary works:

To ensure we receive qualified expertise, we set up a test iteration on a small subset of the data for both stages. We manually validated the test results and selected annotators who processed at least six samples (2% of the total test iteration) without making a single error. After the test iteration, we cut 85% and 86% of labellers for the gathering and validation stages, respectively.
We especially urge annotators to correct mistakes associated with the substitution of the letters "ё", "й" and "щ" with the corresponding "е", "и" and "ш", and not to explain abbreviations or correct punctuation errors. Each annotator is also warned about potentially sensitive topics in the data (e.g., politics, societal minorities, and religion).

The annotation of punctuation errors was done in a single iteration, given the low variation and difficulty of the task relative to spelling correction. The annotators were asked to correct punctuation marks in accordance with the rules of the Russian punctuation system.

#### Who are the annotators?

Native Russian speakers who passed the language exam.
The annotators for punctuation errors are also professional editors and linguists.

## Considerations for Using the Data

### Licensing Information

All our datasets are published under the MIT License.

### Citation Information

```
  year={2023}
}

@inproceedings{martynov-etal-2024-methodology,
    title = "A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages",
    author = "Martynov, Nikita and
      Baushenko, Mark and
      Kozlova, Anastasia and
      Kolomeytseva, Katerina and
      Abramov, Aleksandr and
      Fenogenova, Alena",
    editor = "Graham, Yvette and
      Purver, Matthew",
    booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-eacl.10",
    pages = "138--155",
    abstract = "Large language models excel in text generation and generalization, however they face challenges in text editing tasks, especially in correcting spelling errors and mistyping. In this paper, we present a methodology for generative spelling correction (SC), tested on English and Russian languages and potentially can be extended to any language with minor changes. Our research mainly focuses on exploring natural spelling errors and mistyping in texts and studying how those errors can be emulated in correct sentences to enrich generative models{'} pre-train procedure effectively. We investigate the effects of emulations in various text domains and examine two spelling corruption techniques: 1) first one mimics human behavior when making a mistake through leveraging statistics of errors from a particular dataset, and 2) second adds the most common spelling errors, keyboard miss clicks, and some heuristics within the texts. We conducted experiments employing various corruption strategies, models{'} architectures, and sizes in the pre-training and fine-tuning stages and evaluated the models using single-domain and multi-domain test sets. As a practical outcome of our work, we introduce SAGE (Spell checking via Augmentation and Generative distribution Emulation).",
}
```