---
license: mit
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 129624
    num_examples: 10000
  - name: validation_top1
    num_bytes: 10754
    num_examples: 1000
  - name: test_top1
    num_bytes: 10948
    num_examples: 1000
  - name: validation_1_10
    num_bytes: 11618
    num_examples: 1000
  - name: test_1_10
    num_bytes: 11692
    num_examples: 1000
  - name: validation_10_20
    num_bytes: 13401
    num_examples: 1000
  - name: test_10_20
    num_bytes: 13450
    num_examples: 1000
  - name: validation_20_30
    num_bytes: 15112
    num_examples: 1000
  - name: test_20_30
    num_bytes: 15069
    num_examples: 1000
  - name: validation_bottom50
    num_bytes: 15204
    num_examples: 1000
  - name: test_bottom50
    num_bytes: 15076
    num_examples: 1000
  download_size: 241234
  dataset_size: 261948
language:
- en
---

# WikiSpell

## Description
This dataset is a **custom implementation** of the WikiSpell dataset introduced in [Character-Aware Models Improve Visual Text Rendering](https://arxiv.org/pdf/2212.10562.pdf) by Liu et al. (2022).

As in the original WikiSpell dataset, the training set consists of 5,000 words sampled uniformly from the 50% least common Wiktionary words, plus 5,000 words sampled proportionally to their frequency from the 50% most common Wiktionary words.

Unlike the original WikiSpell, we compute word frequencies using the first 100k sentences of OpenWebText ([Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext)) instead of mC4.
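The split construction described above can be sketched roughly as follows. This is a toy illustration, not the exact pipeline used to build this dataset: `sentences` stands in for the frequency corpus, and the word list here is derived from that corpus rather than from Wiktionary.

```python
import random
from collections import Counter

def build_training_words(sentences, n_each=5000, seed=0):
    """Toy sketch of the training-split construction: uniform sampling
    from the 50% least common words, and frequency-weighted sampling
    from the 50% most common words."""
    rng = random.Random(seed)
    counts = Counter(word for s in sentences for word in s.lower().split())
    ranked = [w for w, _ in counts.most_common()]  # most -> least common
    half = len(ranked) // 2
    common_half, rare_half = ranked[:half], ranked[half:]
    # Uniform draw from the rare half (without replacement).
    rare = rng.sample(rare_half, min(n_each, len(rare_half)))
    # Frequency-weighted draw from the common half (with replacement).
    common = rng.choices(common_half,
                         weights=[counts[w] for w in common_half],
                         k=n_each)
    return rare + common
```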

## Usage
This dataset is intended for testing spelling in Large Language Models. The label for each word is its character-by-character spelling, which should be computed as follows:

```python
sample = ds["train"][0]
# Spell the word out character by character, e.g. "cat" -> "c a t"
label = " ".join(sample["text"])
```

**The labels themselves are not included in the dataset** and must be computed as shown above.
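Since the task is character-level spelling, a minimal exact-match evaluation could look like the sketch below. All names here are illustrative: `predictions` would come from the model under test, and `exact_match` is a hypothetical helper, not part of this dataset.

```python
def make_label(word: str) -> str:
    # Target spelling: characters separated by spaces, e.g. "cat" -> "c a t"
    return " ".join(word)

def exact_match(predictions, words):
    """Fraction of words whose predicted spelling exactly matches the label."""
    labels = [make_label(w) for w in words]
    correct = sum(p.strip() == lab for p, lab in zip(predictions, labels))
    return correct / len(words)
```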

## Citation

Please cite the original paper introducing WikiSpell if you use this dataset:

```bibtex
@inproceedings{liu-etal-2023-character,
    title = "Character-Aware Models Improve Visual Text Rendering",
    author = "Liu, Rosanne  and
      Garrette, Dan  and
      Saharia, Chitwan  and
      Chan, William  and
      Roberts, Adam  and
      Narang, Sharan  and
      Blok, Irina  and
      Mical, Rj  and
      Norouzi, Mohammad  and
      Constant, Noah",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.900",
    pages = "16270--16297",
}
```