RomanCast committed
Commit
663a894
1 Parent(s): 6494bc8

Update README.md

Files changed (1):
  1. README.md +16 -4
README.md CHANGED
@@ -43,6 +43,10 @@ dataset_info:
  language:
  - en
  viewer: true
+ task_categories:
+ - text-generation
+ size_categories:
+ - 1K<n<10K
  ---

  # WikiSpell

@@ -50,19 +54,27 @@ viewer: true
  ## Description
  This dataset is a **custom implementation** of the WikiSpell dataset introduced in [Character-Aware Models Improve Visual Text Rendering](https://arxiv.org/pdf/2212.10562.pdf) by Liu et al. (2022).

- Similarly to the original WikiSpell dataset, the training set is composed of 5000 words taken uniformly from the 50% least common Wiktionary words, and 5000 words sampled according to their frequencies taken from the 50% most common Wiktionary words.
- Contrary to the original Wiktionary, we compute the frequency of the words using the first 100k sentences from OpenWebText ([Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext)) instead of mC4.
+ Similarly to the original WikiSpell dataset, the training set is composed of 5000 words sampled uniformly from the 50% least common Wiktionary words (drawn from [this Wiktionary extraction](https://kaikki.org/dictionary/rawdata.html)), and 5000 words sampled according to their frequencies from the 50% most common Wiktionary words.
+
+ The validation and test sets are split into 5 subsets, sampled according to their frequency in the corpus:
+ - 1% most common words
+ - 1-10% most common words
+ - 10-20% most common words
+ - 20-30% most common words
+ - 50% least common words
+
+ Unlike the original WikiSpell dataset, we compute the frequency of the words using the first 100k sentences from OpenWebText ([Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext)) instead of mC4.

  ## Usage
- This dataset is used for testing spelling in Large Language Models. To do so, the labels should be computed using the following:
+ This dataset is used for testing spelling in Large Language Models. To do so, the labels should be computed as in the following snippet:

  ```python
  sample = ds["train"][0]
  label = " ".join(sample["text"])
  ```

- **They are not included in the dataset.**
+ **The labels are not included in the dataset files directly.**

  ## Citation
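
For illustration, the sampling scheme described in the updated Description could be sketched as follows. This is a hypothetical reconstruction, not the dataset's actual build code: it assumes a precomputed `word_counts` mapping (word to count over the first 100k OpenWebText sentences), and `build_train_split` is an invented name.

```python
# Hypothetical sketch of the described sampling scheme (NOT the actual build code).
# Assumes `word_counts` maps each Wiktionary word to its count in the
# first 100k OpenWebText sentences.
import random

def build_train_split(word_counts, n_per_half=5000):
    # Rank words from most to least common.
    ranked = sorted(word_counts, key=word_counts.get, reverse=True)
    half = len(ranked) // 2
    common, rare = ranked[:half], ranked[half:]

    # 5000 words drawn uniformly from the 50% least common words.
    rare_sample = random.sample(rare, n_per_half)

    # 5000 words drawn from the 50% most common words, weighted by
    # corpus frequency (random.choices samples with replacement;
    # deduplicate if distinct words are required).
    weights = [word_counts[w] for w in common]
    common_sample = random.choices(common, weights=weights, k=n_per_half)

    return rare_sample + common_sample
```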
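The Usage snippet in the diff assumes an already-loaded `ds`; a self-contained version might look like the following (the repository id is a guess; substitute the dataset's actual Hugging Face path):

```python
from datasets import load_dataset

# Hypothetical repo id: substitute the actual path of this dataset.
ds = load_dataset("RomanCast/wikispell")

# Each sample stores a word under "text"; the spelling label is the word
# with its characters separated by spaces, e.g. "hello" -> "h e l l o".
sample = ds["train"][0]
label = " ".join(sample["text"])
```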