---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10M<n<100M
---

This is the dataset presented in my [ASRU-2023 paper](https://arxiv.org/abs/2309.17267).

It consists of multiple files:

    Keys2Paragraphs.txt (internal name in scripts: yago_wiki.txt): 
        4.3 million unique words/phrases (English Wikipedia titles or their parts) occurring in 33.8 million English Wikipedia paragraphs.

    Keys2Corruptions.txt (internal name in scripts: sub_misspells.txt):
        26 million phrase pairs in the corrupted phrase inventory, as recognized by different ASR models (see the loading sketch after this list).

    Keys2Related.txt (internal name in scripts: related_phrases.txt):
        62.7 million phrase pairs in the related phrase inventory.

    FalsePositives.txt (internal name in scripts: false_positives.txt):
        449 thousand phrase pairs in the false positive phrase inventory.

    NgramMappings.txt (internal name in scripts: replacement_vocab_filt.txt):
        a dictionary of 5.5 million character n-gram mappings.

    asr
        outputs of the g2p + TTS + ASR pipeline using 4 different ASR systems (Conformer-CTC was used twice);
        each entry pairs an initial phrase with its recognition result.
        The .wav files are not included, but they can be reproduced by feeding the g2p outputs to TTS.

    giza
        raw outputs of GIZA++ alignments for each corpus;
        NgramMappings.txt and Keys2Corruptions.txt are derived from these.
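The inventory files are plain text. Below is a minimal loading sketch for the phrase-pair files; it assumes one tab-separated pair per line, which is an assumption about the column layout, so check the actual files before relying on it.

```python
# Minimal sketch: read a phrase-pair inventory such as Keys2Corruptions.txt.
# Assumption: each line holds at least two tab-separated fields
# (original phrase, corrupted/related phrase); extra columns, if any, are ignored.
def read_phrase_pairs(path):
    pairs = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                pairs.append((parts[0], parts[1]))
    return pairs

corruptions = read_phrase_pairs("Keys2Corruptions.txt")
print(len(corruptions), corruptions[:3])
```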

This [example code](https://github.com/bene-ges/nemo_compatible/blob/spellmapper_new_false_positive_sampling/scripts/nlp/en_spellmapper/dataset_preparation/build_training_data_from_wiki_en_asr_adapt.sh) shows how to generate training data from this dataset.
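
Individual files can also be fetched from the Hub without cloning the whole repository. The sketch below uses huggingface_hub; the repo_id is a placeholder, not this dataset's actual identifier.

```python
# Sketch: download one file of this dataset from the Hugging Face Hub.
# The repo_id is a placeholder; replace it with the actual <user>/<dataset> identifier.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<user>/<dataset>",   # placeholder
    filename="Keys2Corruptions.txt",
    repo_type="dataset",
)
print(local_path)
```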