---
license: cc-by-sa-4.0
language:
  - en
size_categories:
  - 10M<n<100M
---

This is the dataset presented in my ASRU-2023 paper.

It consists of multiple files:

Keys2Paragraphs.txt (internal name in scripts: yago_wiki.txt): 
    4.3 million unique words/phrases (English Wikipedia titles or their parts) occurring in 33.8 million English Wikipedia paragraphs.

Keys2Corruptions.txt (internal name in scripts: sub_misspells.txt):
    26 million phrase pairs in the corrupted phrase inventory, as recognized by different ASR models (see the reading sketch after this file list).

Keys2Related.txt (internal name in scripts: related_phrases.txt):
    62.7 million phrase pairs in the related phrase inventory.

FalsePositives.txt (internal name in scripts: false_positives.txt):
    449 thousand phrase pairs in the false positive phrase inventory.

NgramMappings.txt (internal name in scripts: replacement_vocab_filt.txt):
    a dictionary of 5.5 million character n-gram mappings.

asr
    Outputs of the G2P + TTS + ASR pipeline using 4 different ASR systems (Conformer-CTC was used twice);
    gives pairs of the initial phrase and its recognition result.
    Does not include .wav files, but these can be reproduced by feeding the G2P outputs to TTS.

giza
    Raw outputs of GIZA++ alignments for each corpus;
    NgramMappings.txt and Keys2Corruptions.txt are derived from these.
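
As a rough illustration, the snippet below streams Keys2Corruptions.txt and prints a few phrase pairs. It assumes the file is tab-separated text with at least two columns (key phrase, corrupted phrase) and that the dataset repo id is bene-ges/wiki-en-asr-adapt; check both against the actual files before relying on this.

```python
# Minimal sketch: stream one of the phrase-pair files and show a few rows.
# Assumptions (not stated in this README): the file is tab-separated with at
# least two columns (key phrase, corrupted phrase), and the dataset repo id
# is bene-ges/wiki-en-asr-adapt.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bene-ges/wiki-en-asr-adapt",  # assumed repo id
    filename="Keys2Corruptions.txt",
    repo_type="dataset",
)

with open(path, encoding="utf-8") as f:
    for i, line in enumerate(f):
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 2:
            continue  # skip lines that do not match the assumed layout
        key, corrupted = parts[0], parts[1]
        print(f"{key} -> {corrupted}")
        if i >= 4:  # only inspect the first few lines
            break
```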

The example below illustrates how training data can be generated from this dataset.
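
This is only a minimal, hypothetical sketch, not the pipeline from the paper: it assumes Keys2Paragraphs.txt maps a key phrase to a paragraph and Keys2Corruptions.txt maps a key phrase to a corrupted variant, both as tab-separated two-column files, and joins them into (corrupted phrase, correct phrase, context paragraph) triples. The actual column layout and training format should be taken from the paper's scripts.

```python
# Hypothetical illustration only (not the pipeline from the paper): join
# paragraphs with corrupted variants of the key phrases they contain.
# Both input files are assumed to be tab-separated two-column text.
from collections import defaultdict


def read_pairs(path):
    """Yield (first column, second column) from a tab-separated file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                yield parts[0], parts[1]


# key phrase -> corrupted variants (assumed layout of Keys2Corruptions.txt);
# for the full 26M-pair file you would likely process it in shards instead.
corruptions = defaultdict(list)
for key, corrupted in read_pairs("Keys2Corruptions.txt"):
    corruptions[key].append(corrupted)

# key phrase + paragraph -> training triples (assumed layout of Keys2Paragraphs.txt)
with open("train_examples.tsv", "w", encoding="utf-8") as out:
    for key, paragraph in read_pairs("Keys2Paragraphs.txt"):
        for corrupted in corruptions.get(key, []):
            out.write(f"{corrupted}\t{key}\t{paragraph}\n")
```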