---
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
size_categories:
- 100K<n<1M
tags:
- education
dataset_info:
  features:
  - name: audio_path
    dtype: string
  - name: asr_transcript
    dtype: string
  - name: original_text
    dtype: string
  - name: mutated_text
    dtype: string
  - name: index_tags
    dtype: string
  - name: mutated_tags
    dtype: string
  splits:
  - name: DEL
    num_bytes: 208676326
    num_examples: 351867
  - name: SUB
    num_bytes: 243003228
    num_examples: 351867
  - name: REP
    num_bytes: 303304320
    num_examples: 351867
  download_size: 0
  dataset_size: 754983874
---
# Dataset Card for Running Records Errors Dataset

## Dataset Description

- **Repository:** 
- **Paper:** 
- **Leaderboard:** 
- **Point of Contact:** 

### Dataset Summary

The Running Records Errors dataset is an English-language dataset containing 1,055,601 sentences based on the Europarl corpus. As described in our paper, 
we take sentences from the English version of the Europarl corpus and randomly inject three types of errors: *repetitions*, where 
certain words or phrases are repeated; *substitutions*, where certain words are replaced with different words; and *deletions*, where words are 
omitted entirely. The mutated sentences are then passed through a TTS pipeline consisting of Tacotron 2 and HiFi-GAN models to produce audio recordings. Lastly, 
the audio is passed into a QuartzNet 15x5 model, which produces a transcript of the spoken audio.
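The exact mutation procedure and its parameters are described in the paper; the sketch below is only an illustration of how such errors might be injected, with made-up mutation rates, substitution tokens, and tag names:

```python
import random

def inject_errors(words, error_type, rate=0.1):
    """Illustrative sketch only: randomly apply one error type to a sentence.
    Returns (mutated_words, tags), with tags aligned to the mutated words.
    The real generation procedure, rates, and tag vocabulary come from the
    paper; the names used here are assumptions."""
    mutated, tags = [], []
    for word in words:
        if random.random() < rate:
            if error_type == "DEL":
                continue                        # deletion: word omitted entirely
            if error_type == "REP":
                mutated += [word, word]         # repetition: word spoken twice
                tags += ["O", "REP"]
                continue
            if error_type == "SUB":
                mutated.append("<substitute>")  # substitution: a different word is spoken
                tags.append("SUB")
                continue
        mutated.append(word)                    # word left unchanged
        tags.append("O")
    return mutated, tags

print(inject_errors("the committee approved the report".split(), "REP"))
```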

### Supported Tasks and Leaderboards

The original purpose of this dataset was to construct a model pipeline that could score running records assessments given a transcript of a child's speech along with
the true text for that assessment. However, we provide this dataset to support other tasks involving error detection in text.

### Languages

All of the data in the dataset is in English.

## Dataset Structure

### Data Instances

For each instance, there is a string for the audio transcript, a string for the original text before any errors were added, and a string for the sentence containing the generated errors.
In addition, we provide two lists: one denotes the original position of each word in the mutated text, and the other denotes the error applied to that word.
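As a hypothetical illustration (all field values below are invented, and the exact file paths and tag vocabulary may differ), a single instance could look like this:

```python
example = {
    "audio_path":     "audio/rep/000001.wav",                # path to the synthesized recording
    "asr_transcript": "the the committee approved the report",
    "original_text":  "The committee approved the report.",
    "mutated_text":   "The the committee approved the report.",
    "index_tags":     "0 0 1 2 3 4",    # original position of each word in the mutated text
    "mutated_tags":   "O REP O O O O",  # error label for each word in the mutated text
}
```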

### Data Fields

- `asr_transcript`: The transcript of the audio produced by our QuartzNet 15x5 model.
- `original_text`: The original text from the Europarl corpus. This text contains no artificial errors.
- `mutated_text`: The text containing the errors we injected.
- `index_tags`: A list denoting the original position of each word in `mutated_text`.
- `mutated_tags`: A list denoting the error applied to each word in `mutated_text`.
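Since `index_tags` and `mutated_tags` are stored as strings, a natural way to use them is to split them and align each entry with the corresponding word of `mutated_text`. The helper below assumes whitespace-separated tags, which is an assumption about the serialization rather than a documented guarantee:

```python
def align_tags(mutated_text, index_tags, mutated_tags):
    """Pair each word of mutated_text with its original index and error tag.
    Assumes whitespace-separated fields; adjust if the dataset serializes lists differently."""
    words = mutated_text.split()
    indices = index_tags.split()
    tags = mutated_tags.split()
    return list(zip(words, indices, tags))

print(align_tags(
    "The the committee approved the report.",
    "0 0 1 2 3 4",
    "O REP O O O O",
))
```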

### Data Splits

- DEL: Sentences that have had random words removed.
- REP: Sentences that have had repetitions inserted.
- SUB: Sentences that have had words randomly substituted.
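Assuming the dataset is hosted on the Hugging Face Hub (the repository id below is a placeholder), each split can be loaded individually with the `datasets` library:

```python
from datasets import load_dataset

# "org/running-records-errors" is a placeholder; substitute the actual Hub repository id.
subs = load_dataset("org/running-records-errors", split="SUB")

sample = subs[0]
print(sample["original_text"])
print(sample["mutated_text"])
print(sample["mutated_tags"])
```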

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was generated with the guidance of Carl Ehrett.