---
pretty_name: Youtube Casual Audio

annotations_creators:
- crowdsourced

language_creators:
- datlq

languages:
- vi

licenses:
- cc0-1.0

multilinguality:
- monolingual

size_categories:
  vi:
  - 190K<n<200K

source_datasets:
- extended|youtube

task_categories:
- speech-processing

task_ids:
- automatic-speech-recognition
---

# Dataset Card for Youtube Casual Audio

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

[Needs More Information]

### Supported Tasks and Leaderboards

The dataset supports automatic speech recognition (ASR) in Vietnamese (see `task_ids` in the YAML header above). [Needs More Information]

### Languages

Vietnamese

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file (`file_path`), its transcription (`script`), and an `audio` dictionary holding the decoded waveform and sampling rate.

```
{
  'file_path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
  'script': 'Ik vind dat een dubieuze procedure.',
  'audio': {'path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
            'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553,  0.00085449], dtype=float32),
            'sampling_rate': 16000}
}
```
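
An instance like the one above can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hub; the repository id `datlq/youtube-casual-audio` is a placeholder, not the confirmed id:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual id of this dataset.
dataset = load_dataset("datlq/youtube-casual-audio", split="train")

sample = dataset[0]          # indexing a row decodes only this one audio file
print(sample["file_path"])   # e.g. 'audio/_1OsFqkFI38_34.304_39.424.wav'
print(sample["script"])      # the transcription of the segment
print(sample["audio"]["sampling_rate"])  # 16000
```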

### Data Fields

`file_path`: The path to the audio file.

`audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch after these fields).

`script`: The transcription of the audio segment.
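
A minimal sketch of the recommended access pattern, assuming `dataset` has been loaded as in the example above; the 8 kHz target rate passed to `Audio` is only illustrative:

```python
from datasets import Audio

# Preferred: index the row first, then take the "audio" column,
# so only this single file is decoded and resampled.
audio = dataset[0]["audio"]
print(audio["array"].shape, audio["sampling_rate"])

# Avoid: dataset["audio"][0] decodes every audio file in the split
# before returning the first one.

# Resample on the fly by casting the column; 8_000 Hz is only an example.
dataset = dataset.cast_column("audio", Audio(sampling_rate=8_000))
print(dataset[0]["audio"]["sampling_rate"])  # 8000
```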

### Data Splits

The speech material has been subdivided into train, validation, and test portions.

All three splits contain only data that has been reviewed and deemed of high quality.

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/), as declared in the `licenses` field of the YAML header.

### Citation Information

[Needs More Information]

### Contributions

Thanks to [@datlq](https://github.com/datlq98) for adding this dataset.