Bharat Ramanathan committed
Commit 5926197
1 Parent(s): 4c22fb1

add readme and loading script

Files changed (2)
  1. README.md +144 -0
  2. kannda_asr_corpus.py +147 -0
README.md ADDED
@@ -0,0 +1,144 @@
+ ---
+ annotations_creators:
+ - found
+ language:
+ - kn
+ language_creators:
+ - found
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: Kannada ASR Corpus
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - extended|openslr
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ ---
+
+ # Dataset Card for Kannada ASR Corpus
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The corpus contains roughly 360 hours of transcribed Kannada speech, collected from the FLEURS, OpenSLR 79, and UCLA corpora and filtered for utterance durations between 3 and 30 seconds. The transcripts have been de-duplicated using exact-match deduplication.
+
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`: the dataset is intended for training and evaluating Kannada ASR models.
+
+ ### Languages
+
+ The audio and transcripts are in Kannada (`kn`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `audio`: the audio sample, decoded at a 16 kHz sampling rate
+ - `path`: the path to the audio file
+ - `sentence`: the Kannada transcript of the audio
+ - `length`: the length of the utterance (float)
+
+ ### Data Splits
+
+ The dataset ships with a single `train` split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The corpus was assembled from the FLEURS, OpenSLR 79, and UCLA corpora and filtered to utterances between 3 and 30 seconds long.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset is released under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
+
+ ### Citation Information
+
+ BibTeX entries for the source corpora are included in the `_CITATION` block of the loading script.
+
+ ### Contributions
+
+ Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
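Once hosted on the Hugging Face Hub, the dataset described by the card above can be loaded with the `datasets` library. A minimal sketch follows; the repository id `parambharat/kannada_asr_corpus` is an assumption based on the contributor handle and the script name, so substitute the actual Hub path:

```python
from datasets import load_dataset

# Hypothetical repository id: replace with the dataset's actual Hub path.
ds = load_dataset("parambharat/kannada_asr_corpus", split="train", streaming=True)

# Inspect one example; the fields follow the Features defined in the loading script.
sample = next(iter(ds))
print(sample["sentence"])                # Kannada transcript
print(sample["length"])                  # utterance length
print(sample["audio"]["sampling_rate"])  # 16000, per datasets.Audio(sampling_rate=16_000)
```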
kannda_asr_corpus.py ADDED
@@ -0,0 +1,147 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Filtered Kannada ASR corpus collected from the FLEURS, OpenSLR 79, and UCLA corpora, filtered for utterance durations between 3 and 30 seconds."""
+
+ import json
+ import os
+
+ import datasets
+
+ _CITATION = """\
+ @misc{https://doi.org/10.48550/arxiv.2211.09536,
+   doi       = {10.48550/ARXIV.2211.09536},
+   url       = {https://arxiv.org/abs/2211.09536},
+   author    = {Kumar, Gokul Karthik and S, Praveen and Kumar, Pratyush and Khapra, Mitesh M. and Nandakumar, Karthik},
+   keywords  = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+   title     = {Towards Building Text-To-Speech Systems for the Next Billion Users},
+   publisher = {arXiv},
+   year      = {2022},
+   copyright = {arXiv.org perpetual, non-exclusive license}
+ }
+
+ @inproceedings{commonvoice:2020,
+   author    = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
+   title     = {Common Voice: A Massively-Multilingual Speech Corpus},
+   booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+   pages     = {4211--4215},
+   year      = 2020
+ }
+
+ @misc{https://doi.org/10.48550/arxiv.2205.12446,
+   doi       = {10.48550/ARXIV.2205.12446},
+   url       = {https://arxiv.org/abs/2205.12446},
+   author    = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
+   keywords  = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+   title     = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
+   publisher = {arXiv},
+   year      = {2022},
+   copyright = {Creative Commons Attribution 4.0 International}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The corpus contains roughly 360 hours of audio and transcripts in the Kannada language. The transcripts have been de-duplicated using exact-match deduplication.
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = "https://creativecommons.org/licenses/by/4.0/"
+
+ _METADATA_URLS = {
+     "train": "data/train.jsonl",
+ }
+ _URLS = {
+     "train": "data/train.tar.gz",
+ }
+
+
+ class KannadaASRCorpus(datasets.GeneratorBasedBuilder):
+     """A transcribed speech corpus for training ASR systems for the Kannada language."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio": datasets.Audio(sampling_rate=16_000),
+                 "path": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "length": datasets.Value("float"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             # Input/target columns; the original ("sentence", "label") referred
+             # to a "label" feature that does not exist in `features`.
+             supervised_keys=("audio", "sentence"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         metadata_paths = dl_manager.download(_METADATA_URLS)
+         train_archive = dl_manager.download(_URLS["train"])
+         # In streaming mode the archive is read on the fly via iter_archive();
+         # otherwise extract it so examples can point to real local paths.
+         local_extracted_train_archive = dl_manager.extract(train_archive) if not dl_manager.is_streaming else None
+         train_dir = "train"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "metadata_path": metadata_paths["train"],
+                     "local_extracted_archive": local_extracted_train_archive,
+                     "path_to_clips": train_dir,
+                     "audio_files": dl_manager.iter_archive(train_archive),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, metadata_path, local_extracted_archive, path_to_clips, audio_files):
+         """Yields examples as (key, example) tuples."""
+         # Index the JSONL metadata by each clip's relative path inside the archive.
+         examples = {}
+         with open(metadata_path, encoding="utf-8") as f:
+             for row in f:
+                 data = json.loads(row)
+                 examples[data["path"]] = data
+         inside_clips_dir = False
+         id_ = 0
+         # Walk the tar archive once; members are assumed to be grouped under
+         # path_to_clips, so iteration can stop at the first member past that block.
+         for path, f in audio_files:
+             if path.startswith(path_to_clips):
+                 inside_clips_dir = True
+                 if path in examples:
+                     result = examples[path]
+                     path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
+                     result["audio"] = {"path": path, "bytes": f.read()}
+                     result["path"] = path
+                     yield id_, result
+                     id_ += 1
+             elif inside_clips_dir:
+                 break
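To sanity-check the loading script locally, one can point `load_dataset` at the script file itself. This is a sketch assuming `data/train.jsonl` and `data/train.tar.gz` sit next to the script, as declared in `_METADATA_URLS` and `_URLS`; recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets:

```python
from datasets import load_dataset

# Load via the local script; streaming exercises the iter_archive() code path.
ds = load_dataset("./kannda_asr_corpus.py", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["path"], example["length"], example["sentence"][:40])
    if i == 2:  # peek at the first few examples only
        break
```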