janaab committed on
Commit
9105f9c
1 Parent(s): 1de9ac0

Add datasheet to README

Files changed (1)
  1. README.md +117 -1
README.md CHANGED
@@ -13,4 +13,120 @@ size_categories:
  - 1K<n<10K
  ---

- An Indian Accented Speech to Intent dataset
+ A Speech to Intent dataset for Indian English (`en-IN`)
+
+ <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
+ <p>This datasheet is inspired by the <a href="https://arxiv.org/pdf/1803.09010.pdf" target="_blank">Datasheets for Datasets</a> paper.</p>
+ </div>
+
+ ## Motivation
+
+ Q1) For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled?
+
+ Ans. This is a dataset for Intent classification from (Indian English) speech, and covers 14 coarse-grained intents from the Banking domain. While there are other datasets that have approached this task, here we provide a much larger training dataset (`>650` samples per intent) to train models in an end-to-end fashion. We also provide anonymised speaker information to help answer questions around model robustness and bias.
+
+ Q2) Who created the dataset and on behalf of which entity?
+
+ Ans. The (internal) Operations team at Skit was involved in the generation of the dataset, and provided their information for (anonymous) release. [Unnati Senani](https://unnu.so/about/) was involved in the curation of utterance templates, and [Kriti Anandan](https://github.com/kritianandan98) and [Kumarmanas Nethil](https://huggingface.co/janaab) were involved in the planning and collection of utterances - using an internal tool called [sandbox](https://github.com/skit-ai/sandbox). These contributors worked on this dataset as part of the Conversational UX and ML teams at Skit.
+
+ Q3) Who funded the creation of the dataset?
+
+ Ans. Skit funded the creation of this dataset.
+
+ ## Composition
+
+ Q4) What do the instances that comprise the dataset consist of?
+
+ Ans. The intent dataset is split across `train.csv` and `test.csv`. In both, individual instances consist of the following fields:
+ - `id`
+ - `intent_class`
+ - `template`
+ - `audio_path`
+ - `speaker_id`
+
+ You can look up more information on the intents, using the shared `intent_class` field, in `intent_info.csv`:
+ - `intent_class`
+ - `intent_name`
+ - `description`
+
+ You can look up more information on the speakers, using the shared `speaker_id` field, in `speaker_info.csv` (a join sketch follows these lists):
+ - `speaker_id`
+ - `native_language`
+ - `languages_spoken`
+ - `places_lived`
+ - `gender`
+
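+ As an illustration, the metadata tables can be joined onto a split using these shared keys. Below is a minimal `pandas` sketch - not part of the dataset tooling - assuming the CSVs sit in the working directory, with file and column names exactly as listed above:
+
+ ```python
+ import pandas as pd
+
+ # Load a split and the two metadata tables
+ train = pd.read_csv("train.csv")
+ intent_info = pd.read_csv("intent_info.csv")
+ speaker_info = pd.read_csv("speaker_info.csv")
+
+ # Attach intent names and descriptions via the shared `intent_class` key,
+ # then speaker characteristics via the shared `speaker_id` key
+ train = train.merge(intent_info, on="intent_class", how="left")
+ train = train.merge(speaker_info, on="speaker_id", how="left")
+
+ print(train[["id", "intent_name", "speaker_id", "gender"]].head())
+ ```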
+
+ Q5) How many instances are there in total (of each type, if appropriate)?
+
+ Ans. In all, there are `11845` samples across the train and test splits:
+
+ - `test.csv` has a total of `1400` samples, with exactly `100` samples per intent
+ - `train.csv` has a total of `10445` samples, with at least `650` samples per intent
+
+ The 11 speakers are distributed across the dataset, but unequally. However:
+ - each intent has data from all speakers
+ - the speakers are stratified across the train and test splits - for each intent independently (see the verification sketch below)
+
+ Some statistics on the speakers are provided below. More granular information can be found in `speaker_info.csv`:
+ - Native languages: `Hindi` (4), `Bengali` (3), `Kannada` (2), `Malayalam` (1), `Punjabi` (1)
+ - Languages spoken: `Hindi`, `English`, `Bengali`, `Odia`, `Kannada`, `Punjabi`, `Malayalam`, `Bihari`, `Marathi`
+ - Indian states lived in: `Bihar`, `Odisha`, `Karnataka`, `West Bengal`, `Punjab`, `Kerala`, `Jharkhand`, `Maharashtra`
+
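+ These counts, and the speaker stratification, can be checked directly from the CSVs. A minimal verification sketch, under the same assumptions as the join example above:
+
+ ```python
+ import pandas as pd
+
+ train = pd.read_csv("train.csv")
+ test = pd.read_csv("test.csv")
+
+ # Per-intent sample counts for the 14 intents
+ print(test.groupby("intent_class").size())   # expect exactly 100 each
+ print(train.groupby("intent_class").size())  # expect at least 650 each
+
+ # Speaker coverage: with speakers stratified per intent, every intent
+ # should draw on all 11 speakers in both splits
+ for name, split in [("train", train), ("test", test)]:
+     speakers_per_intent = split.groupby("intent_class")["speaker_id"].nunique()
+     print(name, speakers_per_intent.min())  # expect 11
+ ```
+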
+ Q6) Does the dataset contain all possible instances, or is it a sample (not necessarily random) of instances from a larger set?
+
+ Ans. For each intent, our Conversational UX team generated a list of templates, meant to be a (satisfactory) representation of all the variations in utterances seen in human speech. These templates were used as a guide by the speakers when generating data. So, this dataset is limited by the templates and the variations that speakers added (spontaneously).
+
+ Q7) Are there recommended data splits (e.g., training, development/validation, testing)?
+
+ Ans. The recommended split into train and test sets is provided as `train.csv` and `test.csv` respectively.
+
+ Q8) Are there any errors, sources of noise, or redundancies in the dataset?
+
+ Ans. There could be channel noise present in the dataset, because the data was generated through telephone calls. However, background noise will not be as prevalent as in real-world scenarios, since these telephone calls were made in a semi-controlled environment.
+
+ Q9) Other comments.
+
+ Ans. Speakers were responsible for generating variations in utterances, using the `template` field as a guide. So, there could be some unintentional overlap across the content of utterances.
+
+ ## Collection Process
+
+ Q10) How was the data associated with each instance acquired?
+
+ Ans. Members of the (internal) Operations team generated each utterance - using the associated `template` field as a guide, and injecting their own variations into the speech utterance.
+
+ Q11) Who was involved in the data collection process and how were they compensated?
+
+ Ans. The data was generated by the (internal) Operations team, who are/were full-time employees.
+
+ Q12) Over what timeframe was the data collected?
+
+ Ans. This data was collected over a period of 1 month.
+
+ Q13) Was any preprocessing/cleaning/labelling of the data done?
+
+ Ans. Audio instances in the dataset were *auto-labelled* with their associated `intent_class` and `template` fields. For more information on this, refer to the documentation of [sandbox](https://github.com/skit-ai/sandbox).
+
+ ## Recommended Uses
+
+ Q14) Has the dataset been used for any tasks already?
+
+ Ans. It has been used to benchmark models for the task of intent classification from speech.
+
+ Q15) What (other) tasks could the dataset be used for?
+
+ Ans. We provide speaker characteristics. So, this dataset could be used for alternate classification tasks from speech - like gender or native language (see the sketch below).
+
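+ As an illustration, re-labelling a split for gender classification is a single join against `speaker_info.csv`. A minimal `pandas` sketch, under the same assumptions as the earlier examples:
+
+ ```python
+ import pandas as pd
+
+ train = pd.read_csv("train.csv")
+ speaker_info = pd.read_csv("speaker_info.csv")
+
+ # Swap the intent label for a speaker characteristic (here: gender)
+ # to derive an alternate speech classification task
+ gender_train = train.merge(speaker_info, on="speaker_id", how="left")
+ gender_train = gender_train[["audio_path", "gender"]]
+ print(gender_train["gender"].value_counts())
+ ```
+
+ Note that the provided split keeps the same 11 speakers in both train and test, so for speaker-level targets like these, a fresh speaker-disjoint split would be needed to avoid leakage.
+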
120
+ ## Distribution and Maintenance
121
+
122
+ Q16) Will the dataset be distributed under a copyright or other intellectual property (IP) license ?
123
+
124
+ Ans. This dataset is being distributed under a [CC BY NC license](https://creativecommons.org/licenses/by-nc/4.0/).
125
+
126
+ Q17) Who will be maintaining the dataset ?
127
+
128
+ Ans. The research team at Skit will be maintaining the dataset. They can be contacted by sending an email to ml-research@skit.ai.
129
+
130
+ Q18) Will the dataset be updated in the future (e.g., to correct labelling errors, add new instances, delete instances) ?
131
+
132
+ Ans. Incase there are errors, we will try to collate and share an updated version every 3 months. We also plan to add more instances and variations to the dataset - to make it more robust.