janaab committed
Commit 37d737c
1 Parent(s): 9105f9c

formatting for readability

Files changed (1): README.md (+23 −23)
README.md CHANGED
@@ -19,23 +19,23 @@ A Speech to Intent dataset for Indian English (`en-IN`)

<p>This Datasheet is inspired by the <a href="https://arxiv.org/pdf/1803.09010.pdf" target="_blank">Datasheets for datasets</a> paper.</p>
</div>

# Motivation

**Q1) For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled?**

Ans. This is a dataset for intent classification from (Indian English) speech, covering 14 coarse-grained intents from the banking domain. While other datasets have approached this task, here we provide a much larger training set (`>650` samples per intent) for training models in an end-to-end fashion. We also provide anonymised speaker information to help answer questions around model robustness and bias.

**Q2) Who created the dataset and on behalf of which entity?**

Ans. The (internal) Operations team at Skit generated the dataset and provided their information for (anonymous) release. [Unnati Senani](https://unnu.so/about/) curated the utterance templates, and [Kriti Anandan](https://github.com/kritianandan98) and [Kumarmanas Nethil](https://huggingface.co/janaab) planned and collected the utterances, using an internal tool called [sandbox](https://github.com/skit-ai/sandbox). These contributors worked on this dataset as part of the Conversational UX and ML teams at Skit.

**Q3) Who funded the creation of the dataset?**

Ans. Skit funded the creation of this dataset.

# Composition

**Q4) What do the instances that comprise the dataset consist of?**

Ans. The intent dataset is split across `train.csv` and `test.csv`. In both, individual instances consist of the following fields (a loading sketch follows the field lists below):
- `id`

@@ -57,7 +57,7 @@ You can trace more information on the speakers, using the shared `speaker_id` fi
- `gender`

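As a quick illustration of how these files might be loaded and checked, here is a minimal pandas sketch. It assumes the two CSVs sit in the working directory; the `intent` and `speaker_id` column names are taken from this datasheet, but the full schema is only partially visible in this diff.

```python
import pandas as pd

# Load the recommended splits shipped with the dataset.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Inspect the instance fields listed above.
print(train.columns.tolist())

# Per-intent sample counts; the datasheet reports >650 training samples per intent.
print(train["intent"].value_counts())

# Total instances across both splits.
print(len(train) + len(test))
```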

**Q5) How many instances are there in total (of each type, if appropriate)?**

Ans. In all, there are `11845` samples across the train and test splits:

@@ -73,60 +73,60 @@ Some statistics on the speakers are provided below. More granular information ca
- Languages spoken: `Hindi`, `English`, `Bengali`, `Odia`, `Kannada`, `Punjabi`, `Malayalam`, `Bihari`, `Marathi`
- Indian states lived in: `Bihar`, `Odisha`, `Karnataka`, `West Bengal`, `Punjab`, `Kerala`, `Jharkhand`, `Maharashtra`

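Aggregate speaker statistics like these could be reproduced along the following lines. Note the assumptions: the datasheet only says that speaker information is linked through the shared `speaker_id` field, so the `speakers.csv` file name and the `native_language` column below are hypothetical.

```python
import pandas as pd

# Hypothetical file of anonymised speaker attributes keyed on speaker_id;
# the actual file name and layout are not specified in this datasheet.
speakers = pd.read_csv("speakers.csv")
train = pd.read_csv("train.csv")

# Attach speaker attributes to each utterance via the shared key.
joined = train.merge(speakers, on="speaker_id", how="left")

# Aggregate statistics, e.g. utterances per gender and per native language.
print(joined["gender"].value_counts())
print(joined["native_language"].value_counts())  # hypothetical column name
```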
**Q6) Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?**

Ans. For each intent, our Conversational UX team generated a list of templates, meant to be a satisfactory representation of the variations in utterances seen in human speech. These templates were used as a guide by the speakers when generating data, so the dataset is limited to the templates and the variations that speakers added spontaneously.

**Q7) Are there recommended data splits (e.g., training, development/validation, testing)?**

Ans. The recommended split into train and test sets is provided as `train.csv` and `test.csv` respectively.
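If you prefer the Hugging Face `datasets` library, the recommended splits can also be loaded directly from the CSVs. This is a sketch rather than an official loading script, and it assumes the CSVs are available locally.

```python
from datasets import load_dataset

# Build a DatasetDict with the recommended train/test splits.
ds = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv"},
)

print(ds)              # split names and sizes
print(ds["train"][0])  # first training instance
```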

**Q8) Are there any errors, sources of noise, or redundancies in the dataset?**

Ans. There could be channel noise in the dataset, because the data was generated through telephone calls. However, background noise will not be as prevalent as in real-world scenarios, since the calls were made in a semi-controlled environment.
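When working with the audio, it is worth confirming the channel characteristics before modelling. The sketch below uses `soundfile` to inspect one recording; the `audio_path` value is a placeholder, since this datasheet does not say how the audio files are referenced, and 8 kHz is the usual telephony rate rather than a documented property of this dataset.

```python
import soundfile as sf

# Placeholder path: the datasheet does not specify how audio files are named.
audio_path = "path/to/utterance.wav"

waveform, sample_rate = sf.read(audio_path)
print(sample_rate)    # telephone speech is typically 8000 Hz; verify per file
print(waveform.shape)
```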

**Q9) Other comments.**

Ans. Speakers were responsible for generating variations in utterances, using the `template` field as a guide, so there could be some unintentional overlap across the content of utterances.
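One way to gauge this overlap is to count how many utterances share a template, since `template` is a documented field; measuring overlap in the spoken content itself would require a transcription field, which the portion of the schema visible here does not confirm.

```python
import pandas as pd

train = pd.read_csv("train.csv")

# Utterances per template: large groups are where content overlap is most likely.
per_template = train.groupby("template").size().sort_values(ascending=False)
print(per_template.head(10))
```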

# Collection Process

**Q10) How was the data associated with each instance acquired?**

Ans. Members of the (internal) Operations team generated each utterance, using the associated `template` field as a guide and injecting their own variations into the speech utterance.

**Q11) Who was involved in the data collection process and how were they compensated?**

Ans. The data was generated by the (internal) Operations team; they are/were full-time employees.

**Q12) Over what timeframe was the data collected?**

Ans. This data was collected over a period of one month.

**Q13) Was any preprocessing/cleaning/labelling of the data done?**

Ans. Audio instances in the dataset were *auto-labelled* with their associated `intent` and `template` fields. For more information, refer to the documentation of [sandbox](https://github.com/skit-ai/sandbox).

# Recommended Uses

**Q14) Has the dataset been used for any tasks already?**

Ans. It has been used to benchmark models for the task of intent classification from speech.

**Q15) What (other) tasks could the dataset be used for?**

Ans. We provide speaker characteristics, so this dataset could also be used for other classification tasks from speech, such as gender or native-language identification.
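For example, a gender-classification variant of the task could be built by relabelling each utterance with its speaker's `gender`. As in the earlier sketch, this assumes speaker attributes can be joined via `speaker_id` from a hypothetical `speakers.csv`.

```python
import pandas as pd

train = pd.read_csv("train.csv")
speakers = pd.read_csv("speakers.csv")  # hypothetical speaker-attribute file

# Swap the intent label for a speaker-level attribute.
gender_task = train.merge(speakers, on="speaker_id", how="inner")
print(gender_task["gender"].value_counts())  # class balance for the new task
```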

# Distribution and Maintenance

**Q16) Will the dataset be distributed under a copyright or other intellectual property (IP) license?**

Ans. This dataset is distributed under a [CC BY-NC license](https://creativecommons.org/licenses/by-nc/4.0/).

**Q17) Who will be maintaining the dataset?**

Ans. The research team at Skit will maintain the dataset. They can be contacted by sending an email to ml-research@skit.ai.

**Q18) Will the dataset be updated in the future (e.g., to correct labelling errors, add new instances, delete instances)?**

Ans. In case errors are found, we will try to collate and share an updated version every 3 months. We also plan to add more instances and variations to the dataset, to make it more robust.