---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
task_categories:
- audio-classification
- automatic-speech-recognition
pretty_name: Skit-S2I
tags:
- intent-recognition
- speech
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: intent_class
    dtype: int64
  - name: template
    dtype: string
  - name: speaker_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 698801842.48
    num_examples: 10445
  - name: test
    num_bytes: 93949690.4
    num_examples: 1400
  download_size: 495247674
  dataset_size: 792751532.88
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

Skit-S2I is a **Speech to Intent** dataset for Indian English (`en-IN`) that covers 14 coarse-grained intents from the Banking domain. This work is inspired by a similar release, the [Minds-14 dataset](https://huggingface.co/datasets/PolyAI/minds14); here, we restrict ourselves to Indian English but provide a larger training set. The dataset is split into:
- test - `100` samples per intent
- train - `>650` samples per intent

The data was generated by 11 Indian speakers, recorded over a telephony line. We also provide anonymised speaker information - such as gender, languages spoken, and native language - to enable more structured discussions around robustness and bias in the models you train.
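
As a quick-start, here is a minimal loading sketch using the `datasets` library. The repository ID `skit-ai/skit-s2i` is an assumption and may differ from the actual Hub path; the column names follow the features listed in this card.

```python
# Minimal loading sketch; the repository ID below is an assumption.
from datasets import load_dataset

ds = load_dataset("skit-ai/skit-s2i")  # yields "train" and "test" splits

sample = ds["train"][0]
print(sample["intent_class"])            # integer intent label (one of 14 intents)
print(sample["template"])                # template the speaker followed
print(sample["speaker_id"])              # anonymised speaker identifier
print(sample["audio"]["sampling_rate"])  # decoded telephony audio
```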


<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
  <p>This Datasheet follows from the <a href="https://arxiv.org/pdf/1803.09010.pdf" target="_blank">Datasheets for datasets</a> paper.</p>
</div>

# Motivation

**Q1) For what purpose was the dataset created ? Was there a specific task in mind ? Was there a specific gap that needed to be filled ?**

Ans. This is a dataset for intent classification from (Indian English) speech, and covers 14 coarse-grained intents from the Banking domain. While there are other datasets that have approached this task, here we provide a much larger training set (`>650` samples per intent) to train models in an end-to-end fashion. We also provide anonymised speaker information to help answer questions around model robustness and bias.

**Q2) Who created the dataset and on behalf of which entity ?**

Ans. The (internal) Operations team at Skit was involved in the generation of the dataset, and provided their information for (anonymous) release. [Unnati Senani](https://unnu.so/about/) was involved in the curation of utterance templates, and [Kriti Anandan](https://github.com/kritianandan98) and [Kumarmanas Nethil](https://huggingface.co/janaab) were involved in the planning and collection of utterances - using an internal tool called [sandbox](https://github.com/skit-ai/sandbox). These contributors worked on this dataset as part of the Conversational UX and ML teams at Skit.

**Q3) Who funded the creation of the dataset ?**

Ans. Skit funded the creation of this dataset.

# Composition

**Q4) What do the instances that comprise the dataset consist of ?**

Ans. The intent dataset is split across `train.csv` and `test.csv`. In both, individual instances consist of the following fields:
- `id`
- `intent_class`
- `template`
- `audio_path`
- `speaker_id`

You can look up more information on the intents using the shared `intent_class` field in `intent_info.csv`:
- `intent_class`
- `intent_name`
- `description`

You can look up more information on the speakers using the shared `speaker_id` field in `speaker_info.csv` (a join sketch follows these lists):
- `speaker_id`
- `native_language`
- `languages_spoken`
- `places_lived`
- `gender`

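Below is a sketch of how the CSVs can be joined to enrich each utterance with intent and speaker details. It assumes `intent_info.csv` and `speaker_info.csv` sit alongside `train.csv`/`test.csv` and carry the column names listed above.

```python
# Join sketch; file locations and column names are taken from the
# field lists above and may need adjusting to your local layout.
import pandas as pd

train = pd.read_csv("train.csv")
intent_info = pd.read_csv("intent_info.csv")    # intent_class, intent_name, description
speaker_info = pd.read_csv("speaker_info.csv")  # speaker_id, native_language, ...

# Attach human-readable intent names and speaker attributes to each utterance.
train = train.merge(intent_info, on="intent_class", how="left")
train = train.merge(speaker_info, on="speaker_id", how="left")

print(train[["id", "intent_name", "speaker_id", "gender"]].head())
```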

**Q5) How many instances are there in total (of each type, if appropriate) ?**

Ans. In all, there are `11845` samples across the train and test splits:

- `test.csv` has a total of `1400` samples, with exactly `100` samples per intent
- `train.csv` has a total of `10445` samples, with at least `650` samples per intent

The 11 speakers are distributed unequally across the dataset. However:
- each intent has data from all speakers 
- the speakers are stratified across the train and test split - for each intent independently

Some statistics on the speakers are provided below. More granular information can be found in `speaker_info.csv`:
- Native languages: `Hindi`(4), `Bengali`(3), `Kannada`(2), `Malayalam`(1), `Punjabi`(1)
- Languages spoken: `Hindi`, `English`, `Bengali`, `Odia`, `Kannada`, `Punjabi`, `Malayalam`, `Bihari`, `Marathi`
- Indian states lived in: `Bihar`, `Odisha`, `Karnataka`, `West Bengal`, `Punjab`, `Kerala`, `Jharkhand`, `Maharashtra`

**Q6) Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set ?**

Ans. For each intent, our Conversational UX team generated a list of templates. These are meant to be a (satisfactory) representation of the variations in utterances seen in human speech. These templates were used as a guide by the speakers when generating data. So, this dataset is limited by the templates and the variations that speakers added (spontaneously).

**Q7) Are there recommended data splits (e.g., training, development/validation, testing) ?**

Ans. The recommended split into train and test sets is provided as `train.csv` and `test.csv` respectively.

**Q8) Are there any errors, sources of noise, or redundancies in the dataset?**

Ans. There could be channel noise present in the dataset, because the data was generated through telephone calls. However, background noise will not be as prevalent as in real-world scenarios, since these telephone calls were made in a semi-controlled environment.

**Q9) Other comments.**

Ans. Speakers were responsible for generating variations in utterances, using the `template` field as a guide. So, there could be some unintentional overlap across the content of utterances.

# Collection Process

**Q10) How was the data associated with each instance acquired ?**

Ans. Members of the (internal) Operations team generated each utterance - using the associated `template` field as a guide, and injecting their own variations into the speech utterance.

**Q11) Who was involved in the data collection process and how were they compensated ?**

Ans. The data was generated by members of the (internal) Operations team, who are/were full-time employees.

**Q12) Over what timeframe was the data collected ?**

Ans. This data was collected over a time period of 1 month.

**Q13) Was any preprocessing/cleaning/labelling of the data done ?**

Ans. Audio instances in the dataset were *auto-labelled* with their associated `intent` and `template` fields. For more information on this, refer to the documentation of [sandbox](https://github.com/skit-ai/sandbox).

# Recommended Uses

**Q14) Has the dataset been used for any tasks already ?**

Ans. It has been used to benchmark models for the task of intent classification from speech.

**Q15) What (other) tasks could the dataset be used for ?**

Ans. Since we provide speaker characteristics, this dataset could also be used for alternate classification tasks from speech - such as gender or native language classification.
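
As one illustration, here is a hedged sketch of repurposing the training split for native-language classification, assuming the `speaker_info.csv` columns described in Q4.

```python
# Sketch for an alternate task (native-language classification);
# column names follow the field lists in Q4 and are assumptions.
import pandas as pd

train = pd.read_csv("train.csv").merge(
    pd.read_csv("speaker_info.csv"), on="speaker_id", how="left"
)

# Build an integer label per native language in place of intent_class.
languages = sorted(train["native_language"].unique())
label_map = {lang: idx for idx, lang in enumerate(languages)}
train["language_class"] = train["native_language"].map(label_map)

print(train[["audio_path", "native_language", "language_class"]].head())
```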

# Distribution and Maintenance

**Q16) Will the dataset be distributed under a copyright or other intellectual property (IP) license ?**

Ans. This dataset is being distributed under a [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/).

**Q17) Who will be maintaining the dataset ?**

Ans. The research team at Skit will be maintaining the dataset. They can be contacted by sending an email to ml-research@skit.ai.

**Q18) Will the dataset be updated in the future (e.g., to correct labelling errors, add new instances, delete instances) ?**

Ans. In case there are errors, we will try to collate and share an updated version every 3 months. We also plan to add more instances and variations to the dataset, to make it more robust.