---
task_categories:
- audio-classification
language:
- ru
size_categories:
- 100K<n<1M
pretty_name: Russian speech emotions
---
This dataset was taken from the creators' [GitHub repository](https://github.com/salute-developers/golos/tree/master/dusha) and converted for my own study purposes.

# Dusha dataset

Dusha is a bi-modal corpus suitable for speech emotion recognition (SER) tasks. The dataset consists of about 300,000 audio recordings of Russian speech, their transcripts, and emotion labels. The corpus contains approximately 350 hours of data. Four basic emotions that usually appear in a dialog with a virtual assistant were selected: Happiness (Positive), Sadness, Anger, and Neutral.
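As a minimal usage sketch (assuming the converted data is hosted on the Hugging Face Hub; the repository id and the column names `audio`, `text`, and `emotion` below are placeholders that may differ from this dataset's actual schema), the recordings, transcripts, and labels can be inspected with the `datasets` library:

```python
from datasets import Audio, load_dataset

# Placeholder repository id — replace with this dataset's actual Hub path.
ds = load_dataset("username/dusha-emotions", split="train")

# Decode audio lazily at 16 kHz; the column name "audio" is an assumption.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]
print(sample["emotion"])   # one of the four labels described above, e.g. "neutral"
print(sample["text"])      # transcript of the utterance
print(sample["audio"]["array"].shape,        # raw waveform as a NumPy array
      sample["audio"]["sampling_rate"])      # sampling rate after casting
```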
## **License**

[English Version](https://github.com/salute-developers/golos/blob/master/license/en_us.pdf)

[Russian Version](https://github.com/salute-developers/golos/blob/master/license/ru.pdf)
## **Authors**

- Artem Sokolov
- Fedor Minkin
- Nikita Savushkin
- Nikolay Karpov
- Oleg Kutuzov
- Vladimir Kondratenko