Dataset viewer preview. Schema: id (string, 3–32 chars), wav (string, file path, 26–105 chars), emotion (string, 8 classes), intensity (float64, 0–6), length (float64, 1–10 s). Sample instances:

| id | wav | emotion | intensity | length |
| --- | --- | --- | --- | --- |
| 0015_000355 | ./data/ESD/Emotional Speech Dataset (ESD)/0015/Angry/evaluation/0015_000355.wav | Anger | 0 | 1.895 |
| Train_dia551_utt1 | ./data/MELD/MELD.Raw/Train/train_splits/dia551_utt1.mp4 | Happiness | 0 | 1.557333 |
| 7031d44e353b7b584ee9874f5472992e | /notebooks/data/DUSHA/crowd_test/wavs/7031d44e353b7b584ee9874f5472992e.wav | Happiness | 0 | 3.3665 |
| 1067_ITH_SAD_XX | /notebooks/data/CREMAD/AudioWAV/1067_ITH_SAD_XX.wav | Sadness | 0 | 2.635938 |
| 03-02-04-01-02-02-19 | ./data/RAVDESS_S/Actor_19/03-02-04-01-02-02-19.wav | Sadness | 1 | 4.738063 |
| Ses05F_impro07_M031 | ./data/IEMOCAP/IEMOCAP_full_release/Session5/sentences/wav/Ses05F_impro07/Ses05F_impro07_M031.wav | Excited | 3 | 1.509938 |
| YAF_south_ps | ./data/TESS/YAF_south_ps.wav | Surprise | 0 | 1.959204 |
| sa06 | data/SAVEE/AudioData/KL/sa06.wav | Sadness | 0 | 3.499297 |

Speech Emotion Intensity Recognition Database (SEIR-DB)

Dataset Summary

The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognition dataset containing over 600,000 instances from various sources. It is designed to support tasks related to speech emotion recognition and emotion intensity estimation. The database includes languages such as English, Russian, Mandarin, Greek, Italian, and French.

Supported Tasks and Leaderboards

The SEIR-DB is suitable for:

  • Speech Emotion Recognition (classification of discrete emotional states)
  • Speech Emotion Intensity Estimation (on the subset of instances with intensity annotations, rated 1–5; see the sketch below)
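
A minimal sketch of separating the two task subsets, assuming the unsplit data.csv manifest described under Data Splits and the columns documented under Data Fields:

```python
# Minimal sketch: derive the two supported tasks from the unsplit manifest.
# Assumes data.csv (see "Data Splits") with the columns from "Data Fields".
import pandas as pd

df = pd.read_csv("data.csv")

ser_df = df                              # emotion recognition uses every instance
intensity_df = df[df["intensity"] > 0]   # intensity 0 means "no annotation"

print(f"SER instances: {len(ser_df)}, intensity-annotated: {len(intensity_df)}")
```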

SPEAR (8 emotions – 375 hours)

SPEAR (Speech Emotion Analysis and Recognition System) is an ensemble model and serves as the SER benchmark for this dataset. Below is a comparison of its performance against the best fine-tuned pre-trained model (WavLM Large):

| WavLM Large Test Accuracy | SPEAR Test Accuracy | Improvement |
| --- | --- | --- |
| 87.8% | 90.8% | +3.0% |

More detailed metrics for SPEAR:

| Train Accuracy | Validation Accuracy | Test Accuracy |
| --- | --- | --- |
| 99.8% | 90.4% | 90.8% |

Languages

SEIR-DB encompasses multilingual data, featuring languages such as English, Russian, Mandarin, Greek, Italian, and French.

Dataset Structure

Data Instances

The raw data collection comprises over 600,000 data instances (375 hours). The raw audio is stored in subdirectories of the data directory, organized by source dataset.

After processing, cleaning, and formatting, the dataset contains approximately 120,000 training instances with an average audio utterance length of 3.8 seconds.
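
As a rough check, these summary statistics can be recomputed from the manifest. This is a sketch assuming the unsplit data.csv manifest (see Data Splits) and that the length field is in seconds:

```python
# Sketch: recompute corpus statistics from the unsplit manifest.
# Assumptions: data.csv exists as described under "Data Splits";
# the length column is in seconds.
import pandas as pd

df = pd.read_csv("data.csv")
print(f"{len(df)} instances")
print(f"{df['length'].sum() / 3600:.1f} hours of audio")
print(f"mean utterance length: {df['length'].mean():.2f} s")
```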

Data Fields

  • ID: unique sample identifier
  • WAV: path to the audio file, located in the data directory
  • EMOTION: annotated emotion
  • INTENSITY: annotated intensity on a 1–5 scale, where 1 denotes low intensity and 5 high intensity; 0 indicates no intensity annotation
  • LENGTH: duration of the audio utterance in seconds
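
A sketch of consuming a single record. The manifest file name and a JSON-lines layout are assumptions (this card only states that the splits live in JSON manifest files), and note that some entries (e.g. MELD) point to .mp4 containers rather than WAV files:

```python
# Sketch: read one manifest record and load its audio.
# Assumptions: the manifest is JSON lines (one record per line) and is
# named train.json; neither detail is specified by this card.
import json
import soundfile as sf

with open("train.json") as f:
    record = json.loads(f.readline())

print(record["id"], record["emotion"], record["intensity"], record["length"])

# WAV paths load directly; .mp4 entries (e.g. MELD) need demuxing with
# a tool such as ffmpeg first.
if record["wav"].endswith(".wav"):
    audio, sr = sf.read(record["wav"])
    print(f"loaded {len(audio) / sr:.2f} s at {sr} Hz")
```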

Data Splits

The data is divided into train, test, and validation sets, defined by their respective JSON manifest files.

  • Train: 80%
  • Validation: 10%
  • Test: 10%

For added flexibility, unsplit data is also available in data.csv to allow custom splits.
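
A sketch of building a custom split from data.csv; stratifying by emotion is a choice made here for illustration, not the procedure used to produce the shipped 80/10/10 splits:

```python
# Sketch: custom 80/10/10 split from the unsplit manifest, stratified
# by emotion so rarer classes stay represented in every split.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")
train_df, rest = train_test_split(df, test_size=0.2, stratify=df["emotion"], random_state=0)
val_df, test_df = train_test_split(rest, test_size=0.5, stratify=rest["emotion"], random_state=0)
print(len(train_df), len(val_df), len(test_df))
```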

Dataset Creation

Curation Rationale

The SEIR-DB was curated to maximize the volume of data instances, addressing a significant limitation in speech emotion recognition (SER) experimentation—the lack of emotion data and the small size of available datasets. This database aims to resolve these issues by providing a large volume of emotion-annotated data that is cleanly formatted for experimentation.

Source Data

The dataset was compiled from multiple public emotional speech corpora, among them IEMOCAP, CREMA-D, RAVDESS, MELD, SAVEE, TESS, ESD, and DUSHA; see Citation Information for the full list of sources.

Annotations

Annotation process

For details on the annotation process, please refer to the documentation of each source dataset, as annotation procedures differed across sources. The entire database is, however, human-annotated.

Who are the annotators?

Please consult the source documentation for information on the annotators.

Personal and Sensitive Information

No attempt was made to remove personal or sensitive information, as the recordings and participant consent were obtained by the original dataset curators rather than internally.

Considerations for Using the Data

Social Impact of Dataset

The SEIR-DB dataset can significantly impact the research and development of speech emotion recognition technologies by providing a large volume of annotated data. These technologies have the potential to enhance various applications, such as mental health monitoring, virtual assistants, customer support, and communication devices for people with disabilities.

Discussion of Biases

During the dataset cleaning process, efforts were made to balance the database across source datasets, emotions (with a greater focus on primary emotions and less on secondary ones), and languages. However, biases may still be present.

Other Known Limitations

No specific limitations have been identified at this time.

Additional Information

Dataset Curators

Gabriel Giangi - Concordia University - Montreal, QC Canada - gabegiangi@gmail.com

Licensing Information

This dataset can be used for research and academic purposes. For commercial purposes, please contact gabegiangi@gmail.com.

Citation Information

Aljuhani, R. H., Alshutayri, A., & Alahdal, S. (2021). Arabic speech emotion recognition from Saudi dialect corpus. IEEE Access, 9, 127081-127085.

Baevski, A., Zhou, H., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS.

Basu, S., Chakraborty, J., & Aftabuddin, M. (2017). Emotion recognition from speech using convolutional neural network with recurrent neural network architecture. In ICCES.

Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359.

Cao, H., Cooper, D.G., Keutmann, M.K., Gur, R.C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5, 377-390.

Chopra, S., Mathur, P., Sawhney, R., & Shah, R. R. (2021). Meta-Learning for Low-Resource Speech Emotion Recognition. In ICASSP.

Costantini, G., Iaderola, I., Paoloni, A., & Todisco, M. (2014). EMOVO Corpus: an Italian Emotional Speech Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14) (pp. 3501-3504). European Language Resources Association (ELRA). Reykjavik, Iceland. http://www.lrec-conf.org/proceedings/lrec2014/pdf/591_Paper.pdf

Duville, Mathilde Marie; Alonso-Valerdi, Luz María; Ibarra-Zarate, David I. (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5

Gournay, Philippe, Lahaie, Olivier, & Lefebvre, Roch. (2018). A Canadian French Emotional Speech Dataset (1.1) [Data set]. ACM Multimedia Systems Conference (MMSys 2018) (MMSys'18), Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.1478765

Kandali, A., Routray, A., & Basu, T. (2008). Emotion recognition from Assamese speeches using MFCC features and GMM classifier. In TENCON.

Kondratenko, V., Sokolov, A., Karpov, N., Kutuzov, O., Savushkin, N., & Minkin, F. (2022). Large Raw Emotional Dataset with Aggregation Mechanism. arXiv preprint arXiv:2212.12266.

Kwon, S. (2021). MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 167, 114177.

Lee, Y., Lee, J. W., & Kim, S. (2019). Emotion recognition using convolutional neural network and multiple feature fusion. In ICASSP.

Li, Y., Baidoo, C., Cai, T., & Kusi, G. A. (2019). Speech emotion recognition using 1d cnn with no attention. In ICSEC.

Lian, Z., Tao, J., Liu, B., Huang, J., Yang, Z., & Li, R. (2020). Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition. In Interspeech.

Livingstone, S. R., & Russo, F. A. (2018). The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.

Peng, Z., Li, X., Zhu, Z., Unoki, M., Dang, J., & Akagi, M. (2020). Speech emotion recognition using 3d convolutions and attention-based sliding recurrent networks with auditory front-ends. IEEE Access, 8, 16560-16572.

Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2019). MELD: A multimodal multi-party dataset for emotion recognition in conversations. In ACL.

Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019). wav2vec: Unsupervised pre-training for speech recognition. In Interspeech.

Schuller, B., Rigoll, G., & Lang, M. (2010). Speech emotion recognition: Features and classification models. In Interspeech.

Sinnott, R. O., Radulescu, A., & Kousidis, S. (2013). Surrey Audio-Visual Expressed Emotion (SAVEE) database. In AVEC.

Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.

Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018, September). Subjective Evaluation of a Speech Emotion Recognition Interaction Framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.

Wang, Y., Yang, Y., Liu, Y., Chen, Y., Han, N., & Zhou, J. (2019). Speech emotion recognition using a combination of cnn and rnn. In Interspeech.

Yoon, S., Byun, S., & Jung, K. (2018). Multimodal speech emotion recognition using audio and text. In SLT.

Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In ACL.

Contributions

Gabriel Giangi - Concordia University - Montreal, QC Canada - gabegiangi@gmail.com
