Mispronunciation detection using self-supervised speech representations
Abstract
In recent years, self-supervised learning (SSL) models have produced promising results in a variety of speech-processing tasks, especially in contexts of data scarcity. In this paper, we study the use of SSL models for the task of mispronunciation detection for second language learners. We compare two downstream approaches: 1) training the model for phone recognition (PR) using native English data, and 2) training a model directly for the target task using non-native English data. We compare the performance of these two approaches for various SSL representations as well as a representation extracted from a traditional DNN-based speech recognition model. We evaluate the models on L2Arctic and EpaDB, two datasets of non-native speech annotated with pronunciation labels at the phone level. Overall, we find that using a downstream model trained for the target task gives the best performance and that most upstream models perform similarly for the task.