arxiv:2307.16324

Mispronunciation detection using self-supervised speech representations

Published on Jul 30, 2023
Abstract

In recent years, self-supervised learning (SSL) models have produced promising results in a variety of speech-processing tasks, especially in contexts of data scarcity. In this paper, we study the use of SSL models for the task of mispronunciation detection for second language learners. We compare two downstream approaches: 1) training the model for phone recognition (PR) using native English data, and 2) training a model directly for the target task using non-native English data. We compare the performance of these two approaches for various SSL representations as well as a representation extracted from a traditional DNN-based speech recognition model. We evaluate the models on L2Arctic and EpaDB, two datasets of non-native speech annotated with pronunciation labels at the phone level. Overall, we find that using a downstream model trained for the target task gives the best performance and that most upstream models perform similarly for the task.
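To make the phone-recognition (PR) approach concrete, here is a minimal sketch of the final comparison step: a recognizer trained on native speech decodes the learner's utterance into a phone sequence, and each decoded phone is checked against the canonical (target) pronunciation. The function name, the phone labels, and the one-to-one alignment assumption are illustrative only; a real system would use forced alignment and SSL features (e.g. from a wav2vec 2.0-style encoder), which the paper evaluates but are not shown here.

```python
# Hedged sketch: phone-level mispronunciation detection by comparing a
# phone recognizer's output against the canonical (target) pronunciation.
# Assumes the two sequences are already aligned one-to-one; real systems
# would obtain this alignment via forced alignment.

def detect_mispronunciations(canonical, recognized):
    """Return one label per phone: True if the phone decoded from the
    learner's speech differs from the canonical target phone."""
    if len(canonical) != len(recognized):
        raise ValueError("sequences must be aligned to equal length")
    return [c != r for c, r in zip(canonical, recognized)]

# Illustrative example: a learner saying "ship" with /iy/ instead of /ih/.
canonical = ["sh", "ih", "p"]
recognized = ["sh", "iy", "p"]
print(detect_mispronunciations(canonical, recognized))  # [False, True, False]
```

The alternative downstream approach the abstract describes would instead train a classifier to emit these correct/mispronounced labels directly from non-native data, skipping the intermediate phone decoding.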
