arXiv:2109.06912

fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit

Published on Sep 14, 2021
Abstract

This paper presents fairseq S^2, a fairseq extension for speech synthesis. We implement a number of autoregressive (AR) and non-AR text-to-speech models, and their multi-speaker variants. To enable training speech synthesis models with less curated data, a number of preprocessing tools are built and their importance is shown empirically. To facilitate faster iteration of development and analysis, a suite of automatic metrics is included. Apart from the features added specifically for this extension, fairseq S^2 also benefits from the scalability offered by fairseq and can be easily integrated with other state-of-the-art systems provided in this framework. The code, documentation, and pre-trained models are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_synthesis.
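Since the released pre-trained models are distributed through the linked fairseq repository and the Hugging Face Hub, a typical way to try one is via fairseq's TTS hub interface. The sketch below follows the usage pattern published on the released model cards; the specific model ID (`facebook/fastspeech2-en-ljspeech`) and vocoder choice are assumptions, and running it requires `fairseq` installed plus network access to download the checkpoint.

```python
# Sketch: synthesizing speech with a fairseq S^2 pre-trained model.
# Assumes fairseq is installed and the model ID below exists on the HF Hub.
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

# Load a non-AR (FastSpeech 2) model trained on LJSpeech, with a HiFi-GAN vocoder.
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",  # assumed model ID
    arg_overrides={"vocoder": "hifigan", "fp16": False},
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

# Convert input text to a model-ready sample, then synthesize a waveform.
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
# `wav` is a 1-D waveform tensor; `rate` is its sample rate in Hz.
```

The same interface covers the autoregressive models (e.g. Transformer TTS variants) mentioned in the abstract; only the hub model ID changes.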
