Pretrained Model of Amphion VITS

We provide a pretrained checkpoint of VITS trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips totaling approximately 24 hours.

Quick Start

To use the pretrained model, run the following commands:

Step1: Download the checkpoint

git lfs install
git clone https://huggingface.co/amphion/vits_ljspeech

Step2: Clone Amphion's Source Code from GitHub

git clone https://github.com/open-mmlab/Amphion.git

Step3: Specify the checkpoint's path

Create a soft link so that Amphion can find the checkpoint downloaded in Step 1:

cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits_ljspeech ckpts/tts/
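If the relative target of the link looks surprising, here is a self-contained sketch of how it resolves, assuming vits_ljspeech and Amphion were cloned side by side (the sketch uses a temporary directory in place of your real clones):

```shell
# Self-contained demo of how the relative symlink above resolves.
tmp=$(mktemp -d)
mkdir -p "$tmp/vits_ljspeech" "$tmp/Amphion/ckpts/tts"
cd "$tmp/Amphion"
ln -s ../../../vits_ljspeech ckpts/tts/
# The link lives inside ckpts/tts/, so ../../../ climbs out of
# tts/, ckpts/, and Amphion/ to the directory holding both clones.
readlink ckpts/tts/vits_ljspeech   # prints ../../../vits_ljspeech
ls ckpts/tts/vits_ljspeech         # resolves to the sibling checkout
```

A relative target is resolved from the directory containing the link, not from where you ran `ln`, which is why three `../` are needed even though the command is issued from the Amphion root.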

Step4: Inference

You can follow the inference part of this recipe to generate speech from text. For example, to synthesize a clip of speech for the text "This is a clip of generated speech with the given text from a TTS model.", run:

sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
    --config ckpts/tts/vits_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/vits_ljspeech/ \
    --infer_output_dir ckpts/tts/vits_ljspeech/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model."
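The synthesized audio is written under the directory passed as --infer_output_dir. A minimal sketch of collecting the generated WAVs afterwards (a stand-in directory is used here since this demo does not run the model; substitute ckpts/tts/vits_ljspeech/result in practice):

```shell
result=$(mktemp -d)            # stand-in for ckpts/tts/vits_ljspeech/result
: > "$result/example.wav"      # stand-in for a generated clip
find "$result" -name '*.wav'   # lists every synthesized WAV file
```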