---
license: mit
language:
- en
---

# Amphion Multi-Speaker TTS Pre-trained Model
|
## Quick Start
|
We provide a pre-trained checkpoint of [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS), trained on [Hi-Fi TTS](https://www.openslr.org/109/), a corpus of 291.6 hours of audio from 10 speakers, with at least 17 hours per speaker.
|
To use the pre-trained model, run the following commands:

### Step 1: Download the checkpoint
|
```bash
git lfs install
git clone https://huggingface.co/amphion/vits_hifitts
```
|
### Step 2: Clone Amphion's Source Code from GitHub
|
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
|
### Step 3: Specify the checkpoint's path

Create a symbolic link that points to the checkpoint downloaded in Step 1:
|
```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits_hifitts ckpts/tts/
```
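A relative `ln -s` target is resolved against the directory containing the link, not the directory you run the command from, so `../../../vits_hifitts` assumes the checkpoint repository sits in the same parent directory as `Amphion`. A minimal sketch of this resolution behavior, using hypothetical temporary directories in place of the real repositories:

```bash
# Sketch: a relative symlink target is resolved from the link's own
# directory (ckpts/tts/ here), not from where ln was invoked.
tmp=$(mktemp -d)
mkdir -p "$tmp/vits_hifitts" "$tmp/Amphion/ckpts/tts"
cd "$tmp/Amphion"
ln -s ../../../vits_hifitts ckpts/tts/
# ckpts/tts/vits_hifitts -> ../../../vits_hifitts, i.e. $tmp/vits_hifitts
test -d ckpts/tts/vits_hifitts && echo "link resolves"
```

If the link dangles (e.g. the checkpoint was cloned elsewhere), `ls -l ckpts/tts/` shows the unresolved target; passing an absolute path to `ln -s` avoids the layout assumption entirely.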
|
### Step 4: Inference
|
You can follow the inference part of this [recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS#4-inference) to generate speech from text. For example, to synthesize a clip of speech for the text "This is a clip of generated speech with the given text from a TTS model.", run:
|
```bash
sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
    --config ckpts/tts/vits_hifitts/args.json \
    --infer_expt_dir ckpts/tts/vits_hifitts/ \
    --infer_output_dir ckpts/tts/vits_hifitts/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model." \
    --infer_speaker_name "hifitts_92"
```
|
**Note**: The supported `infer_speaker_name` values are listed [here](https://huggingface.co/amphion/vits_hifitts/blob/main/spk2id.json).
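The valid speaker names are the keys of that JSON mapping. A minimal sketch of reading it, using a hypothetical two-entry excerpt in place of the real file (the `hifitts_6097` entry is an assumed example; only `hifitts_92` appears above):

```python
import json

# Hypothetical excerpt of spk2id.json; the real file ships with the
# checkpoint and maps each speaker name to an integer speaker ID.
sample = '{"hifitts_92": 0, "hifitts_6097": 1}'
spk2id = json.loads(sample)

# Every key is a valid --infer_speaker_name value.
print(sorted(spk2id))  # → ['hifitts_6097', 'hifitts_92']
```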