---
license: mit
---
|
|
|
# Pretrained Model of Amphion HiFi-GAN |
|
|
|
We provide the pre-trained checkpoint of [HiFi-GAN](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2) trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips with a total length of approximately 24 hours.
|
|
|
|
|
## Quick Start |
|
|
|
To use the pre-trained model, run the following commands:
|
|
|
### Step 1: Download the checkpoint
|
```bash
git lfs install
git clone https://huggingface.co/amphion/hifigan_ljspeech
```
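If `git-lfs` is not installed when you clone, Git leaves small text *pointer* files in place of the actual checkpoint weights. The sketch below (illustrative only, simulated with a temporary file so it can run anywhere) shows how to recognize a pointer: every LFS pointer file begins with the spec header `version https://git-lfs.github.com/spec/v1`.

```shell
# Illustrative check, not part of the official instructions:
# simulate an LFS pointer file and detect it by its first line.
tmp="$(mktemp)"
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:abc\nsize 123\n' > "$tmp"

if head -n 1 "$tmp" | grep -q '^version https://git-lfs.github.com/spec/v1'; then
    # In a real clone this means the weights were not fetched:
    # run `git lfs pull` inside the repository to download them.
    echo "pointer file detected"
else
    echo "real file contents"
fi
rm -f "$tmp"
```

Applied to the real clone, checking the first line of a checkpoint file this way tells you whether `git lfs pull` is still needed.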
|
|
|
### Step 2: Clone the Amphion source code from GitHub
|
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
|
|
|
### Step 3: Specify the checkpoint's path
|
Create a symbolic link so that Amphion can find the checkpoint downloaded in Step 1:
|
|
|
```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../hifigan_ljspeech ckpts/tts/
```
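A note on the three `../` hops: a relative symlink target is resolved from the directory that *contains the link*, not from where you ran `ln`. The sketch below reproduces the layout in a throwaway sandbox (hypothetical directories, not the real checkout) to show that a link placed in `Amphion/ckpts/tts` needs exactly three `../` to reach a checkpoint cloned next to the `Amphion` directory.

```shell
# Illustrative sandbox mimicking the directory layout from Steps 1-3.
sandbox="$(mktemp -d)"
mkdir -p "$sandbox/hifigan_ljspeech/checkpoints"
mkdir -p "$sandbox/Amphion/ckpts/tts"

cd "$sandbox/Amphion"
# The link lives in ckpts/tts, so its target resolves as:
# ckpts/tts -> ckpts -> Amphion -> parent dir (where the clone sits).
ln -s ../../../hifigan_ljspeech ckpts/tts/

# The link resolves: the checkpoints/ directory is visible through it.
test -d ckpts/tts/hifigan_ljspeech/checkpoints && echo "link resolves"

cd /
rm -rf "$sandbox"
```

If you cloned `hifigan_ljspeech` somewhere else, adjust the target path accordingly (or use an absolute path, which resolves the same way regardless of the link's location).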
|
|
|
### Step 4: Inference
|
|
|
This HiFi-GAN vocoder is pre-trained to support [Amphion FastSpeech 2](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2): it generates the speech waveform from the Mel spectrogram that the acoustic model predicts.
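As a back-of-the-envelope sketch of what the vocoder stage does dimensionally: a neural vocoder emits roughly `hop_size` waveform samples per Mel frame. The numbers below are assumptions for illustration (256-sample hop and the 22.05 kHz LJSpeech sampling rate are common configurations, but are not read from this model's config):

```shell
# Assumed values, for illustration only -- check the model's config
# for the actual hop size and sampling rate.
hop_size=256        # waveform samples generated per Mel frame (assumed)
sample_rate=22050   # LJSpeech sampling rate in Hz (assumed)
frames=200          # example Mel-spectrogram length in frames

samples=$((frames * hop_size))
duration_ms=$((samples * 1000 / sample_rate))   # integer milliseconds
echo "$samples samples, ~${duration_ms} ms of audio"
```

This is handy for sanity-checking output files: a generated clip whose length is far from `frames * hop_size` samples usually indicates a mismatched vocoder configuration.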
|
You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2#4-inference) to generate speech. (The command below assumes the FastSpeech 2 checkpoint has also been downloaded and linked at `ckpts/tts/fastspeech2_ljspeech`.) For example, to synthesize a clip of speech from the text "This is a clip of generated speech with the given text from a TTS model.", run:
|
|
|
```bash
sh egs/tts/FastSpeech2/run.sh --stage 3 \
    --config ckpts/tts/fastspeech2_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/fastspeech2_ljspeech/ \
    --infer_output_dir ckpts/tts/fastspeech2_ljspeech/results \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model." \
    --vocoder_dir ckpts/tts/hifigan_ljspeech/checkpoints/
```
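After inference finishes, the generated audio should appear under the directory you passed as `--infer_output_dir`. A quick way to confirm is to list the `.wav` files there; the sketch below simulates this with a sandbox and a hypothetical file name, since the recipe itself decides the actual file layout:

```shell
# Illustrative only: simulate an output directory containing one wav file.
outdir="$(mktemp -d)"
touch "$outdir/sample.wav"   # hypothetical file name

# In a real run, point this at ckpts/tts/fastspeech2_ljspeech/results instead.
count=$(find "$outdir" -name '*.wav' | wc -l)
echo "found $count wav file(s)"
rm -rf "$outdir"
```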
|
|
|
|