---
license: mit
---
# Pretrained Model of Amphion HiFi-GAN
We provide a pre-trained checkpoint of [HiFi-GAN](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2) trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips with a total length of approximately 24 hours.
## Quick Start
To use the pre-trained model, run the following commands:
### Step 1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/hifigan_ljspeech
```
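If Git LFS is unavailable, the checkpoint can also be downloaded with the `huggingface_hub` command-line tool; this is an optional alternative, assuming the `huggingface_hub` CLI is installed:
```bash
# Optional alternative to git clone (assumes: pip install -U "huggingface_hub[cli]")
huggingface-cli download amphion/hifigan_ljspeech --local-dir hifigan_ljspeech
```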
### Step 2: Clone Amphion's Source Code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
### Step 3: Specify the checkpoint's path
Create a symbolic link so that the checkpoint downloaded in Step 1 sits where the recipe expects it (the same path used by `--vocoder_dir` in Step 4):
```bash
cd Amphion
mkdir -p ckpts/vocoder
ln -s ../../../hifigan_ljspeech ckpts/vocoder/
```
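To sanity-check the link, you can list the target directory through the symlink; this is only a quick check, and the exact file layout inside the checkpoint repository may differ:
```bash
# Follow the symlink and list its contents; a broken link or an empty listing
# means the relative path above does not match where hifigan_ljspeech was cloned.
ls -lL ckpts/vocoder/hifigan_ljspeech/
```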
### Step 4: Inference
This HiFi-GAN vocoder is pre-trained to support [Amphion FastSpeech 2](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2) in generating speech waveforms from Mel spectrograms.
You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/FastSpeech2#4-inference) to generate speech. For example, to synthesize a clip of speech from the text "This is a clip of generated speech with the given text from a TTS model.", run (the command assumes the FastSpeech 2 checkpoint is available at `ckpts/tts/fastspeech2_ljspeech/`):
```bash
sh egs/tts/FastSpeech2/run.sh --stage 3 \
    --config ckpts/tts/fastspeech2_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/fastspeech2_ljspeech/ \
    --infer_output_dir ckpts/tts/fastspeech2_ljspeech/results \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model." \
    --vocoder_dir ckpts/vocoder/hifigan_ljspeech/checkpoints/
```
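Note that the command above also reads the FastSpeech 2 acoustic model from `ckpts/tts/fastspeech2_ljspeech/`. If that checkpoint is not in place yet, a minimal sketch following the same pattern as Steps 1 and 3 is shown below; the repository id `amphion/fastspeech2_ljspeech` is an assumption, so check the FastSpeech 2 recipe for the actual checkpoint location:
```bash
# Assumed repository id; verify against the FastSpeech 2 recipe before use.
cd ..                     # back to the directory that contains Amphion/
git clone https://huggingface.co/amphion/fastspeech2_ljspeech
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../fastspeech2_ljspeech ckpts/tts/
```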