---
license: mit
library_name: transformers
pipeline_tag: text-to-audio
---

# 🎵🎵🎵 AudioLCM: Text-to-Audio Generation with Latent Consistency Models

We develop AudioLCM, a text-to-audio generation model built on latent consistency models (LCMs).

## Code

Our code is released at https://github.com/liuhuadai/AudioLCM.

Please follow the instructions in the repository for installation, usage and experiments.

## Quickstart Guide

Download the AudioLCM model and generate audio from a text prompt:

```python
from pythonscripts.InferAPI import AudioLCMInfer

# Text prompt describing the desired audio
prompt = "Constant rattling noise and sharp vibrations"

# Paths to the model config, checkpoint, and vocoder (see the repository for downloads)
config_path = "./audiolcm.yaml"
model_path = "./audiolcm.ckpt"
vocoder_path = "./model/vocoder"

# Returns the path of the generated audio file
audio_path = AudioLCMInfer(prompt, config_path=config_path, model_path=model_path, vocoder_path=vocoder_path)
```
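
The returned `audio_path` points to the generated clip on disk. As a quick sanity check, you can load it with an audio library such as `soundfile` (not part of AudioLCM); a minimal sketch, assuming the output is a standard audio file readable by libsndfile:

```python
import soundfile as sf

# Minimal sketch: inspect the generated clip.
# Assumes the output is a WAV/FLAC file; adjust if the repository writes a different format.
audio, sample_rate = sf.read(audio_path)
print(f"Generated {audio.shape[0] / sample_rate:.2f} s of audio at {sample_rate} Hz -> {audio_path}")
```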

Use the AudioLCMBatchInfer function to generate multiple audio samples for a batch of text prompts:

```python
from pythonscripts.InferAPI import AudioLCMBatchInfer

# Batch of text prompts describing the desired audio clips
prompts = [
    "Constant rattling noise and sharp vibrations",
    "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle",
    "Humming and vibrating with a man and children speaking and laughing",
]

# Paths to the model config, checkpoint, and vocoder (see the repository for downloads)
config_path = "./audiolcm.yaml"
model_path = "./audiolcm.ckpt"
vocoder_path = "./model/vocoder"

audio_path = AudioLCMBatchInfer(prompts, config_path=config_path, model_path=model_path, vocoder_path=vocoder_path)
```
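
For larger batches you can build the `prompts` list programmatically instead of hard-coding it; a minimal sketch, assuming a plain-text file `prompts.txt` (a hypothetical filename, not part of the repository) with one prompt per line:

```python
# Minimal sketch: read prompts from a plain-text file, one prompt per line.
# "prompts.txt" is a hypothetical filename, not shipped with AudioLCM.
with open("prompts.txt", encoding="utf-8") as f:
    prompts = [line.strip() for line in f if line.strip()]
```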

## Demo

🎵🎵 You are welcome to try our demo 🎵🎵: https://huggingface.co/spaces/AIGC-Audio/AudioLCM