---
license: cc-by-nc-sa-4.0
datasets:
- bjoernp/AudioCaps
language:
- en
pipeline_tag: text-to-audio
tags:
- text-to-audio
---
# TANGO: Text to Audio using iNstruction-Guided diffusiOn
**TANGO** is a latent diffusion model for text-to-audio generation. **TANGO** can generate realistic audio, including human sounds, animal sounds, natural and artificial sounds, and sound effects, from textual prompts. We use the frozen instruction-tuned LLM Flan-T5 as the text encoder and train a UNet-based diffusion model for audio generation. We outperform the current state-of-the-art models for audio generation across both objective and subjective metrics. We release our model, training and inference code, and pre-trained checkpoints for the research community.
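As a rough illustration of the conditioning step, the sketch below encodes a prompt with a frozen Flan-T5 encoder via Hugging Face `transformers`. The checkpoint name `google/flan-t5-large` and the wiring are assumptions for illustration only; the actual pipeline lives in the repository linked below.
```python
from transformers import AutoTokenizer, T5EncoderModel
import torch

# Illustrative only: the exact Flan-T5 size used by TANGO is an assumption here.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
text_encoder = T5EncoderModel.from_pretrained("google/flan-t5-large")
text_encoder.eval()  # the text encoder stays frozen during TANGO training

with torch.no_grad():
    tokens = tokenizer("An audience cheering and clapping", return_tensors="pt")
    # Per-token embeddings serve as conditioning for the UNet-based diffusion model
    text_emb = text_encoder(**tokens).last_hidden_state
```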
📣 We are releasing [**Tango-Full-FT-Audiocaps**](https://huggingface.co/declare-lab/tango-full-ft-audiocaps), which was first pre-trained on [**TangoPromptBank**](https://huggingface.co/datasets/declare-lab/TangoPromptBank), a collection of diverse text-audio pairs, and then fine-tuned on AudioCaps. This checkpoint achieves state-of-the-art results for text-to-audio generation on AudioCaps.
## Code
Our code is released here: [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango)
We have uploaded several **TANGO**-generated samples here: [https://tango-web.github.io/](https://tango-web.github.io/)
Please follow the instructions in the repository for installation, usage, and experiments.
## Quickstart Guide
Download the **TANGO** model and generate audio from a text prompt:
```python
import IPython
import soundfile as sf
from tango import Tango

# The checkpoint is downloaded from the Hugging Face Hub and cached on first use
tango = Tango("declare-lab/tango")

prompt = "An audience cheering and clapping"
audio = tango.generate(prompt)  # waveform sampled at 16 kHz
sf.write(f"{prompt}.wav", audio, samplerate=16000)
IPython.display.Audio(data=audio, rate=16000)
```
[An audience cheering and clapping.webm](https://user-images.githubusercontent.com/13917097/233851915-e702524d-cd35-43f7-93e0-86ea579231a7.webm)
The model is downloaded automatically and saved in the cache on first use. Subsequent runs load the model directly from the cache.
The `generate` function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for higher-quality audio, at the cost of increased run time.
```python
prompt = "Rolling thunder with lightning strikes"
audio = tango.generate(prompt, steps=200)  # more steps: higher quality, slower sampling
IPython.display.Audio(data=audio, rate=16000)
```
[Rolling thunder with lightning strikes.webm](https://user-images.githubusercontent.com/13917097/233851929-90501e41-911d-453f-a00b-b215743365b4.webm)
Use the `generate_for_batch` function to generate multiple audio samples for a batch of text prompts:
```python
prompts = [
    "A car engine revving",
    "A dog barks and rustles with some clicking",
    "Water flowing and trickling",
]
audios = tango.generate_for_batch(prompts, samples=2)
```
This will generate two samples for each of the three text prompts.
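To save the batch outputs to disk, a minimal sketch is shown below. It assumes `audios` is a flat list of waveform arrays in prompt-major order (all samples for the first prompt, then the second, and so on); check the repository for the exact return structure of `generate_for_batch`.
```python
import soundfile as sf

# Assumption: `audios` holds len(prompts) * samples waveforms in prompt-major order
samples = 2
for i, audio in enumerate(audios):
    prompt = prompts[i // samples]  # the prompt this sample belongs to
    k = i % samples                 # sample index within that prompt
    sf.write(f"{prompt}_{k}.wav", audio, samplerate=16000)
```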
## Limitations
TANGO is trained on the small AudioCaps dataset, so it may not generate good audio samples for concepts that it has not seen during training (e.g., _singing_). For the same reason, TANGO does not always offer fine-grained control over its generations through the textual prompt. For example, the generations for the prompts _Chopping tomatoes on a wooden table_ and _Chopping potatoes on a metal table_ are very similar, and _Chopping vegetables on a table_ also produces similar audio samples. Training text-to-audio generation models on larger datasets is thus required for the model to learn the composition of textual concepts and varied text-audio mappings.
We are training another version of TANGO on larger datasets to enhance its generalization, compositionality, and controllability.