---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
- fr
- es
- pt
- pl
- de
- nl
- it
pipeline_tag: text-to-speech
inference: false
datasets:
- facebook/multilingual_librispeech
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls_eng
- parler-tts/mls-eng-speaker-descriptions
- ylacombe/mls-annotated
- ylacombe/cml-tts-filtered-annotated
- PHBJT/cml-tts-filtered
---
  
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>


# Parler-TTS Mini Multilingual v1.1

<a target="_blank" href="https://huggingface.co/spaces/PHBJT/multi_parler_tts">
  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

**Parler-TTS Mini Multilingual v1.1** is a multilingual extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1).

🚨 Compared to [Mini Multilingual v1](https://huggingface.co/parler-tts/parler-tts-mini-multilingual), this version was trained with a consistent set of speaker names and a better-structured description format. 🚨

It is fine-tuned on a [cleaned version](https://huggingface.co/datasets/PHBJT/cml-tts-filtered) of [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) and on the non-English portion of [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).
In all, this represents some 9,200 hours of non-English data. To retain English capabilities, we also added back the [LibriTTS-R English dataset](https://huggingface.co/datasets/parler-tts/libritts_r_filtered), some 580 hours of high-quality English data.

**Parler-TTS Mini Multilingual** can speak in 8 European languages: English, French, Spanish, Portuguese, Polish, German, Italian and Dutch. 

Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.
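
As a quick illustration, here is a minimal sketch (an illustrative example, not from the original card) that loads the checkpoint's prompt tokenizer and tokenizes text in a language outside the 8 training languages; with byte fallback, unseen characters decompose into byte-level pieces instead of an unknown token, so the text still round-trips:

```py
from transformers import AutoTokenizer

# Minimal sketch: the Czech example text is purely illustrative and was not
# among the training languages.
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")

text = "Příliš žluťoučký kůň"  # Czech, not among the 8 supported languages
ids = tokenizer(text).input_ids
print(tokenizer.convert_ids_to_tokens(ids))             # rare characters show up as byte pieces
print(tokenizer.decode(ids, skip_special_tokens=True))  # decodes back to the original text
```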

🚨 This work is the result of a collaboration between the **HuggingFace audio team** and the **[Quantum Squadra](https://quantumsquadra.com/) team**. The **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team** also provided advice and assistance in improving tokenization. 🚨


## πŸ“– Quick Index
* [πŸ‘¨β€πŸ’» Installation](#πŸ‘¨β€πŸ’»-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)

## πŸ› οΈ Usage

🚨 Unlike previous versions of Parler-TTS, this model uses two tokenizers: one for the prompt and one for the description. 🚨

### πŸ‘¨β€πŸ’» Installation

Using Parler-TTS is as simple as "bonjour". Just install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
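
You can then sanity-check the install with a quick import (an optional check, not an official step):

```sh
python -c "from parler_tts import ParlerTTSForConditionalGeneration; print('parler-tts OK')"
```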

### 🎲 Random voice

**Parler-TTS Mini Multilingual** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the model and both tokenizers: one for the transcript prompt,
# one for the voice description (they use different vocabularies).
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Salut toi, comment vas-tu aujourd'hui?"  # French: "Hey you, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

# Each input goes through its own tokenizer.
input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate the waveform and save it at the model's native sampling rate.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

### 🎯 Using a specific speaker

To ensure speaker consistency across generations, this checkpoint was also trained on 16 speakers, characterized by name (e.g. Daniel, Christine, Richard, Nicole, ...).

To take advantage of this, simply adapt your text description to specify which speaker to use: `Daniel's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Salut toi, comment vas-tu aujourd'hui?"
description = "Daniel's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

You can choose a speaker from this list: 

| Language   | Speaker Name | Training occurrences |
|------------|--------------|-----------------------------------------|
| Dutch      | Mark         | 460066                                  |
|            | Jessica      | 4438                                    |
|            | Michelle     | 83                                      |
| French     | Daniel       | 10719                                   |
|            | Michelle     | 19                                      |
|            | Christine    | 20187                                   |
|            | Megan        | 695                                     |
| German     | Nicole       | 53964                                   |
|            | Christopher  | 1671                                    |
|            | Megan        | 41                                      |
|            | Michelle     | 12693                                   |
| Italian    | Julia        | 2616                                    |
|            | Richard      | 9640                                    |
|            | Megan        | 4                                       |
| Polish     | Alex         | 25849                                   |
|            | Natalie      | 9384                                    |
| Portuguese | Sophia       | 34182                                   |
|            | Nicholas     | 4411                                    |
| Spanish    | Steven       | 74099                                   |
|            | Olivia       | 48489                                   |
|            | Megan        | 12                                      |

**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise (see the sketch after this list).
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the text description.
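
To illustrate the last two tips, here is a minimal sketch (reusing `model`, `tokenizer`, `description_tokenizer`, `device` and `sf` from the examples above; the exact descriptions are illustrative) that renders the same prompt under a "very clear audio" and a "very noisy audio" description:

```py
# Assumes the setup from the usage examples above has already been run.
prompt = "Salut toi, comment vas-tu aujourd'hui?"

descriptions = {
    "clear": "Daniel speaks at a moderate pace in very clear audio, with no background noise.",
    "noisy": "Daniel speaks at a moderate pace in very noisy audio, with heavy background noise.",
}

for label, description in descriptions.items():
    input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    sf.write(f"parler_tts_{label}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```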

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively. 

Unlike many other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
      title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
      author={Dan Lyth and Simon King},
      year={2024},
      eprint={2402.01912},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.