---
language:
- rw
pipeline_tag: text-to-speech
license: cc
tags:
- TTS
- Kinyarwanda
- Text to speech
---

## Model Description

<!-- Provide a longer summary of what this model is. -->
This is an end-to-end, deep-learning-based Kinyarwanda text-to-speech (TTS) model. It was trained with Coqui's TTS library using the YourTTS [1] architecture.

# Usage

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Install Coqui's TTS library:
```
pip install git+https://github.com/coqui-ai/TTS@0910cb76bcd85df56bf43654bb31427647cdfd0d#egg=TTS
```
Download the files from this repo (model.pth, config.json, speakers.pth, SE_checkpoint.pth.tar, and config_se.json).
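
If you prefer to script the download, the snippet below is a minimal sketch using the `huggingface_hub` package; the package and the placeholder repo id are assumptions, not part of this repo's instructions.

```
# Minimal download sketch (assumes `pip install huggingface_hub`).
# Replace the placeholder repo id with this model's actual id on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<user>/<this-repo>")
print(local_dir)  # contains model.pth, config.json, speakers.pth, etc.
```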

Then run:

```
tts --text "text" --model_path model.pth --encoder_path SE_checkpoint.pth.tar --encoder_config_path config_se.json --config_path config.json --speakers_file_path speakers.pth --speaker_wav conditioning_audio.wav --out_path out.wav
```
Here `--speaker_wav` supplies the conditioning audio: one or more wav files used to condition the multi-speaker TTS model through its Speaker Encoder. If you pass multiple file paths, the d-vector used for synthesis is computed as the average of the per-file embeddings.
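
The same checkpoints can also be driven from Python. The snippet below is a minimal sketch assuming the `Synthesizer` class from the pinned Coqui TTS commit above; constructor argument names may differ across versions, and `ref1.wav`/`ref2.wav` are hypothetical conditioning recordings.

```
# Minimal Python sketch (assumes the pinned Coqui TTS commit above;
# argument names may differ in other versions of the library).
from TTS.utils.synthesizer import Synthesizer

synthesizer = Synthesizer(
    tts_checkpoint="model.pth",
    tts_config_path="config.json",
    tts_speakers_file="speakers.pth",
    encoder_checkpoint="SE_checkpoint.pth.tar",
    encoder_config="config_se.json",
    use_cuda=False,
)

# Passing several reference wavs averages their d-vectors, as described above.
# ref1.wav and ref2.wav are placeholder file names.
wav = synthesizer.tts("text", speaker_wav=["ref1.wav", "ref2.wav"])
synthesizer.save_wav(wav, "out.wav")
```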

# References

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information should go in this section. -->
[1] [YourTTS paper](https://arxiv.org/pdf/2112.02418.pdf)

[2] [Kinyarwanda TTS: Using a multi-speaker dataset to build a Kinyarwanda TTS model](https://openreview.net/pdf?id=1gLgrqWnHF)