Moved legend to bottom
README.md
@@ -10,39 +10,6 @@ _above models sorted by amount of capabilities (lazy method - character count)_
Cloned the GitHub repo for easier viewing and embedding the above table as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525

-## Legend
-
-for the above TTS capability table
-
-* Processor ⚡ - Inference done by
-  * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though some leeway can be given if the model supports audio streaming
-  * CUDA by *NVIDIA*™
-  * ROCm by *AMD*™
-* Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
-  * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
-  * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American English focused phonetics
-* Insta-clone 👥 - Zero-shot model for quick voice cloning
-* Emotion control 🎭 - Able to force an emotional state of the speaker
-  * 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
-  * strict insta-clone switch 🎭👥 - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to go in between states
-  * strict control through prompt 🎭📖 - prompt input parameter
-* Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
-  * 📖 - Prompt as a separate input parameter
-  * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
-* Streaming support 🌊 - Can play back audio while it is still being generated
-* Speech control 🎚 - Ability to change the pitch, duration, etc. of generated speech
-* Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
-* Longform synthesis 📚 - Able to synthesize whole paragraphs
-
-An example, if the proprietary ElevenLabs were added to the capabilities table:
-| Name | Processor<br>⚡ | Phonetic alphabet<br>🔤 | Insta-clone<br>👥 | Emotional control<br>🎭 | Prompting<br>📖 | Speech control<br>🎚 | Streaming support<br>🌊 | Voice conversion<br>🦜 | Longform synthesis<br>📚 |
-|---|---|---|---|---|---|---|---|---|---|
-|ElevenLabs|CUDA|IPA, ARPAbet|👥|🎭📖|🗣📖|🎚 stability, voice similarity|🌊|🦜|📚 Projects|
-
-More info on the capabilities table can be found in the [GitHub Issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).
-
-Please create pull requests to update the info on the models.
-

---

# 🗣️ Open TTS Tracker
@@ -90,4 +57,41 @@ This is aimed as a resource to increase awareness for these models and to make i
| xVASynth | [Repo](https://github.com/DanRuta/xVA-Synth) | [Hub](https://huggingface.co/Pendrokar/xvapitch) | [GPL-3.0](https://github.com/DanRuta/xVA-Synth/blob/master/LICENSE.md) | [Yes](https://github.com/DanRuta/xva-trainer) | Multilingual | [Papers](https://huggingface.co/Pendrokar/xvapitch) | [🤗 Space](https://huggingface.co/spaces/Pendrokar/xVASynth) | Base model trained on copyrighted materials. |

* *Multilingual* - The number of supported languages is ever changing; check the Space and the Hub to see which specific languages are supported
-* *ALL* - Supports all natural languages; may not support artificial/constructed languages
+* *ALL* - Supports all natural languages; may not support artificial/constructed languages
+
+---
+
+## Legend
+
+for the [above](#) TTS capability table
+
+* Processor ⚡ - Inference done by
+  * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though some leeway can be given if the model supports audio streaming
+  * CUDA by *NVIDIA*™
+  * ROCm by *AMD*™
+* Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
+  * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
+  * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American English focused phonetics
+* Insta-clone 👥 - Zero-shot model for quick voice cloning
+* Emotion control 🎭 - Able to force an emotional state of the speaker
+  * 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
+  * strict insta-clone switch 🎭👥 - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to go in between states
+  * strict control through prompt 🎭📖 - prompt input parameter
+* Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
+  * 📖 - Prompt as a separate input parameter
+  * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
+* Streaming support 🌊 - Can play back audio while it is still being generated
+* Speech control 🎚 - Ability to change the pitch, duration, etc. of generated speech
+* Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
+* Longform synthesis 📚 - Able to synthesize whole paragraphs
+
+An example, if the proprietary ElevenLabs were added to the capabilities table:
+| Name | Processor<br>⚡ | Phonetic alphabet<br>🔤 | Insta-clone<br>👥 | Emotional control<br>🎭 | Prompting<br>📖 | Speech control<br>🎚 | Streaming support<br>🌊 | Voice conversion<br>🦜 | Longform synthesis<br>📚 |
+|---|---|---|---|---|---|---|---|---|---|
+|ElevenLabs|CUDA|IPA, ARPAbet|👥|🎭📖|🗣📖|🎚 stability, voice similarity|🌊|🦜|📚 Projects|
+
+More info on the capabilities table can be found in the [GitHub Issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).
+
+Please create pull requests to update the info on the models.
+
+---
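A side note on the CPU tag defined in the legend above: it hinges on real-time factor (RTF), here taken as synthesis wall-clock time divided by the duration of the audio produced, with the 2.0 cutoff quoted from the legend. The sketch below is only an illustration of that check; `qualifies_for_cpu_tag`, its parameters, and the toy synthesizer are hypothetical and not part of any listed model's API.

```python
import time


def qualifies_for_cpu_tag(synthesize, text: str, rtf_threshold: float = 2.0) -> bool:
    """Time one synthesis call and compare its real-time factor to the cutoff.

    `synthesize` is any callable that takes text and returns the duration
    (in seconds) of the audio it generated.
    """
    start = time.perf_counter()
    audio_seconds = synthesize(text)
    elapsed = time.perf_counter() - start
    rtf = elapsed / audio_seconds  # e.g. 0.3 s of compute for 0.2 s of audio -> RTF 1.5
    return rtf <= rtf_threshold


if __name__ == "__main__":
    # Toy stand-in: pretends to spend 0.3 s producing 0.2 s of audio (RTF 1.5).
    def fake_synthesize(text: str) -> float:
        time.sleep(0.3)
        return 0.2

    print(qualifies_for_cpu_tag(fake_synthesize, "Hello world"))  # True
```

An RTF of 1.0 means audio is produced exactly as fast as it plays back; the legend tolerates values up to 2.0, with a bit of extra leeway when a model can stream audio while it is still synthesizing.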
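For anyone scripting against the capabilities table, the ElevenLabs example row decomposes naturally into one record per model. The `CapabilityRow` class and field names below are invented for this sketch and are not a schema used by this repo; they simply mirror the table columns.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class CapabilityRow:
    name: str
    processor: list[str]          # ⚡  e.g. ["CPU", "CUDA", "ROCm"]
    phonetic_alphabet: list[str]  # 🔤  e.g. ["IPA", "ARPAbet"]
    insta_clone: bool             # 👥  zero-shot voice cloning
    emotion_control: str | None   # 🎭  e.g. "🎭📖" for prompt-controlled emotion
    prompting: str | None         # 📖  e.g. "🗣📖" when the prompt is also spoken
    speech_control: list[str]     # 🎚  adjustable parameters, e.g. pitch, duration
    streaming: bool               # 🌊  playback while audio is still being generated
    voice_conversion: bool        # 🦜  speech-to-speech support
    longform: str | None          # 📚  longform synthesis feature, if any


# The example row from the table above, expressed as a record.
elevenlabs = CapabilityRow(
    name="ElevenLabs",
    processor=["CUDA"],
    phonetic_alphabet=["IPA", "ARPAbet"],
    insta_clone=True,
    emotion_control="🎭📖",
    prompting="🗣📖",
    speech_control=["stability", "voice similarity"],
    streaming=True,
    voice_conversion=True,
    longform="Projects",
)
```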