Modalities: Text · Formats: csv · Size: < 1K · Libraries: Datasets, pandas
Pendrokar committed
Commit b89372e • 1 Parent(s): 7450d5b

Moved legend to bottom

Files changed (1)
  1. README.md +38 -34
README.md CHANGED
@@ -10,39 +10,6 @@ _above models sorted by amount of capabilities (lazy method - character count)_
 
 Cloned the GitHub repo for easier viewing and embedding of the above table, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
 
- ## Legend
-
- for the above TTS capability table
-
- * Processor ⚡ - Inference done by
-   * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though more leeway can be given if the model supports audio streaming
-   * CUDA by *NVIDIA*™
-   * ROCm by *AMD*™
- * Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
-   * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
-   * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American-English-focused phonetics
- * Insta-clone 👥 - Zero-shot model for quick voice cloning
- * Emotion control 🎭 - Able to force an emotional state of the speaker
-   * 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
-   * strict insta-clone switch 🎭👥 - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to go in between states
-   * strict control through prompt 🎭📖 - prompt input parameter
- * Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
-   * 📖 - Prompt as a separate input parameter
-   * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
- * Streaming support 🌊 - Can play back audio while it is still being generated
- * Speech control 🎚 - Ability to change the pitch, duration, etc. of generated speech
- * Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
- * Longform synthesis 📜 - Able to synthesize whole paragraphs
-
- Example if the proprietary ElevenLabs were to be added to the capabilities table:
- | Name | Processor<br>⚡ | Phonetic alphabet<br>🔤 | Insta-clone<br>👥 | Emotional control<br>🎭 | Prompting<br>📖 | Speech control<br>🎚 | Streaming support<br>🌊 | Voice conversion<br>🦜 | Longform synthesis<br>📜 |
- |---|---|---|---|---|---|---|---|---|---|
- |ElevenLabs|CUDA|IPA, ARPAbet|👥|🎭📖|🗣📖|🎚 stability, voice similarity|🌊|🦜|📜 Projects|
-
- More info on the capabilities table can be found within the [GitHub Issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).
-
- Please create pull requests to update the info on the models.
-
 ---
 
 # 🗣️ Open TTS Tracker
@@ -90,4 +57,41 @@ This is aimed as a resource to increase awareness for these models and to make i
 | xVASynth | [Repo](https://github.com/DanRuta/xVA-Synth) | [Hub](https://huggingface.co/Pendrokar/xvapitch) | [GPL-3.0](https://github.com/DanRuta/xVA-Synth/blob/master/LICENSE.md) | [Yes](https://github.com/DanRuta/xva-trainer) | Multilingual | [Papers](https://huggingface.co/Pendrokar/xvapitch) | [🤗 Space](https://huggingface.co/spaces/Pendrokar/xVASynth) | Base model trained on copyrighted materials. |
 
 * *Multilingual* - The number of supported languages is ever changing; check the Space and Hub for which specific languages are supported
- * *ALL* - Supports all natural languages; may not support artificial/constructed languages
+ * *ALL* - Supports all natural languages; may not support artificial/constructed languages
+
+ ---
+
+ ## Legend
+
+ for the [above](#) TTS capability table
+
+ * Processor ⚡ - Inference done by
+   * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though more leeway can be given if the model supports audio streaming
+   * CUDA by *NVIDIA*™
+   * ROCm by *AMD*™
+ * Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
+   * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
+   * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American-English-focused phonetics
+ * Insta-clone 👥 - Zero-shot model for quick voice cloning
+ * Emotion control 🎭 - Able to force an emotional state of the speaker
+   * 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
+   * strict insta-clone switch 🎭👥 - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to go in between states
+   * strict control through prompt 🎭📖 - prompt input parameter
+ * Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
+   * 📖 - Prompt as a separate input parameter
+   * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
+ * Streaming support 🌊 - Can play back audio while it is still being generated
+ * Speech control 🎚 - Ability to change the pitch, duration, etc. of generated speech
+ * Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
+ * Longform synthesis 📜 - Able to synthesize whole paragraphs
+
+ Example if the proprietary ElevenLabs were to be added to the capabilities table:
+ | Name | Processor<br>⚡ | Phonetic alphabet<br>🔤 | Insta-clone<br>👥 | Emotional control<br>🎭 | Prompting<br>📖 | Speech control<br>🎚 | Streaming support<br>🌊 | Voice conversion<br>🦜 | Longform synthesis<br>📜 |
+ |---|---|---|---|---|---|---|---|---|---|
+ |ElevenLabs|CUDA|IPA, ARPAbet|👥|🎭📖|🗣📖|🎚 stability, voice similarity|🌊|🦜|📜 Projects|
+
+ More info on the capabilities table can be found within the [GitHub Issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).
+
+ Please create pull requests to update the info on the models.
+
+ ---
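
For reference, the real-time factor (RTF) behind the legend's CPU tag is simply wall-clock synthesis time divided by the duration of the audio produced. Below is a minimal measurement sketch, assuming a hypothetical `synthesize()` callable that returns raw samples plus a sample rate (not any specific model's API):

```python
import time

def real_time_factor(synthesize, text: str) -> float:
    """RTF = wall-clock synthesis time / duration of the generated audio.

    RTF < 1.0 means faster than real time; per the legend above, a model
    qualifies for the CPU tag when RTF stays below 2.0 on CPU (with some
    leeway if it supports streaming playback).
    """
    start = time.perf_counter()
    samples, sample_rate = synthesize(text)  # hypothetical TTS call
    elapsed = time.perf_counter() - start
    return elapsed / (len(samples) / sample_rate)

# Usage sketch:
# rtf = real_time_factor(model.synthesize, "Hello world")
# print(f"RTF {rtf:.2f} -> CPU tag: {rtf < 2.0}")
```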
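
Likewise, the phonetic alphabet 🔤 tag means a model accepts IPA or ARPAbet input. One common way to produce an IPA transcription before inference is the `phonemizer` package with its espeak backend; a sketch, not tied to any model in the table:

```python
# pip install phonemizer   (requires the espeak-ng system library)
from phonemizer import phonemize

# Grapheme-to-phoneme conversion; feed the IPA string to an IPA-capable model
ipa = phonemize("Hello world", language="en-us", backend="espeak")
print(ipa)  # roughly "həloʊ wɜːld" -- exact output varies by espeak-ng version
```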
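
Finally, since the dataset card lists csv alongside the Datasets and pandas libraries, the tracker table itself can be read as tabular data. A sketch with a hypothetical file name (substitute the actual CSV stored in this repo):

```python
import pandas as pd

# "open_tts_tracker.csv" is a placeholder file name -- point this at the
# CSV file in this repository
df = pd.read_csv("open_tts_tracker.csv")
print(df.head())  # one row per model, one column per tracked capability
```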