Commit 316d708 by Pendrokar (parent: fabd830): legend update

Files changed (1): README.md (+16 -10)

README.md CHANGED
@@ -6,14 +6,14 @@ configs:
 size_categories:
 - n<1K
 ---
-Cloned the GitHub repo for easier viewing and embedding the above table as requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
+Cloned the GitHub repo for easier viewing and embedding the above table as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
 
 ## Legend
 
 for the above TTS capability table
 
 * Processor ⚡ - Inference done by
-  * CPU (CPU**s** = multithreaded) - Real-time factor should be below 2.0 to qualify for CPU, though some more leeway can be given if it supports audio streaming
+  * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though some more leeway can be given if it supports audio streaming
   * CUDA by *NVIDIA*™
   * ROCm by *AMD*™
 * Phonetic alphabet 🔀 - Phonetic transcription that allows controlling the pronunciation of words before inference
@@ -21,15 +21,21 @@ for the above TTS capability table
   * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American English focused phonetics
 * Insta-clone 👥 - Zero-shot model for quick voice cloning
 * Emotion control 🎭 - Able to force an emotional state of the speaker
-  * Yes / 🎭 <# emotions>
-  * strict insta-clone switch 🎭👥 - clone may sound different than normal speaking voice; no ability to go in-between states
+  * 🎭 <# emotions>
+    * 😡 anger
+    * 😃 happiness
+    * 😭 sadness
+    * 😯 surprise
+    * 🤫 whispering
+    * 😊 friendliness
+  * strict insta-clone switch 🎭👥 - cloned on a sample with a specific emotion; may sound different than the normal speaking voice; no ability to go in-between states
   * strict control through prompt 🎭📖 - prompt input parameter
-* Prompting 📖 - Also a side effect of narrator based datasets and a way to affect the emotional state, [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
-  * Yes / 📖 - Prompt as a separate input parameter
-  * Yes / 🗣📖 - Prompt is spoken by TTS
-* Streaming support 🌊 - If it is possible to playback audio that is still being generated
-* Speech control 🎚 - Ability to change the pitch, duration, energy and/or emotion of generated speech
-* Voice conversion / Speech-To-Speech support 🦜 - Yes/No (Streaming support implies real-time S2S; S2T=>T2S does not count)
+* Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
+  * 📖 - Prompt as a separate input parameter
+  * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
+* Streaming support 🌊 - Can play back audio while it is still being generated
+* Speech control 🎚 - Ability to change the pitch, duration, etc. of the generated speech
+* Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
 * Longform synthesis - Able to synthesize whole paragraphs
 
 A _null_ value means unfilled/unknown. 🤷‍♂️ Please create pull requests to update the info on the models.
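
The CPU ⚡ criterion in the legend above is stated in terms of real-time factor. The tracker does not prescribe a measurement script, so the following is only a minimal sketch, assuming the common definition RTF = wall-clock synthesis time divided by the duration of the generated audio; `synthesize` is a hypothetical single-shot TTS callable and `soundfile` is an assumed dependency for reading the output WAV:

```python
import time

import soundfile as sf  # assumed dependency for reading the generated WAV


def real_time_factor(synthesize, text, out_path="out.wav"):
    """RTF = wall-clock synthesis time / duration of the generated audio."""
    start = time.perf_counter()
    synthesize(text, out_path)  # hypothetical TTS call that writes a WAV file
    elapsed = time.perf_counter() - start

    audio, sample_rate = sf.read(out_path)
    duration_s = len(audio) / sample_rate  # seconds of generated speech
    return elapsed / duration_s


# Per the legend, an RTF below roughly 2.0 when running on CPU qualifies a model
# for the CPU tag, with extra leeway if the model also supports streaming.
```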
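Likewise, "streaming support 🌊" in the legend means audio can be played back while later chunks are still being generated. A rough illustration of the idea, assuming a hypothetical chunk-yielding synthesizer and using `sounddevice` for playback (neither is prescribed by the tracker):

```python
import numpy as np
import sounddevice as sd  # assumed playback backend

SAMPLE_RATE = 22_050  # assumed output rate of the hypothetical model


def stream_playback(chunk_generator):
    """Play audio chunks as they arrive instead of waiting for the full clip."""
    with sd.OutputStream(samplerate=SAMPLE_RATE, channels=1, dtype="float32") as stream:
        for chunk in chunk_generator:  # each chunk: 1-D float32 NumPy array
            stream.write(np.ascontiguousarray(chunk, dtype=np.float32))


# `tts.synthesize_stream(...)` is purely hypothetical; any model that can yield
# audio incrementally enables this kind of playback-while-generating:
# stream_playback(tts.synthesize_stream("Hello world"))
```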