Pendrokar committed on
Commit cea62b6 • 1 Parent(s): 89785fb

Legend expanded

Files changed (1)
  1. README.md +18 -8
README.md CHANGED
@@ -9,14 +9,24 @@ size_categories:
 Cloned the GitHub repo for easier viewing and embedding the above table. https://github.com/Vaibhavs10/open-tts-tracker
 
 Legend for the above TTS capability table:
-* Processor - CPU (1️⃣/♾ cores)/CUDA/ROCm (single/multi used for inference; Real-time factor should be below 2.0 to qualify for CPU, though some leeway can be given if it supports audio streaming)
-* Phonetic alphabet - None/[IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet)/[ARPAbet](https://en.wikipedia.org/wiki/ARPABET) (Phonetic transcription that allows to control pronunciation of certain words during inference)
-* Insta-clone - Yes/No (Zero-shot model for quick voice clone)
-* Emotion control - Yes🎭/Strict (Strict, as in has no ability to go in-between states, insta-clone switch/🎭👥, control through prompt 🎭📖)
-* Prompting - Yes/No (A side effect of narrator based datasets and a way to affect the emotional state, [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion))
-* Streaming support - Yes/No (If it is possible to playback audio that is still being generated)
-* Speech control - speed/pitch/ (Ability to change the pitch, duration, energy and/or emotion of generated speech)
-* Voice conversion / Speech-To-Speech support - Yes/No (Streaming support implies real-time S2S; S2T=>T2S does not count)
+* Processor ⚡ - Inference done by:
+  * CPU (1️⃣/♾ cores) - real-time factor should be below 2.0 to qualify, though some leeway can be given if the model supports audio streaming
+  * CUDA by NVIDIA (single/SLI)
+  * ROCm by AMD (single/CrossFire)
+* Phonetic alphabet 🔤 - phonetic transcription that allows controlling the pronunciation of words before inference:
+  * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
+  * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - phonetic alphabet focused on American English
+* Insta-clone 👥 - zero-shot model for quick voice cloning
+* Emotion control 🎭 - able to force an emotional state on the speaker:
+  * Yes / 🎭 <# emotions>
+  * strict, insta-clone switch / 🎭👥 - the clone may sound different from the normal speaking voice; no ability to blend between states
+  * strict, control through prompt / 🎭📖 - set via a prompt input parameter
+* Prompting - a side effect of narrator-based datasets and a way to affect the emotional state ([ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)):
+  * Yes / 📖 - prompt as a separate input parameter
+  * Yes / 🗣📖 - prompt is spoken by the TTS
+* Streaming support 🌊 - whether it is possible to play back audio that is still being generated
+* Speech control 🎚 - ability to change the pitch, duration, energy and/or emotion of generated speech
+* Voice conversion / Speech-To-Speech support 🦜 - Yes/No (streaming support implies real-time S2S; S2T => T2S does not count)
 * Longform synthesis - Able to synthesize whole paragraphs
 
 A _null_ value means unfilled/unknown. Please create pull requests to update the info on the models.
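The legend's CPU qualification rule hinges on the real-time factor (RTF): synthesis wall-clock time divided by the duration of the audio produced, with values below 2.0 qualifying. A minimal sketch of how RTF could be measured, where `dummy_tts` is a hypothetical stand-in for any TTS callable (not a real library API):

```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    """Run a TTS callable and return (audio, rtf).

    `synthesize` is a stand-in for any function mapping text to a
    sequence of samples at `sample_rate`; RTF = time spent / seconds
    of audio produced, so lower means faster than real time.
    """
    start = time.perf_counter()
    audio = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(audio) / sample_rate
    return audio, elapsed / audio_seconds

# Toy stand-in "TTS" that emits one second of silence per word.
def dummy_tts(text, sample_rate=22050):
    return [0.0] * (sample_rate * len(text.split()))

audio, rtf = real_time_factor(dummy_tts, "hello world")
qualifies_for_cpu = rtf < 2.0  # the legend's CPU threshold
```

Streaming support relaxes this threshold because playback can begin before synthesis finishes, so a model slower than real time may still feel responsive.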