Pendrokar committed
Commit
9b65230
1 Parent(s): 4669dba

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -9,10 +9,10 @@ size_categories:
 Cloned the GitHub repo for easier viewing and embedding the above table. https://github.com/Vaibhavs10/open-tts-tracker
 
 Legend for the above TTS capability table:
-* Processor - CPU (1/♾ cores)/CUDA/ROCm (single/multi used for inference; Real-time factor should be below 2.0 to qualify for CPU, though some leeway can be given if it supports audio streaming)
+* Processor - CPU (1️⃣/♾ cores)/CUDA/ROCm (single/multi used for inference; Real-time factor should be below 2.0 to qualify for CPU, though some leeway can be given if it supports audio streaming)
 * Phonetic alphabet - None/[IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet)/[ARPAbet](https://en.wikipedia.org/wiki/ARPABET) (Phonetic transcription that allows to control pronunciation of certain words during inference)
 * Insta-clone - Yes/No (Zero-shot model for quick voice clone)
-* Emotional control - Yes🎭/Strict (Strict, as in has no ability to go in-between states, insta-clone switch/🎭👥)
+* Emotion control - Yes🎭/Strict (Strict, as in has no ability to go in-between states, insta-clone switch/🎭👥, control through prompt 🎭📖)
 * Prompting - Yes/No (A side effect of narrator based datasets and a way to affect the emotional state, [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion))
 * Streaming support - Yes/No (If it is possible to playback audio that is still being generated)
 * Speech control - speed/pitch/ (Ability to change the pitch, duration, energy and/or emotion of generated speech)
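The Processor entry in the legend uses the real-time factor as the bar for counting a model as CPU-capable. As a minimal sketch (the timing values here are hypothetical, not taken from any model in the table), the real-time factor is the time spent synthesizing divided by the duration of the audio produced:

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent generating / duration of the generated audio.

    RTF < 1.0 means the model synthesizes faster than real time;
    the legend above uses RTF < 2.0 as the threshold for listing
    a model as CPU-capable (with some leeway for streaming models).
    """
    return synthesis_seconds / audio_seconds


# Hypothetical example: 6 s of audio generated in 9 s of CPU time.
rtf = real_time_factor(9.0, 6.0)
print(f"RTF = {rtf:.2f}")  # 1.50, under the 2.0 CPU bar
```

A streaming-capable model with an RTF slightly above 2.0 can still be usable on CPU, since playback of the first chunks can begin while the rest is still being generated.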