|
--- |
|
configs: |
|
- config_name: default |
|
data_files: '*.tsv' |
|
  sep: "\t"
|
--- |
|
_Above models are sorted by the number of capabilities_; see the [legend](#legend)
|
|
|
Cloned from the GitHub repo for easier viewing and for embedding the above table, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
|
|
|
--- |
|
|
|
# π£οΈ Open TTS Tracker |
|
|
|
A one-stop shop to track all open-access / open-source Text-To-Speech (TTS) models as they come out. Feel free to make a PR for any that aren't linked here.
|
|
|
This is intended as a resource to raise awareness of these models and to make it easier for researchers, developers, and enthusiasts to stay informed about the latest advancements in the field.
|
|
|
> [!NOTE] |
|
> This repo will only track open source/access codebase TTS models. More motivation for everyone to open-source! π€ |
|
|
|
Some of the models are also being battle tested at TTS arenas: |
|
* π [TTS Arena](https://huggingface.co/spaces/TTS-AGI/TTS-Arena) - the _Battle_ tab lets you pick two candidates and compare them
|
* π€π [TTS Spaces Arena](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena) - uses online Hugging Face Spaces that have the Gradio API enabled
|
|
|
| Name | GitHub | Weights | License | Fine-tune | Languages | Paper | Demo | Issues | |
|
|---|---|---|---|---|---|---|---|---| |
|
| AI4Bharat | [Repo](https://github.com/AI4Bharat/Indic-TTS) | [Hub](https://huggingface.co/ai4bharat) | [MIT](https://github.com/AI4Bharat/Indic-TTS/blob/master/LICENSE.txt) | [Yes](https://github.com/AI4Bharat/Indic-TTS?tab=readme-ov-file#training-steps) | Indic | [Paper](https://arxiv.org/abs/2211.09536) | [Demo](https://models.ai4bharat.org/#/tts) | |
|
| Amphion | [Repo](https://github.com/open-mmlab/Amphion) | [Hub](https://huggingface.co/amphion) | [MIT](https://github.com/open-mmlab/Amphion/blob/main/LICENSE) | No | Multilingual | [Paper](https://arxiv.org/abs/2312.09911) | [π€ Space](https://huggingface.co/amphion) | | |
|
| Bark | [Repo](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bark) | [Hub](https://huggingface.co/suno/bark) | [MIT](https://github.com/suno-ai/bark/blob/main/LICENSE) | No | Multilingual | [Paper](https://arxiv.org/abs/2209.03143) | [π€ Space](https://huggingface.co/spaces/suno/bark) | | |
|
| EmotiVoice | [Repo](https://github.com/netease-youdao/EmotiVoice) | [GDrive](https://drive.google.com/drive/folders/1y6Xwj_GG9ulsAonca_unSGbJ4lxbNymM) | [Apache 2.0](https://github.com/netease-youdao/EmotiVoice/blob/main/LICENSE) | [Yes](https://github.com/netease-youdao/EmotiVoice/wiki/Voice-Cloning-with-your-personal-data) | ZH + EN | Not Available | Not Available | Separate [GUI agreement](https://github.com/netease-youdao/EmotiVoice/blob/main/EmotiVoice_UserAgreement_%E6%98%93%E9%AD%94%E5%A3%B0%E7%94%A8%E6%88%B7%E5%8D%8F%E8%AE%AE.pdf) | |
|
| F5-TTS | [Repo](https://github.com/SWivid/F5-TTS) | [Hub](https://huggingface.co/SWivid/F5-TTS) | [MIT](https://github.com/SWivid/F5-TTS/blob/master/LICENSE) | Yes | ZH + EN | [Paper](https://arxiv.org/abs/2410.06885) | [π€ Space](https://huggingface.co/spaces/mrfakename/E2-F5-TTS) | | |
|
| Fish Speech | [Repo](https://github.com/fishaudio/fish-speech) | [Hub](https://huggingface.co/fishaudio) | [CC-BY-NC-SA 4.0](https://github.com/fishaudio/fish-speech/blob/master/LICENSE) | Yes | Multilingual | Not Available | [π€ Space](https://huggingface.co/spaces/fishaudio/fish-speech-1) | | |
|
| Glow-TTS | [Repo](https://github.com/jaywalnut310/glow-tts) | [GDrive](https://drive.google.com/file/d/1JiCMBVTG4BMREK8cT3MYck1MgYvwASL0/view) | [MIT](https://github.com/jaywalnut310/glow-tts/blob/master/LICENSE) | [Yes](https://github.com/jaywalnut310/glow-tts?tab=readme-ov-file#2-pre-requisites) | English | [Paper](https://arxiv.org/abs/2005.11129) | [GH Pages](https://jaywalnut310.github.io/glow-tts-demo/index.html) | | |
|
| GPT-SoVITS | [Repo](https://github.com/RVC-Boss/GPT-SoVITS) | [Hub](https://huggingface.co/lj1995/GPT-SoVITS) | [MIT](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) | [Yes](https://github.com/RVC-Boss/GPT-SoVITS?tab=readme-ov-file#pretrained-models) | Multilingual | Not Available | Not Available | | |
|
| HierSpeech++ | [Repo](https://github.com/sh-lee-prml/HierSpeechpp) | [GDrive](https://drive.google.com/drive/folders/1-L_90BlCkbPyKWWHTUjt5Fsu3kz0du0w) | [MIT](https://github.com/sh-lee-prml/HierSpeechpp/blob/main/LICENSE) | No | KR + EN | [Paper](https://arxiv.org/abs/2311.12454) | [π€ Space](https://huggingface.co/spaces/LeeSangHoon/HierSpeech_TTS) | | |
|
| IMS-Toucan | [Repo](https://github.com/DigitalPhonetics/IMS-Toucan) | [GH release](https://github.com/DigitalPhonetics/IMS-Toucan/tags) | [Apache 2.0](https://github.com/DigitalPhonetics/IMS-Toucan/blob/ToucanTTS/LICENSE) | [Yes](https://github.com/DigitalPhonetics/IMS-Toucan#build-a-toucantts-pipeline) | ALL\* | [Paper](https://arxiv.org/abs/2206.12229) | [π€ Space](https://huggingface.co/spaces/Flux9665/IMS-Toucan), [π€ Space](https://huggingface.co/spaces/Flux9665/MassivelyMultilingualTTS)\* | | |
|
| MahaTTS | [Repo](https://github.com/dubverse-ai/MahaTTS) | [Hub](https://huggingface.co/Dubverse/MahaTTS) | [Apache 2.0](https://github.com/dubverse-ai/MahaTTS/blob/main/LICENSE) | No | English + Indic | Not Available | [Recordings](https://github.com/dubverse-ai/MahaTTS/blob/main/README.md#sample-outputs), [Colab](https://colab.research.google.com/drive/1qkZz2km-PX75P0f6mUb2y5e-uzub27NW?usp=sharing) | | |
|
| MaskGCT (Amphion) | [Repo](https://github.com/open-mmlab/Amphion) | [Hub](https://huggingface.co/amphion/MaskGCT) | [CC-BY-NC 4.0](https://huggingface.co/amphion/MaskGCT) | No | Multilingual | [Paper](https://arxiv.org/abs/2409.00750) | [π€ Space](https://huggingface.co/spaces/amphion/maskgct) | | |
|
| Matcha-TTS | [Repo](https://github.com/shivammehta25/Matcha-TTS) | [GDrive](https://drive.google.com/drive/folders/17C_gYgEHOxI5ZypcfE_k1piKCtyR0isJ) | [MIT](https://github.com/shivammehta25/Matcha-TTS/blob/main/LICENSE) | [Yes](https://github.com/shivammehta25/Matcha-TTS/tree/main#train-with-your-own-dataset) | English | [Paper](https://arxiv.org/abs/2309.03199) | [π€ Space](https://huggingface.co/spaces/shivammehta25/Matcha-TTS) | GPL-licensed phonemizer | |
|
| MeloTTS | [Repo](https://github.com/myshell-ai/MeloTTS) | [Hub](https://huggingface.co/myshell-ai) | [MIT](https://github.com/myshell-ai/MeloTTS/blob/main/LICENSE) | Yes | Multilingual | Not Available | [π€ Space](https://huggingface.co/spaces/mrfakename/MeloTTS) | | |
|
| MetaVoice-1B | [Repo](https://github.com/metavoiceio/metavoice-src) | [Hub](https://huggingface.co/metavoiceio/metavoice-1B-v0.1/tree/main) | [Apache 2.0](https://github.com/metavoiceio/metavoice-src/blob/main/LICENSE) | [Yes](https://github.com/metavoiceio/metavoice-src?tab=readme-ov-file) | Multilingual | Not Available | [π€ Space](https://huggingface.co/spaces/mrfakename/MetaVoice-1B-v0.1) | | |
|
| Neural-HMM TTS | [Repo](https://github.com/shivammehta25/Neural-HMM) | [GitHub](https://github.com/shivammehta25/Neural-HMM/releases) | [MIT](https://github.com/shivammehta25/Neural-HMM/blob/main/LICENSE) | [Yes](https://github.com/shivammehta25/Neural-HMM?tab=readme-ov-file#setup-and-training-using-lj-speech) | English | [Paper](https://arxiv.org/abs/2108.13320) | [GH Pages](https://shivammehta25.github.io/Neural-HMM/) | | |
|
| OpenVoice | [Repo](https://github.com/myshell-ai/OpenVoice) | [Hub](https://huggingface.co/myshell-ai/OpenVoice) | [MIT](https://github.com/myshell-ai/OpenVoice/blob/main/LICENSE) | No | Multilingual | [Paper](https://arxiv.org/abs/2312.01479) | [π€ Space](https://huggingface.co/spaces/myshell-ai/OpenVoice) | | |
|
| OverFlow TTS | [Repo](https://github.com/shivammehta25/OverFlow) | [GitHub](https://github.com/shivammehta25/OverFlow/releases) | [MIT](https://github.com/shivammehta25/OverFlow/blob/main/LICENSE) | [Yes](https://github.com/shivammehta25/OverFlow/tree/main?tab=readme-ov-file#setup-and-training-using-lj-speech) | English | [Paper](https://arxiv.org/abs/2211.06892) | [GH Pages](https://shivammehta25.github.io/OverFlow/) | | |
|
| Parler TTS | [Repo](https://github.com/huggingface/parler-tts) | [Hub](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) | [Apache 2.0](https://github.com/huggingface/parler-tts/blob/main/LICENSE) | [Yes](https://github.com/huggingface/parler-tts/tree/main/training) | English | Not Available | [π€ Space](https://huggingface.co/spaces/parler-tts/parler_tts) | | |
|
| pflowTTS | [Unofficial Repo](https://github.com/p0p4k/pflowtts_pytorch) | [GDrive](https://drive.google.com/drive/folders/1x-A2Ezmmiz01YqittO_GLYhngJXazaF0) | [MIT](https://github.com/p0p4k/pflowtts_pytorch/blob/master/LICENSE) | [Yes](https://github.com/p0p4k/pflowtts_pytorch#instructions-to-run) | English | [Paper](https://openreview.net/pdf?id=zNA7u7wtIN) | Not Available | GPL-licensed phonemizer | |
|
| Pheme | [Repo](https://github.com/PolyAI-LDN/pheme) | [Hub](https://huggingface.co/PolyAI/pheme) | [CC-BY](https://github.com/PolyAI-LDN/pheme/blob/main/LICENSE) | [Yes](https://github.com/PolyAI-LDN/pheme#training) | English | [Paper](https://arxiv.org/abs/2401.02839) | [π€ Space](https://huggingface.co/spaces/PolyAI/pheme) | | |
|
| Piper | [Repo](https://github.com/rhasspy/piper) | [Hub](https://huggingface.co/datasets/rhasspy/piper-checkpoints/) | [MIT](https://github.com/rhasspy/piper/blob/master/LICENSE.md) | [Yes](https://github.com/rhasspy/piper/blob/master/TRAINING.md) | Multilingual | Not Available | Not Available | [GPL-licensed phonemizer](https://github.com/rhasspy/piper/issues/93) | |
|
| RAD-MMM | [Repo](https://github.com/NVIDIA/RAD-MMM) | [GDrive](https://drive.google.com/file/d/1p8SEVHRlyLQpQnVP2Dc66RlqJVVRDCsJ/view) | [MIT](https://github.com/NVIDIA/RAD-MMM/blob/main/LICENSE) | [Yes](https://github.com/NVIDIA/RAD-MMM?tab=readme-ov-file#training) | Multilingual | [Paper](https://arxiv.org/pdf/2301.10335.pdf) | [Jupyter Notebook](https://github.com/NVIDIA/RAD-MMM/blob/main/inference.ipynb), [Webpage](https://research.nvidia.com/labs/adlr/projects/radmmm/) | | |
|
| RAD-TTS | [Repo](https://github.com/NVIDIA/radtts) | [GDrive](https://drive.google.com/file/d/1Rb2VMUwQahGrnpFSlAhCPh7OpDN3xgOr/view?usp=sharing) | [MIT](https://github.com/NVIDIA/radtts/blob/main/LICENSE) | [Yes](https://github.com/NVIDIA/radtts#training-radtts-without-pitch-and-energy-conditioning) | English | [Paper](https://openreview.net/pdf?id=0NQwnnwAORi) | [GH Pages](https://nv-adlr.github.io/RADTTS) | | |
|
| Silero | [Repo](https://github.com/snakers4/silero-models) | [GH links](https://github.com/snakers4/silero-models/blob/master/models.yml) | [CC BY-NC-SA](https://github.com/snakers4/silero-models/blob/master/LICENSE) | [No](https://github.com/snakers4/silero-models/discussions/78) | Multilingual | Not Available | Not Available | [Non Commercial](https://github.com/snakers4/silero-models/wiki/Licensing-and-Tiers) | |
|
| StyleTTS 2 | [Repo](https://github.com/yl4579/StyleTTS2) | [Hub](https://huggingface.co/yl4579/StyleTTS2-LibriTTS/tree/main) | [MIT](https://github.com/yl4579/StyleTTS2/blob/main/LICENSE) | [Yes](https://github.com/yl4579/StyleTTS2#finetuning) | English | [Paper](https://arxiv.org/abs/2306.07691) | [π€ Space](https://huggingface.co/spaces/styletts2/styletts2) | GPL-licensed phonemizer | |
|
| Tacotron 2 | [Unofficial Repo](https://github.com/NVIDIA/tacotron2) | [GDrive](https://drive.google.com/file/d/1c5ZTuT7J08wLUoVZ2KkUs_VdZuJ86ZqA/view) | [BSD-3](https://github.com/NVIDIA/tacotron2/blob/master/LICENSE) | [Yes](https://github.com/NVIDIA/tacotron2/tree/master?tab=readme-ov-file#training) | English | [Paper](https://arxiv.org/abs/1712.05884) | [Webpage](https://google.github.io/tacotron/publications/tacotron2/) | | |
|
| TorToiSe TTS | [Repo](https://github.com/neonbjb/tortoise-tts) | [Hub](https://huggingface.co/jbetker/tortoise-tts-v2) | [Apache 2.0](https://github.com/neonbjb/tortoise-tts/blob/main/LICENSE) | [Yes](https://git.ecker.tech/mrq/tortoise-tts) | English | [Technical report](https://arxiv.org/abs/2305.07243) | [π€ Space](https://huggingface.co/spaces/Manmay/tortoise-tts) | | |
|
| TTTS | [Repo](https://github.com/adelacvg/ttts) | [Hub](https://huggingface.co/adelacvg/TTTS) | [MPL 2.0](https://github.com/adelacvg/ttts/blob/master/LICENSE) | No | Multilingual | Not Available | [Colab](https://colab.research.google.com/github/adelacvg/ttts/blob/master/demo.ipynb), [π€ Space](https://huggingface.co/spaces/mrfakename/TTTS) | | |
|
| VALL-E | [Unofficial Repo](https://github.com/enhuiz/vall-e) | Not Available | [MIT](https://github.com/enhuiz/vall-e/blob/main/LICENSE) | [Yes](https://github.com/enhuiz/vall-e#get-started) | NA | [Paper](https://arxiv.org/abs/2301.02111) | Not Available | | |
|
| VITS/ MMS-TTS | [Repo](https://github.com/huggingface/transformers/tree/7142bdfa90a3526cfbed7483ede3afbef7b63939/src/transformers/models/vits) | [Hub](https://huggingface.co/kakao-enterprise) / [MMS](https://huggingface.co/models?search=mms-tts) | [Apache 2.0](https://github.com/huggingface/transformers/blob/main/LICENSE) | [Yes](https://github.com/ylacombe/finetune-hf-vits) | English | [Paper](https://arxiv.org/abs/2106.06103) | [π€ Space](https://huggingface.co/spaces/kakao-enterprise/vits) | GPL-licensed phonemizer | |
|
| WhisperSpeech | [Repo](https://github.com/collabora/WhisperSpeech) | [Hub](https://huggingface.co/collabora/whisperspeech) | [MIT](https://github.com/collabora/WhisperSpeech/blob/main/LICENSE) | No | Multilingual | Not Available | [π€ Space](https://huggingface.co/spaces/collabora/WhisperSpeech), [Recordings](https://github.com/collabora/WhisperSpeech/blob/main/README.md), [Colab](https://colab.research.google.com/github/collabora/WhisperSpeech/blob/8168a30f26627fcd15076d10c85d9e33c52204cf/Inference%20example.ipynb) | | |
|
| XTTS | [Repo](https://github.com/coqui-ai/TTS) | [Hub](https://huggingface.co/coqui/XTTS-v2) | [CPML](https://coqui.ai/cpml) | [Yes](https://docs.coqui.ai/en/latest/models/xtts.html#training) | Multilingual | [Paper](https://arxiv.org/abs/2406.04904) | [π€ Space](https://huggingface.co/spaces/coqui/xtts) | Non Commercial | |
|
| xVASynth | [Repo](https://github.com/DanRuta/xVA-Synth) | [Hub](https://huggingface.co/Pendrokar/xvapitch) | [GPL-3.0](https://github.com/DanRuta/xVA-Synth/blob/master/LICENSE.md) | [Yes](https://github.com/DanRuta/xva-trainer) | Multilingual | Not Available | [π€ Space](https://huggingface.co/spaces/Pendrokar/xVASynth) | Base model trained on non-permissive datasets | |
|
|
|
* *Multilingual* - The set of supported languages changes over time; check the Space and the Hub to see which languages are supported
|
* *ALL* - Supports all natural languages; may not support artificial/constructed languages
|
|
|
To find a model for a specific language, filter the TTS models hosted on Hugging Face: <https://huggingface.co/models?pipeline_tag=text-to-speech&language=en&sort=trending>
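As a sketch of how that filter link is composed, the query parameters (taken from the URL above) can be built programmatically, e.g. to swap in another language code; the function name here is ours:

```python
from urllib.parse import urlencode

def tts_models_url(language: str = "en", sort: str = "trending") -> str:
    """Build a Hugging Face model-search URL filtered to TTS models.

    Parameter names mirror the query string of the link above;
    only the language code is meant to vary.
    """
    params = {
        "pipeline_tag": "text-to-speech",
        "language": language,
        "sort": sort,
    }
    return "https://huggingface.co/models?" + urlencode(params)

# e.g. German models: tts_models_url("de")
print(tts_models_url("en"))
```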
|
|
|
--- |
|
|
|
## Legend |
|
|
|
For the TTS capability table above. Open the [viewer](../../viewer/) in another window, or even on another monitor, to keep both it and the legend in view.
|
|
|
* Processor β‘ - Hardware the inference is done on:
|
* CPU (CPU**s** = multithreaded) - All models can be run on CPU; to qualify for the CPU tag, the real-time factor should be below 2.0, though some leeway is given if the model supports audio streaming
|
* CUDA by *NVIDIA*β’ |
|
* ROCm by *AMD*β’ |
|
* Phonetic alphabet π€ - Phonetic transcription that lets you control the pronunciation of words before inference
|
* [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet |
|
* [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - Phonetic alphabet focused on American English
|
* Insta-clone π₯ - Zero-shot model for quick voice cloning |
|
* Emotion control π - Able to force an emotional state of the speaker
|
* π <# emotions> ( π‘ anger; π happiness; π sadness; π― surprise; π€« whispering; π friendliness )
|
* strict insta-clone switch ππ₯ - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to blend between states
|
* strict control through prompt ππ - the emotion is set via a prompt input parameter
|
* Prompting π - Also a side effect of narrator-based datasets and a way to affect the emotional state
|
* π - Prompt as a separate input parameter |
|
* π£π - The prompt itself is also spoken by TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion) |
|
* Streaming support π - Can play back audio while it is still being generated
|
* Speech control π - Ability to change the pitch, duration, etc. of the generated speech, for the whole utterance and/or per phoneme
|
* Voice conversion / Speech-To-Speech π¦ - Streaming support implies real-time S2S; S2T=>T2S does not count |
|
* Longform synthesis π - Able to synthesize whole paragraphs, as some TTS models tend to break down beyond a certain audio length
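The real-time factor behind the CPU tag above is synthesis time divided by the duration of the audio produced (below 1.0 means faster than real time). A minimal sketch of that rule of thumb; the function name and the exact streaming leeway (3.0 here) are our assumptions, only the 2.0 threshold comes from the legend:

```python
def qualifies_for_cpu_tag(synthesis_seconds: float,
                          audio_seconds: float,
                          supports_streaming: bool = False) -> bool:
    """Rule of thumb from the legend: real-time factor below 2.0
    qualifies for the CPU tag, with some leeway (assumed here to be
    up to 3.0) when the model can stream audio while generating."""
    rtf = synthesis_seconds / audio_seconds  # real-time factor
    limit = 3.0 if supports_streaming else 2.0
    return rtf < limit

# e.g. 12 s to synthesize 10 s of audio -> RTF 1.2, qualifies
print(qualifies_for_cpu_tag(12.0, 10.0))
```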
|
|
|
Example of how the proprietary ElevenLabs would look if added to the capabilities table:
|
| Name | Processor<br>β‘ | Phonetic alphabet<br>π€ | Insta-clone<br>π₯ | Emotional control<br>π | Prompting<br>π | Speech control<br>π | Streaming support<br>π | Voice conversion<br>π¦ | Longform synthesis<br>π | |
|
|---|---|---|---|---|---|---|---|---|---|
|
|ElevenLabs|CUDA|IPA, ARPAbet|π₯|ππ|π£π|π stability, voice similarity|π|π¦|π Projects| |
|
|
|
More info on the capabilities table can be found in the [GitHub issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).
|
|
|
Please create pull requests to update the info on the models. |
|
|
|
--- |