You guessed right :-)
There is a bug in the chat_template in the tokenizer_config.json for meta-llama/Llama-2-7b-chat-hf and meta-llama/Llama-2-70b-chat-hf. It is easy to fix, but whom should I inform so that it can be fixed?
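For anyone who wants to see what the template actually produces, here is a minimal sketch (assuming you have access to the gated meta-llama repo) that renders the chat_template from tokenizer_config.json as a raw prompt string, so it can be compared against the expected Llama 2 prompt format. The conversation content is just a hypothetical example to exercise the template.

```python
from transformers import AutoTokenizer

# Loads the tokenizer, including the chat_template stored in tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Hypothetical messages, used only to exercise the Jinja template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# tokenize=False returns the rendered prompt string instead of token IDs,
# which makes any formatting bug in the template directly visible.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```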
NICE! Does this apply to all models in serverless and deployed endpoints, or just models that have a correct chat_template in tokenizer_config.json?