---
license: mit
---

This is the full VITS checkpoint that I pre-trained myself on the LJSpeech dataset for 100k steps for [OPEA](https://github.com/opea-project). The model was trained on an A100 GPU. It can run inference on Xeon CPUs, A100, and Intel Gaudi2, and the inference process is optimized so that a single Intel Gaudi2 card is as fast as one A100.

Since LJSpeech and the original [code](https://github.com/jaywalnut310/vits) are under the MIT license, this repo is also under the MIT license.

Please note: until the code is merged into [OPEA](https://github.com/opea-project), you should use this [repo](https://github.com/Spycsh/vits) to run inference.

```bash
# change the -d device to cpu/cuda/hpu
python inference.py -m ./G_100000.pth -c ./configs/ljs_base.json -d "hpu"
```

Converting this into a standard HuggingFace checkpoint like [facebook/mms-tts-eng](https://huggingface.co/facebook/mms-tts-eng) may be supported in the future, but there is no guarantee. The same applies to pre-training on HPU.

**Disclaimer:** This is not an official model from Intel. For any issues, please contact sihan.chen@intel.com, or raise an issue on [OPEA](https://github.com/opea-project/GenAIExamples).
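If you prefer to call the checkpoint from Python instead of the `inference.py` CLI above, the following is a minimal CPU sketch adapted from the upstream VITS inference example. It assumes you run it from a checkout of the [repo](https://github.com/Spycsh/vits) so that its `utils`, `commons`, `models`, and `text` modules are importable; the input sentence and output filename are arbitrary.

```python
# Minimal CPU inference sketch, adapted from the upstream VITS inference
# notebook (https://github.com/jaywalnut310/vits). Run from a checkout of
# https://github.com/Spycsh/vits so the local modules below resolve.
import torch
from scipy.io import wavfile

import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

# Load the hyperparameters shipped with this checkpoint.
hps = utils.get_hparams_from_file("./configs/ljs_base.json")

# Build the generator and load the 100k-step weights from this repo.
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model,
)
net_g.eval()
utils.load_checkpoint("./G_100000.pth", net_g, None)

def get_text(text, hps):
    # Convert raw text to an ID sequence, interspersed with blanks
    # when the LJSpeech config requests it.
    text_norm = text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    return torch.LongTensor(text_norm)

stn_tst = get_text("VITS is awesome!", hps)  # arbitrary example sentence
with torch.no_grad():
    x_tst = stn_tst.unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
    audio = net_g.infer(
        x_tst, x_tst_lengths,
        noise_scale=0.667, noise_scale_w=0.8, length_scale=1.0,
    )[0][0, 0].numpy()

# LJSpeech uses a 22050 Hz sampling rate (hps.data.sampling_rate).
wavfile.write("out.wav", hps.data.sampling_rate, audio)
```

For CUDA or HPU, use the `-d` flag of `inference.py` as shown above, which handles device placement for you.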