---
tags:
  - audio
  - text-to-speech
  - onnx
base_model:
  - hexgrad/Kokoro-82M
inference: false
language: en
license: apache-2.0
library_name: txtai
---

# Kokoro Base (82M) Model for ONNX

Kokoro 82M exported to ONNX. This model is the same ONNX file that's in the base repository, [hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M). The voices file is from the [kokoro-onnx](https://github.com/thewh1teagle/kokoro-onnx) repository.

## Usage with txtai

txtai has a built-in Text to Speech (TTS) pipeline that makes using this model easy.

Note: This requires txtai >= 8.3.0. Install from GitHub until that release.

```python
import soundfile as sf

from txtai.pipeline import TextToSpeech

# Build pipeline
tts = TextToSpeech("NeuML/kokoro-base-onnx")

# Generate speech
speech, rate = tts("Say something here")

# Write to file
sf.write("out.wav", speech, rate)
```
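
Kokoro ships with multiple voices. A minimal sketch of selecting one, assuming the pipeline's `speaker` argument accepts a voice name from `voices.json` (the `af` voice is the one used in the ONNX example below):

```python
# Sketch: pass a voice name via the speaker argument (assumption; the
# available names depend on the bundled voices.json file)
speech, rate = tts("Say something here", speaker="af")

# Write to file
sf.write("out-af.wav", speech, rate)
```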

## Usage with ONNX

This model can also be run directly with ONNX, provided the input text is tokenized. Tokenization can be done with ttstokenizer, a permissively licensed library with no external dependencies (such as espeak).

Note that the txtai pipeline has additional functionality, such as batching large inputs together, that would need to be duplicated with this method (see the sketch after the example below).

```python
import json
import numpy as np
import onnxruntime
import soundfile as sf

from ttstokenizer import IPATokenizer

# This example assumes the files have been downloaded locally
with open("kokoro-base-onnx/voices.json", "r", encoding="utf-8") as f:
    voices = json.load(f)

# Create model
model = onnxruntime.InferenceSession(
    "kokoro-base-onnx/model.onnx",
    providers=["CPUExecutionProvider"]
)

# Create tokenizer
tokenizer = IPATokenizer()

# Tokenize inputs
inputs = tokenizer("Say something here")

# Get speaker style array for the "af" voice
speaker = np.array(voices["af"], dtype=np.float32)

# Generate speech. Tokens are padded with 0 on both ends and the
# style vector is selected by input token length.
outputs = model.run(None, {
    "tokens": [[0, *inputs, 0]],
    "style": speaker[len(inputs)],
    "speed": np.ones(1, dtype=np.float32)
})

# Write 24 kHz audio to file
sf.write("out.wav", outputs[0], 24000)
```
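
As noted above, the txtai pipeline batches large inputs automatically. A minimal sketch of doing this manually, reusing the `model`, `tokenizer` and `voices` loaded in the example above, splits the text into chunks, runs each chunk through the model and concatenates the generated audio. The naive sentence split here is only for illustration:

```python
# Sketch: manual batching of a longer input, assuming model, tokenizer
# and voices from the example above are already in scope
text = "First sentence. Second sentence. Third sentence."

segments = []
for chunk in text.split(". "):
    # Tokenize this chunk
    inputs = tokenizer(chunk)
    speaker = np.array(voices["af"], dtype=np.float32)

    # Generate speech for this chunk
    outputs = model.run(None, {
        "tokens": [[0, *inputs, 0]],
        "style": speaker[len(inputs)],
        "speed": np.ones(1, dtype=np.float32)
    })

    segments.append(outputs[0])

# Concatenate the generated audio and write to file
sf.write("out.wav", np.concatenate(segments), 24000)
```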