---
license: apache-2.0
task_categories:
- text-to-speech
language:
- en
tags:
- kokoro-82M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: voices
dtype: string
- name: tensor
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 11556949
num_examples: 11
download_size: 6344291
dataset_size: 11556949
---
# Kokoro-82M Voices
This dataset contains all the voices available in [hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M).
This dataset provides the voices in three different formats:

1. Individual voice embeddings, one JSON file per voice.
2. A single JSON file (`voices.json`) containing all the voices in one JSON object.
3. Parquet files for use with the `datasets` library.

The voice names are the same as the `.pth` file names in the original model repository, listed below.
```python
voices = [
    "af",
    "af_bella",
    "af_nicole",
    "af_sarah",
    "af_sky",
    "am_adam",
    "am_michael",
    "bf_emma",
    "bf_isabella",
    "bm_george",
    "bm_lewis",
]
```
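To check which per-voice JSON files exist in the repository, the file listing can be queried directly. This is a small sketch using `huggingface_hub.list_repo_files`; the file names in the output comment are an expectation based on the voice names above, not a guarantee.
```python
# pip install -U -q huggingface_hub
from huggingface_hub import list_repo_files

# List every file in the dataset repository, then keep the per-voice JSON files.
files = list_repo_files("ecyht2/kokoro-82M-voices", repo_type="dataset")
voice_files = sorted(f for f in files if f.endswith(".json") and f != "voices.json")
print(voice_files)
# Expected to contain entries such as "af.json", "af_bella.json", ...
```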
For example, the "af" voice embedding can be obtained from the "af.json" file, which stores the embedding as a JSON array. The same "af" voice can be obtained from the combined file by accessing the "af" key in the JSON object. See `Usage` for more information on how to retrieve the embeddings.
# Usage
The examples below show how to download the "af" voice embedding in each format.
## Downloading the Combined JSON (All Voices)
```python
# pip install -U -q huggingface_hub numpy
import json
import numpy as np
from huggingface_hub import hf_hub_download
file = hf_hub_download(
    repo_id="ecyht2/kokoro-82M-voices",
    repo_type="dataset",
    filename="voices.json",
)
with open(file, "r", encoding="utf-8") as f:
    voices = json.load(f)
voice = np.array(voices.get("af")) # Change "af" to the voice that you want
print(voice.shape)
# (511, 1, 256)
```
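If the embedding is needed as a PyTorch tensor (the upstream Kokoro-82M repository distributes its voices as `.pth` tensor files), the NumPy array loaded above can be converted. A minimal sketch, assuming `voice` is the array from the previous example:
```python
# pip install -U -q torch
import torch

# Convert the NumPy array into a float32 PyTorch tensor.
voicepack = torch.from_numpy(voice).float()
print(voicepack.shape)
# torch.Size([511, 1, 256])
```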
## Downloading a Single Voice JSON
```python
# pip install -U -q huggingface_hub numpy
import json
import numpy as np
from huggingface_hub import hf_hub_download
file = hf_hub_download(
    repo_id="ecyht2/kokoro-82M-voices",
    repo_type="dataset",
    filename="af.json",  # Change "af" to the voice that you want
)
with open(file, "r", encoding="utf-8") as f:
    voice = json.load(f)
voice = np.array(voice)
print(voice.shape)
# (511, 1, 256)
```
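Several voices can be fetched the same way by looping over the names listed above. A short sketch (the three voice names are arbitrary picks from the list in this README):
```python
# pip install -U -q huggingface_hub numpy
import json

import numpy as np
from huggingface_hub import hf_hub_download

embeddings = {}
for name in ["af", "bf_emma", "bm_george"]:  # any names from the voices list above
    path = hf_hub_download(
        repo_id="ecyht2/kokoro-82M-voices",
        repo_type="dataset",
        filename=f"{name}.json",
    )
    with open(path, "r", encoding="utf-8") as f:
        embeddings[name] = np.array(json.load(f))

print({name: emb.shape for name, emb in embeddings.items()})
# e.g. {'af': (511, 1, 256), ...}
```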
## Using Hugging Face `datasets`
```python
# pip install -U -q datasets numpy
import numpy as np
from datasets import load_dataset

ds = load_dataset("ecyht2/kokoro-82M-voices", split="train")
for row in ds:
    if row["voices"] == "af":  # Change "af" to the voice that you want
        voice = np.array(row["tensor"])
        break
print(voice.shape)
# (511, 1, 256)
```
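Instead of iterating manually, the row can also be selected with `datasets.Dataset.filter`. A minimal sketch of the same lookup:
```python
# pip install -U -q datasets numpy
import numpy as np
from datasets import load_dataset

ds = load_dataset("ecyht2/kokoro-82M-voices", split="train")

# Keep only the row whose "voices" column matches the requested name.
af_rows = ds.filter(lambda row: row["voices"] == "af")
voice = np.array(af_rows[0]["tensor"])
print(voice.shape)
# (511, 1, 256)
```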