---
dataset_info:
  features:
  - name: name
    dtype: string
  - name: speaker_embeddings
    sequence: float32
  splits:
  - name: validation
    num_bytes: 634175
    num_examples: 305
  download_size: 979354
  dataset_size: 634175
license: mit
language:
- ar
size_categories:
- n<1K
task_categories:
- text-to-speech
- audio-to-audio
pretty_name: Arabic(M) Speaker Embeddings
---
|

# Arabic Speaker Embeddings extracted from ASC and ClArTTS

There is one speaker embedding for each utterance in the validation set of both datasets. The speaker embeddings are 512-element X-vectors.

[Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) has 100 files for a single male speaker and [ClArTTS](https://huggingface.co/datasets/MBZUAI/ClArTTS) has 205 files for a single male speaker.

The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
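
For reference, below is a minimal sketch of how such an x-vector can be computed with SpeechBrain. The helper name `extract_xvector`, the L2-normalization step, and the 16 kHz mono input assumption come from the linked prep script's general approach and are not part of this dataset card.

```python
# Minimal sketch, assuming speechbrain is installed and audio is 16 kHz mono.
# `extract_xvector` is a hypothetical helper, not part of this dataset.
import torch
import torch.nn.functional as F
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-xvect-voxceleb",
    run_opts={"device": "cpu"},
)

def extract_xvector(waveform: torch.Tensor) -> torch.Tensor:
    # waveform: tensor of shape (1, num_samples) sampled at 16 kHz
    with torch.no_grad():
        embedding = classifier.encode_batch(waveform)  # (1, 1, 512)
        embedding = F.normalize(embedding, dim=2)      # L2-normalize the x-vector
    return embedding.squeeze()                         # (512,)
```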

Usage:

```python
import torch
from datasets import load_dataset

embeddings_dataset = load_dataset("herwoww/arabic_xvector_embeddings", split="validation")

# Each row's "speaker_embeddings" field is a list of 512 floats (one x-vector per utterance);
# unsqueeze(0) gives a (1, 512) tensor.
speaker_embedding = torch.tensor(embeddings_dataset[1]["speaker_embeddings"]).unsqueeze(0)
```
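
One possible downstream use is passing the embedding to a SpeechT5 text-to-speech model. The sketch below assumes a SpeechT5 checkpoint suitable for Arabic is available; `"your-arabic-speecht5-checkpoint"` is a placeholder, and only `microsoft/speecht5_hifigan` is a real public checkpoint.

```python
# Hedged sketch: conditioning SpeechT5 TTS on the speaker embedding loaded above.
# "your-arabic-speecht5-checkpoint" is a placeholder, not part of this dataset card.
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("your-arabic-speecht5-checkpoint")
model = SpeechT5ForTextToSpeech.from_pretrained("your-arabic-speecht5-checkpoint")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="مرحبا بالعالم", return_tensors="pt")

# generate_speech expects a (1, 512) speaker embedding tensor
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```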