  data_files:
  - split: validation
    path: data/validation-*
license: mit
language:
- ar
size_categories:
- n<1K
---

# Arabic Speaker Embeddings extracted from ASC and ClArTTS

There is one speaker embedding for each utterance in the validation set of both datasets. The speaker embeddings are 512-element X-vectors.

[Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) has 100 files for a single male speaker, and [ClArTTS](https://www.clartts.com/) has 205 files for a single male speaker.

The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
Usage:
```python
import torch
from datasets import load_dataset

# Load the validation split of the embeddings dataset
embeddings_dataset = load_dataset("herwoww/arabic_xvect_embeddings", split="validation")

# Each entry holds a 512-element x-vector; unsqueeze adds a batch dimension
speaker_embedding = torch.tensor(embeddings_dataset[1]["speaker_embeddings"]).unsqueeze(0)
```
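The `.unsqueeze(0)` call turns the bare 512-element x-vector into a `(1, 512)` tensor, since models that consume speaker embeddings typically expect a leading batch dimension. A minimal sketch of the shape change, using a random stand-in vector rather than a real embedding from the dataset:

```python
import torch

# Stand-in for one 512-element x-vector from this dataset
# (random values, for illustration only)
x_vector = torch.randn(512)

speaker_embedding = x_vector.unsqueeze(0)  # add a batch dimension

print(x_vector.shape)           # torch.Size([512])
print(speaker_embedding.shape)  # torch.Size([1, 512])
```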