---
language:
- sw
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper medium Sw2 - Kiazi Bora
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sw
split: test
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 30.7
---
## Model
* Name: Whisper Large-v2 Swahili
* Description: Whisper Large-v2 weights fine-tuned for Swahili speech-to-text and evaluated on normalized transcriptions.
* Dataset:
- Train and validation splits for Swahili subsets of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).
- Train, validation and test splits for Swahili subsets of [Google Fleurs](https://huggingface.co/datasets/google/fleurs/).
* Performance: **30.7 WER** on the Common Voice 11.0 Swahili test split.
## Weights
* Date of release: 12.09.2022
* License: Apache 2.0
## Usage
To load these weights with Hugging Face's `transformers` library:
```python
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-large-v2-sw")
```
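For end-to-end transcription you also need the matching `WhisperProcessor` for feature extraction and decoding. The sketch below is a minimal example, not part of the released card: the one-second silent waveform is a stand-in for a real 16 kHz mono recording, which you would load yourself (e.g. with `librosa` or `soundfile`).

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "hedronstone/whisper-large-v2-sw"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Whisper expects 16 kHz audio; resample beforehand if your clip differs.
sampling_rate = 16000
speech = np.zeros(sampling_rate, dtype=np.float32)  # placeholder: 1 s of silence

# Convert the waveform to log-mel input features.
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

# Generate token IDs and decode them to text.
predicted_ids = model.generate(
    inputs.input_features, language="sw", task="transcribe"
)
text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(text)
```

Passing `language="sw"` and `task="transcribe"` pins the decoder prompt so the model transcribes Swahili rather than auto-detecting the language or translating to English.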