---
language:
- sw
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-large-v2-sw
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sw
split: test
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 30.7
---
## Model
* Name: Whisper Large-v2 Swahili
* Description: Whisper large-v2 weights for Swahili speech-to-text, fine-tuned and evaluated on normalized data.
* Dataset:
- Train and validation splits for Swahili subsets of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).
- Train, validation and test splits for Swahili subsets of [Google Fleurs](https://huggingface.co/datasets/google/fleurs/).
* Performance: **30.7 WER** on the Common Voice 11.0 Swahili test split (see the evaluation sketch below).
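
The reported score was computed on normalized text. Below is a minimal sketch of how a comparable evaluation could be run with the `datasets` and `evaluate` libraries; it omits the text normalization step, so the exact number may differ slightly.

```python
import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = WhisperProcessor.from_pretrained("hedronstone/whisper-large-v2-sw")
model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-large-v2-sw").to(device)

# Whisper expects 16 kHz audio, so resample the test split on the fly.
test = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

wer_metric = evaluate.load("wer")
predictions, references = [], []

for sample in test:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(inputs.input_features.to(device))
    predictions.append(processor.batch_decode(ids, skip_special_tokens=True)[0])
    references.append(sample["sentence"])

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
```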
## Weights
* Date of release: 12.09.2022
* License: MIT
## Usage
To use these weights with Hugging Face's `transformers` library, load the model as follows:
```python
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-large-v2-sw")
```
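
For end-to-end transcription, the `automatic-speech-recognition` pipeline wraps the model and its processor in one call. The sketch below is illustrative; `audio.wav` is a placeholder for any local recording, and the pipeline resamples other sampling rates to 16 kHz automatically.

```python
import torch
from transformers import pipeline

# Build an ASR pipeline around the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="hedronstone/whisper-large-v2-sw",
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local Swahili audio file (placeholder path).
print(asr("audio.wav")["text"])
```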