---
language:
- sn
license: apache-2.0
library_name: peft
tags:
- whisper-event
- peft-lora
- generated_from_trainer
base_model: openai/whisper-tiny
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Tiny Sn - Bright Chirindo
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Google Fleurs
      type: google/fleurs
      config: sn_zw
      split: None
      args: sn_zw
    metrics:
    - type: wer
      value: 95.27619047619048
      name: Wer
---


# Whisper Tiny Sn - Bright Chirindo

This model is a PEFT LoRA fine-tune of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Google Fleurs dataset (Shona, `sn_zw`).
It achieves the following results on the evaluation set:
- Loss: 1.9909
- Wer: 95.2762

## Model description

This repository holds PEFT LoRA adapter weights for Whisper Tiny, trained for Shona (`sn`) automatic speech recognition. The base model's weights are unchanged; the adapter must be loaded on top of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) at inference time.

## Intended uses & limitations

The model is intended for transcribing Shona speech. Given the high final WER of 95.28 on the Fleurs evaluation set, it should be treated as an experimental baseline rather than a production-ready transcriber.
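
A minimal inference sketch, assuming the adapter id (here the placeholder `"path/to/this-adapter-repo"`) and a 16 kHz mono input:

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then attach the LoRA adapter weights on top.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "path/to/this-adapter-repo")  # placeholder id
model.eval()

# Shona is in Whisper's language list; the processor builds matching prompt tokens.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-tiny", language="shona", task="transcribe"
)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features, language="shona", task="transcribe"
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```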

## Training and evaluation data

Training and evaluation used the Shona configuration (`sn_zw`) of the [Google Fleurs](https://huggingface.co/datasets/google/fleurs) dataset.
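
The data can be loaded directly from the Hub; a short sketch:

```python
from datasets import Audio, load_dataset

# Shona configuration of Fleurs; cast audio to Whisper's 16 kHz input rate.
fleurs = load_dataset("google/fleurs", "sn_zw")
fleurs = fleurs.cast_column("audio", Audio(sampling_rate=16000))

sample = fleurs["train"][0]
print(sample["transcription"])         # reference text
print(sample["audio"]["array"].shape)  # raw waveform samples
```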

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
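
As a rough sketch, these settings map onto `Seq2SeqTrainingArguments` as follows (`output_dir` is a placeholder; the Adam betas and epsilon above are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-sn",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```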

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.087         | 3.0164  | 1000 | 2.4026          | 109.4095 |
| 1.8305        | 6.0328  | 2000 | 2.1613          | 101.2419 |
| 1.7145        | 9.0492  | 3000 | 2.0536          | 99.8705  |
| 1.6314        | 13.0044 | 4000 | 2.0050          | 99.0095  |
| 1.665         | 16.0208 | 5000 | 1.9909          | 95.2762  |


### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.2.dev0
- Tokenizers 0.19.1