---
language:
- sn
license: apache-2.0
library_name: peft
tags:
- whisper-event
- peft-lora
- generated_from_trainer
base_model: openai/whisper-tiny
datasets:
- Kittech/kittech_shona_dataset
metrics:
- wer
model-index:
- name: Whisper Small Sn - Bright Chirindo
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Kingdom Truth International Ministries sermons
      type: Kittech/kittech_shona_dataset
      config: sn_zw
      split: test
      args: sn_zw
    metrics:
    - type: wer
      value: 208.88577256501785
      name: Wer
---

# Whisper Small Sn - Bright Chirindo

This model is a fine-tuned version of [kittech/whisper-tiny-sn](https://huggingface.co/kittech/whisper-tiny-sn) on the Kingdom Truth International Ministries sermons dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9243
- Wer: 208.8858
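
Since this is a PEFT LoRA adapter rather than a full checkpoint, it must be loaded on top of a base Whisper model. A minimal inference sketch, assuming the adapter is published under the hypothetical repo id `kittech/whisper-tiny-sn-lora` (substitute the actual repository) on top of the `openai/whisper-tiny` base named in the card metadata:

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "openai/whisper-tiny"
ADAPTER_ID = "kittech/whisper-tiny-sn-lora"  # hypothetical repo id

processor = WhisperProcessor.from_pretrained(BASE_ID, language="shona", task="transcribe")
base_model = WhisperForConditionalGeneration.from_pretrained(BASE_ID)

# Apply the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

def transcribe(audio):
    """Transcribe a 1-D float waveform sampled at 16 kHz (Whisper's expected rate)."""
    inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```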

## Model description

This repository holds a PEFT LoRA adapter for Whisper, fine-tuned for automatic speech recognition of Shona (`sn`). Only the low-rank adapter weights are trained and stored here; they are applied on top of the base Whisper checkpoint at load time.

## Intended uses & limitations

The adapter is intended for transcribing Shona speech, particularly sermon-style audio similar to the training data. With an evaluation WER of roughly 209%, it should be treated as an experimental checkpoint rather than a usable transcription model.

## Training and evaluation data

The model was trained and evaluated on the Kingdom Truth International Ministries sermons dataset, published as [Kittech/kittech_shona_dataset](https://huggingface.co/datasets/Kittech/kittech_shona_dataset) (config `sn_zw`); the `test` split was used for evaluation.
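
A minimal sketch of loading that evaluation split, with the config and split names taken from the model-index metadata above:

```python
from datasets import load_dataset

# Config "sn_zw" and the "test" split come from the card's model-index entry.
dataset = load_dataset("Kittech/kittech_shona_dataset", "sn_zw")
test_split = dataset["test"]
```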

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
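
These settings map onto `transformers` training arguments roughly as follows. This is a hedged reconstruction: `output_dir` and anything not in the list above is illustrative.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-sn-lora",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default
    # AdamW settings, so no optimizer overrides are needed.
)
```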

### Training results

| Training Loss | Epoch    | Step | Validation Loss | Wer      |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 2.5155        | 357.1429 | 2500 | 2.9250          | 212.4044 |
| 2.1292        | 714.2857 | 5000 | 2.9243          | 208.8858 |
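
WER is computed as (substitutions + deletions + insertions) divided by the number of reference words, so it can exceed 100% when the model inserts many spurious words, as happens here. A minimal sketch of the usual computation with the `evaluate` library, using made-up strings:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["mhoro nyika yose uye zvakare"]  # hypothetical model output
references = ["mhoro nyika"]                    # hypothetical ground truth

# compute() returns a fraction; the card reports it scaled to a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 3 insertions over 2 reference words -> 150%
```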


### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.2.dev0
- Tokenizers 0.19.1