Ailurus committed
Commit 9e19927 · verified · 1 Parent(s): 9bff9b8

End of training

Files changed (3):
  1. README.md +6 -30
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -6,26 +6,9 @@ license: apache-2.0
 base_model: openai/whisper-tiny
 tags:
 - generated_from_trainer
-datasets:
-- mozilla-foundation/common_voice_17_0
-metrics:
-- wer
 model-index:
 - name: Whisper Tiny finetuned RU
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: Common Voice 17.0
-      type: mozilla-foundation/common_voice_17_0
-      config: ru
-      split: None
-      args: 'config: ru, split: test'
-    metrics:
-    - name: Wer
-      type: wer
-      value: 41.04365802952346
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,10 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Whisper Tiny finetuned RU
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5156
-- Wer: 41.0437
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
 
 ## Model description
 
@@ -55,27 +35,23 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 5e-06
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 250
-- training_steps: 4000
+- training_steps: 12000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Wer     |
-|:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.511         | 0.2187 | 2000 | 0.5548          | 43.4606 |
-| 0.4345        | 0.4374 | 4000 | 0.5156          | 41.0437 |
 
 
 ### Framework versions
 
 - Transformers 4.45.2
 - Pytorch 2.4.0
-- Datasets 3.0.1
-- Tokenizers 0.20.0
+- Datasets 3.1.0
+- Tokenizers 0.20.3
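The updated hyperparameters above can be sketched as a Hugging Face `transformers` training configuration. This is a hypothetical reconstruction, not the author's actual script: the `output_dir` name is assumed, and logging/evaluation settings from the original run are not shown in this diff.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the post-commit hyperparameters listed in the README diff.
# output_dir is an illustrative assumption; all numeric values come from the diff.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-finetuned-ru",  # assumed name, not in the diff
    learning_rate=5e-06,                     # lowered from 1e-05
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=12000,                         # raised from 4000
    fp16=True,                               # "Native AMP" mixed precision
)
```

Note that the default optimizer matches the card's "Adam with betas=(0.9,0.999) and epsilon=1e-08", so no explicit optimizer arguments are needed.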
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:482df5aa6f2f66530207ac3357e3a71b4330a64ba36bf1507e05d62b86c1cb38
+oid sha256:d4a4a55fd78a57a381489928ff8a6f751912687729c843bfcb3f56fd44772ba6
 size 151061672
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f44d5ff5d183f289afa6db06e2040fe23a00c7b75425dac87aecae9eb631dc83
+oid sha256:6b61c5d7519a277e7f07cdb3680a9afb81504f270bbf1b810aa31a68686cef36
 size 5368
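The two binary files above are stored as Git LFS pointers: the repository tracks only the `version`, `oid`, and `size` lines, while the actual bytes live in LFS storage. A minimal sketch of reading such a pointer (the helper name is ours, not part of any library):

```python
# Parse a Git LFS pointer file (spec v1) into its three fields.
def parse_lfs_pointer(text: str) -> dict:
    """Return a dict with 'version', 'oid_algo', 'oid_hex', and integer 'size'."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])          # byte size of the real file
    algo, _, digest = fields["oid"].partition(":")  # e.g. "sha256:<hex>"
    fields["oid_algo"] = algo
    fields["oid_hex"] = digest
    return fields

# The new model.safetensors pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d4a4a55fd78a57a381489928ff8a6f751912687729c843bfcb3f56fd44772ba6
size 151061672
"""
info = parse_lfs_pointer(pointer)
```

After downloading the real file, its SHA-256 digest should match `oid_hex`; the unchanged `size` in both diffs is expected, since retraining the same architecture produces a checkpoint of identical byte length.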