theothertom committed (verified)
Commit c25ff1c · Parent: e8af8ed

End of training
README.md CHANGED
@@ -4,26 +4,11 @@ base_model: openai/whisper-small
 tags:
 - hf-asr-leaderboard
 - generated_from_trainer
-datasets:
-- arrow
 metrics:
 - wer
 model-index:
 - name: whisper-small-indian_eng
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: arrow
-      type: arrow
-      config: default
-      split: validation
-      args: default
-    metrics:
-    - name: Wer
-      type: wer
-      value: 19.101123595505616
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-small-indian_eng
 
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the arrow dataset.
+This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0717
-- Wer: 19.1011
+- Loss: 0.3169
+- Wer: 10.6061
 
 ## Model description
 
@@ -53,34 +38,50 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-06
-- train_batch_size: 8
-- eval_batch_size: 4
+- learning_rate: 1e-05
+- train_batch_size: 16
+- eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 25
-- training_steps: 608
+- lr_scheduler_warmup_steps: 100
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer     |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 2.4039        | 10.0  | 50   | 2.6140          | 22.4719 |
-| 0.8763        | 20.0  | 100  | 1.2662          | 15.7303 |
-| 0.652         | 30.0  | 150  | 1.1479          | 13.4831 |
-| 0.5727        | 40.0  | 200  | 1.1299          | 16.8539 |
-| 0.5206        | 50.0  | 250  | 1.1230          | 17.9775 |
-| 0.4886        | 60.0  | 300  | 1.1166          | 16.8539 |
-| 0.4644        | 70.0  | 350  | 1.1085          | 14.6067 |
-| 0.4439        | 80.0  | 400  | 1.0987          | 14.6067 |
-| 0.4293        | 90.0  | 450  | 1.0887          | 15.7303 |
-| 0.4162        | 100.0 | 500  | 1.0808          | 17.9775 |
-| 0.416         | 110.0 | 550  | 1.0735          | 19.1011 |
-| 0.4101        | 120.0 | 600  | 1.0717          | 19.1011 |
+| Training Loss | Epoch   | Step | Validation Loss | Wer     |
+|:-------------:|:-------:|:----:|:---------------:|:-------:|
+| 0.5504        | 1.5873  | 100  | 0.4581          | 10.6061 |
+| 0.058         | 3.1746  | 200  | 0.2114          | 8.8154  |
+| 0.0126        | 4.7619  | 300  | 0.2365          | 8.1267  |
+| 0.0078        | 6.3492  | 400  | 0.2513          | 7.9890  |
+| 0.0026        | 7.9365  | 500  | 0.2596          | 7.1625  |
+| 0.0027        | 9.5238  | 600  | 0.2676          | 8.1267  |
+| 0.0031        | 11.1111 | 700  | 0.2842          | 7.3003  |
+| 0.0013        | 12.6984 | 800  | 0.2707          | 18.7328 |
+| 0.0004        | 14.2857 | 900  | 0.2814          | 13.6364 |
+| 0.0006        | 15.8730 | 1000 | 0.2806          | 11.1570 |
+| 0.0009        | 17.4603 | 1100 | 0.2906          | 11.0193 |
+| 0.0004        | 19.0476 | 1200 | 0.2948          | 11.1570 |
+| 0.0003        | 20.6349 | 1300 | 0.2984          | 10.8815 |
+| 0.0002        | 22.2222 | 1400 | 0.3008          | 10.8815 |
+| 0.0003        | 23.8095 | 1500 | 0.2992          | 10.7438 |
+| 0.0001        | 25.3968 | 1600 | 0.3040          | 10.8815 |
+| 0.0002        | 26.9841 | 1700 | 0.3088          | 10.7438 |
+| 0.0001        | 28.5714 | 1800 | 0.3077          | 10.6061 |
+| 0.0001        | 30.1587 | 1900 | 0.3098          | 10.4683 |
+| 0.0001        | 31.7460 | 2000 | 0.3111          | 10.4683 |
+| 0.0001        | 33.3333 | 2100 | 0.3119          | 10.4683 |
+| 0.0001        | 34.9206 | 2200 | 0.3129          | 10.4683 |
+| 0.0001        | 36.5079 | 2300 | 0.3142          | 10.4683 |
+| 0.0001        | 38.0952 | 2400 | 0.3146          | 10.4683 |
+| 0.0001        | 39.6825 | 2500 | 0.3153          | 10.6061 |
+| 0.0001        | 41.2698 | 2600 | 0.3158          | 10.6061 |
+| 0.0001        | 42.8571 | 2700 | 0.3162          | 10.6061 |
+| 0.0001        | 44.4444 | 2800 | 0.3167          | 10.6061 |
+| 0.0001        | 46.0317 | 2900 | 0.3169          | 10.6061 |
+| 0.0001        | 47.6190 | 3000 | 0.3169          | 10.6061 |
 
 
 ### Framework versions
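The hyperparameter changes above (learning_rate 1e-05, warmup 100, 3000 total steps, linear scheduler) imply a specific learning-rate trajectory. As a rough sketch, here is how the Trainer's "linear" schedule with warmup behaves; the function name and the `STEPS_PER_EPOCH` constant are illustrative, with the latter inferred from the results table (epoch 47.6190 at step 3000):

```python
def linear_warmup_linear_decay(step, warmup_steps=100, total_steps=3000, peak_lr=1e-05):
    """Linear warmup from 0 to peak_lr over warmup_steps, then linear
    decay back to 0 at total_steps (the 'linear' lr_scheduler_type)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# Peak is reached exactly when warmup ends; zero at the final step.
print(linear_warmup_linear_decay(100))   # peak: 1e-05
print(linear_warmup_linear_decay(3000))  # end of training: 0.0

# Steps per epoch implied by the results table (3000 steps / ~47.62 epochs).
STEPS_PER_EPOCH = 63
print(round(100 / STEPS_PER_EPOCH, 4))   # matches the table's first row: 1.5873
```

This also explains the epoch column: each evaluation every 100 steps advances the epoch counter by roughly 1.59.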
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:473109521bc4f90d307b06bbf87110862346e38445192f5d37f1b31842dba965
+oid sha256:4d38b64817454b247b6a49fad1fd61bfd4f442eb5229f8b4397edfebeb3749dc
 size 966995080
runs/Jun17_14-32-21_5c1a47f82491/events.out.tfevents.1718634744.5c1a47f82491.25.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3248e751bd787b4382b171c6f535c88e4bfb4e89b20168e1e76fcf5973d827d5
-size 27881
+oid sha256:57ee7cf4dde4f8eb4b853d6f44cf043d2344dbc941d1cbe5dc93e1b9789cc21b
+size 28235
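The model.safetensors and event-file diffs above change Git LFS pointer files, not the binaries themselves: Git stores a small text stub (`version` / `oid` / `size`) and LFS fetches the real object by its SHA-256. A minimal sketch of reading such a stub (the parser function is illustrative, not part of any LFS tooling):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer stub into a dict of its key/value lines.
    Each line is 'key value' separated by a single space."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new model.safetensors pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4d38b64817454b247b6a49fad1fd61bfd4f442eb5229f8b4397edfebeb3749dc
size 966995080
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])   # content address of the real ~967 MB weights file
print(info["size"])  # byte size, unchanged because the model shape is the same
```

Note that the `size` of model.safetensors is identical before and after the commit: only the weight values changed, so the two versions differ in `oid` but not in byte count.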