Baselhany committed · verified
Commit ca34136 · 1 Parent(s): 35d930c

Training finished
README.md CHANGED
@@ -20,9 +20,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0092
- - Wer: 0.0825
- - Cer: 0.0329
+ - Loss: 0.0071
+ - Wer: 0.0856
+ - Cer: 0.0333
 
  ## Model description
 
@@ -41,37 +41,31 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 5e-06
+ - learning_rate: 0.0001
  - train_batch_size: 16
  - eval_batch_size: 16
  - seed: 42
  - gradient_accumulation_steps: 2
  - total_train_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - num_epochs: 10
+ - num_epochs: 25
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 0.0106        | 1.0   | 344  | 0.0116          | 0.0901 | 0.0326 |
- | 0.0083        | 2.0   | 688  | 0.0080          | 0.0916 | 0.0348 |
- | 0.0067        | 3.0   | 1032 | 0.0079          | 0.0880 | 0.0340 |
- | 0.004         | 4.0   | 1376 | 0.0080          | 0.0894 | 0.0344 |
- | 0.0033        | 5.0   | 1720 | 0.0084          | 0.0871 | 0.0326 |
- | 0.0034        | 6.0   | 2064 | 0.0087          | 0.0863 | 0.0326 |
- | 0.002         | 7.0   | 2408 | 0.0091          | 0.0865 | 0.0324 |
- | 0.0021        | 8.0   | 2752 | 0.0094          | 0.0852 | 0.0325 |
- | 0.0018        | 9.0   | 3096 | 0.0092          | 0.0825 | 0.0329 |
- | 0.0015        | 10.0  | 3440 | 0.0097          | 0.0843 | 0.0324 |
+ | 0.0071        | 1.0   | 157  | 0.0073          | 0.0863 | 0.0323 |
+ | 0.0044        | 2.0   | 314  | 0.0087          | 0.0992 | 0.0348 |
+ | 0.0028        | 3.0   | 471  | 0.0102          | 0.1111 | 0.0399 |
+ | 0.0038        | 4.0   | 628  | 0.0116          | 0.1209 | 0.0419 |
 
 
  ### Framework versions
 
- - Transformers 4.44.2
- - Pytorch 2.4.1+cu121
+ - Transformers 4.47.0
+ - Pytorch 2.5.1+cu121
  - Datasets 3.2.0
- - Tokenizers 0.19.1
+ - Tokenizers 0.21.0
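The updated hyperparameters and the new results table are internally consistent, which can be checked with a little arithmetic. A sketch, assuming the standard HF Trainer accounting (effective batch = per-device batch × accumulation steps; optimizer steps per epoch taken from the table above):

```python
# Sketch: relating the hyperparameter list to the step counts in the
# new results table (assumes standard HF Trainer step accounting).
train_batch_size = 16
gradient_accumulation_steps = 2

# Matches the reported total_train_batch_size of 32.
effective_batch = train_batch_size * gradient_accumulation_steps

# The new table logs 157 optimizer steps per epoch, implying roughly
# 157 * 32 = 5024 training examples seen per epoch in this run.
steps_per_epoch = 157
approx_examples_per_epoch = steps_per_epoch * effective_batch

# Validation WER rises monotonically after epoch 1 (0.0863 -> 0.1209),
# so by WER the best checkpoint in this table is the epoch-1 one.
wer_by_epoch = {1: 0.0863, 2: 0.0992, 3: 0.1111, 4: 0.1209}
best_epoch = min(wer_by_epoch, key=wer_by_epoch.get)

print(effective_batch, approx_examples_per_epoch, best_epoch)  # 32 5024 1
```

Note that only four epochs appear in the table even though `num_epochs` is 25, which may indicate early stopping or an interrupted run; the card itself does not say.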
generation_config.json CHANGED
@@ -244,5 +244,5 @@
    "transcribe": 50359,
    "translate": 50358
  },
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.47.0"
 }
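The generation_config.json change only bumps the pinned `transformers_version`. A minimal stdlib sketch for reading and comparing that pin (the field name comes from the file above; the comparison helper is illustrative):

```python
import json

# Excerpt of the field changed in generation_config.json above.
config_text = '{"transformers_version": "4.47.0"}'
pinned = json.loads(config_text)["transformers_version"]

def version_tuple(v: str) -> tuple:
    # "4.47.0" -> (4, 47, 0); drops local suffixes like "+cu121".
    return tuple(int(part) for part in v.split("+")[0].split("."))

# The new pin is newer than the previous 4.44.2.
assert version_tuple(pinned) > version_tuple("4.44.2")
print(pinned)  # 4.47.0
```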
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:71f7c9d3720fc46abecaa18a5658948d75504200297ba0c5114e0cda1624f1e1
+oid sha256:e39284a2070116bc4b8165f23c2d522585f1c0f76479bfe0a7e9a366406cfefa
 size 151061672
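The model.safetensors entry is a Git LFS pointer, not the weights themselves: git versions only the three-line pointer, while the blob lives in LFS storage. A sketch parsing the new pointer from the diff above with the standard library:

```python
# The three-line Git LFS pointer for the updated model.safetensors.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:e39284a2070116bc4b8165f23c2d522585f1c0f76479bfe0a7e9a366406cfefa\n"
    "size 151061672"
)

# Each line is "key value"; split only once so the URL stays intact.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo, len(digest), int(fields["size"]))  # sha256 64 151061672
```

The size is unchanged across the commit (151061672 bytes) while the oid changed, i.e., the same tensor layout with updated weight values.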
runs/Jan16_10-56-11_b646aa6c2161/events.out.tfevents.1737028396.b646aa6c2161.31.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd422f34cbeb4ddfda7941c91cdf44ea27d2ab3bdc9814efab1dbd4703671b33
+size 453