jmainz committed
Commit 79bd202
1 Parent(s): ceeda07

End of training
README.md CHANGED
@@ -17,9 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.4292
- - Wer: 0.7031
- - Cer: 0.1778
+ - Loss: 2.1048
+ - Wer: 0.5156
+ - Cer: 0.1267
  
  ## Model description
  
@@ -44,15 +44,16 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 100
- - num_epochs: 4
+ - lr_scheduler_warmup_ratio: 0.4
+ - num_epochs: 8
  - mixed_precision_training: Native AMP
  
  ### Training results
  
  | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
  |:-------------:|:------:|:----:|:---------------:|:------:|:------:|
- | 6.5188 | 3.7037 | 100 | 2.4292 | 0.7031 | 0.1778 |
+ | 6.1504 | 3.2222 | 87 | 2.3410 | 0.6562 | 0.1556 |
+ | 0.6961 | 6.4444 | 174 | 2.1048 | 0.5156 | 0.1267 |
  
  
  ### Framework versions
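
The substantive change in this commit swaps a fixed warmup of 100 steps for a warmup ratio of 0.4 and doubles training from 4 to 8 epochs. Below is a minimal sketch of how the updated hyperparameters would map onto `transformers.TrainingArguments`, assuming the card came from a standard `Trainer` run; the output directory is hypothetical, and values the diff does not show (batch size, learning rate) are omitted or left at library defaults.

```python
# Sketch only: TrainingArguments matching the hyperparameters in the updated README.
# output_dir is a placeholder, not from the commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms-1b-l1107-finetuned",  # hypothetical
    seed=42,
    adam_beta1=0.9,                       # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                    # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.4,                     # was lr_scheduler_warmup_steps=100
    num_train_epochs=8,                   # was 4
    fp16=True,                            # "Native AMP" mixed-precision training
)
```

With `warmup_ratio=0.4` and 174 total steps (per the results table), warmup would cover roughly the first 70 optimizer steps.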
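
The Wer and Cer columns are word- and character-error rates on the evaluation set (0.5156 and 0.1267 at the final checkpoint). A minimal sketch of how such figures are typically computed, assuming the `evaluate` library; the transcripts below are illustrative placeholders, not model output.

```python
# Sketch only: computing WER/CER of the kind reported in the results table.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts for illustration.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

wer = wer_metric.compute(predictions=predictions, references=references)
cer = cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```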
adapter.mus.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b81946aadc188c5bd02d581eb8bf2daebe0f6160775c2eae02154c52c0f96922
+ oid sha256:5e5363a257afbd6ec1bd4f7b4f86c063db81676f201725dc3c4557291bd9f13e
  size 8834400
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1cb178a2d1d847202ebe05c8b0d04dd909d89851a0fd335cdfeb6463a4de2bcb
+ oid sha256:daa91e1366570c28f4b9ba0a1450c63516b61ca9a75b0542521c31705f639a85
  size 3858926736
runs/May20_01-27-59_patas-gn3/events.out.tfevents.1716193907.patas-gn3.738442.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70ba706bd4432055235418e6d29fdb3acda6077eb5a830a5ba6a2f78c8ab5c5b
+ size 446
runs/May20_02-31-10_patas-gn2/events.out.tfevents.1716197480.patas-gn2.1341188.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a04c9d40c62d58e345f35b7625da603bf3cb60a5faef4a62c9b1097c38695d7
+ size 7028
runs/May20_03-07-07_patas-gn2/events.out.tfevents.1716199637.patas-gn2.1343358.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:616b8dee4784c612bc064b665b224643926c67f1ee13eb77b79b7f907afc9d0d
+ size 7591
runs/May20_03-34-02_patas-gn2/events.out.tfevents.1716201252.patas-gn2.1345008.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b922495dea595eef1f172c84e1ee4f6ff793301180ee768c08639623577e5623
+ size 7608
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7c18323b1c85697ccfd6dd986c4bea42d730d63d92373661caf88c4e5cf234a3
- size 4527
+ oid sha256:4f0abe526aa23abe8e059ec16a9dc94c29ed4cb94d1646592a4206bfde72936f
+ size 4591