Aomsin committed on
Commit 6815ae5
1 Parent(s): 9f7a556

End of training

Files changed (3):
  1. README.md +49 -1
  2. adapter.tha.safetensors +1 -1
  3. pytorch_model.bin +1 -1
README.md CHANGED
@@ -5,9 +5,24 @@ tags:
 - generated_from_trainer
 datasets:
 - common_voice_6_1
+metrics:
+- wer
 model-index:
 - name: wav2vec2-large-mms-1b-thai-colab
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: common_voice_6_1
+      type: common_voice_6_1
+      config: th
+      split: test
+      args: th
+    metrics:
+    - name: Wer
+      type: wer
+      value: 0.7234125438254773
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,6 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-large-mms-1b-thai-colab
 
 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.2452
+- Wer: 0.7234
 
 ## Model description
 
@@ -44,6 +62,36 @@ The following hyperparameters were used during training:
 - num_epochs: 4
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 8.0794        | 0.17  | 100  | 0.3832          | 0.8329 |
+| 0.561         | 0.33  | 200  | 0.3162          | 0.8099 |
+| 0.5132        | 0.5   | 300  | 0.2907          | 0.7842 |
+| 0.5015        | 0.66  | 400  | 0.2954          | 0.7998 |
+| 0.5126        | 0.83  | 500  | 0.2812          | 0.7924 |
+| 0.5182        | 0.99  | 600  | 0.2782          | 0.7631 |
+| 0.4459        | 1.16  | 700  | 0.2735          | 0.7526 |
+| 0.4694        | 1.32  | 800  | 0.2716          | 0.7628 |
+| 0.4576        | 1.49  | 900  | 0.2649          | 0.7538 |
+| 0.4749        | 1.65  | 1000 | 0.2614          | 0.7503 |
+| 0.4282        | 1.82  | 1100 | 0.2687          | 0.7464 |
+| 0.4009        | 1.98  | 1200 | 0.2622          | 0.7480 |
+| 0.3976        | 2.15  | 1300 | 0.2619          | 0.7421 |
+| 0.4306        | 2.31  | 1400 | 0.2620          | 0.7538 |
+| 0.4413        | 2.48  | 1500 | 0.2551          | 0.7515 |
+| 0.3888        | 2.64  | 1600 | 0.2545          | 0.7339 |
+| 0.4213        | 2.81  | 1700 | 0.2541          | 0.7316 |
+| 0.3945        | 2.98  | 1800 | 0.2507          | 0.7246 |
+| 0.3765        | 3.14  | 1900 | 0.2495          | 0.7234 |
+| 0.3859        | 3.31  | 2000 | 0.2498          | 0.7269 |
+| 0.3931        | 3.47  | 2100 | 0.2469          | 0.7250 |
+| 0.3737        | 3.64  | 2200 | 0.2470          | 0.7242 |
+| 0.3716        | 3.8   | 2300 | 0.2454          | 0.7219 |
+| 0.3582        | 3.97  | 2400 | 0.2452          | 0.7234 |
+
+
 ### Framework versions
 
 - Transformers 4.35.0.dev0
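
For readers who want to try the checkpoint described by this diff, the sketch below shows one way to load an MMS-style fine-tuned model together with the Thai adapter shipped as `adapter.tha.safetensors` and transcribe a clip. The repository id, audio path, and the assumption that a processor was saved alongside the weights are not taken from this commit; the `target_lang` loading path follows the standard Wav2Vec2/MMS adapter API in transformers.

```python
# Minimal inference sketch; repo id and audio file are placeholders, not from this commit.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Aomsin/wav2vec2-large-mms-1b-thai-colab"  # hypothetical repo id

processor = Wav2Vec2Processor.from_pretrained(model_id)
# target_lang="tha" loads the adapter weights stored in adapter.tha.safetensors
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang="tha")
model.eval()

# Load a clip and resample to the 16 kHz mono input the model expects (path is a placeholder).
waveform, sr = torchaudio.load("sample_th.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```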
adapter.tha.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:84b70bbeff712f4381331d0caa7588137ad25b3dcb9ec88d5d0858c1856514d4
+oid sha256:715d0417de64dc187585235a14f390175b159e5ea0655c1b8d3038c190a991e8
 size 9008632
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf526b03d8bbe0c7ecf167e0538d27c04670e58c4fcbf468777260b4a6931f59
+oid sha256:f9cac86f5f9fb72bac8879f7511056ccbd992592174621e0f78ec8595ff6cd7e
 size 3859344653
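
The Wer figures in the model card above (e.g. the 0.7234 reported on the common_voice_6_1 Thai test split) are word error rates. Here is a minimal, self-contained sketch of how such a score is typically computed with the Hugging Face `evaluate` library; the transcripts below are placeholders, not outputs from this model, and `jiwer` must be installed as the metric backend.

```python
# WER = (substitutions + deletions + insertions) / number of reference words.
# Placeholder transcripts only; a real evaluation would loop over the test split.
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")

references  = ["สวัสดี ครับ ผม ชื่อ สมชาย"]  # ground-truth transcript (placeholder)
predictions = ["สวัสดี ครับ ผม ชือ สมชาย"]   # hypothetical model output with one word error

print(wer_metric.compute(references=references, predictions=predictions))  # 0.2
```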