kiranpantha committed · Commit c88fd38 · verified · 1 Parent(s): a9baa69

End of training

README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ library_name: transformers
+ language:
+ - ne
+ license: mit
+ base_model: kiranpantha/w2v-bert-2.0-nepali
+ tags:
+ - generated_from_trainer
+ datasets:
+ - kiranpantha/OpenSLR54-Balanced-Nepali
+ metrics:
+ - wer
+ model-index:
+ - name: Wave2Vec2-Bert2.0 - Kiran Pantha
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: kiranpantha/OpenSLR54-Balanced-Nepali
+       type: kiranpantha/OpenSLR54-Balanced-Nepali
+       args: 'config: ne, split: train,test'
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.4058169375534645
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Wave2Vec2-Bert2.0 - Kiran Pantha
+
+ This model is a fine-tuned version of [kiranpantha/w2v-bert-2.0-nepali](https://huggingface.co/kiranpantha/w2v-bert-2.0-nepali) on the kiranpantha/OpenSLR54-Balanced-Nepali dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3739
+ - WER: 0.4058
+ - CER: 0.0951
+
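+ A minimal inference sketch using the Transformers `pipeline` API follows. The model id and audio filename below are placeholders, not values stated in this card; substitute this repository's actual Hub id and your own audio file.
+
+ ```python
+ from transformers import pipeline
+
+ # Build an ASR pipeline from the fine-tuned checkpoint.
+ # NOTE: the model id is a placeholder; point it at this repository's Hub id.
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="kiranpantha/w2v-bert-2.0-nepali",  # placeholder: replace with the fine-tuned repo id
+ )
+
+ # Transcribe a local audio file (hypothetical filename; 16 kHz mono speech works best).
+ print(asr("nepali_sample.wav")["text"])
+ ```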
+ ## Model description
+
+ A Nepali automatic speech recognition model obtained by fine-tuning [kiranpantha/w2v-bert-2.0-nepali](https://huggingface.co/kiranpantha/w2v-bert-2.0-nepali) on OpenSLR54 speech data.
+
+ ## Intended uses & limitations
+
+ Intended for transcribing Nepali speech; expect roughly WER 0.41 and CER 0.10 on in-domain audio. Out-of-domain performance is not reported.
+
+ ## Training and evaluation data
+
+ Trained and evaluated on the kiranpantha/OpenSLR54-Balanced-Nepali dataset (config `ne`, `train`/`test` splits), as declared in the model index above.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
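+ A rough reconstruction of these settings as Transformers `TrainingArguments`. This is a sketch, not the author's training script: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults, so they are omitted.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch of the hyperparameters listed above; output_dir is hypothetical.
+ training_args = TrainingArguments(
+     output_dir="w2v-bert-2.0-nepali-openslr54",  # placeholder
+     learning_rate=5e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     num_train_epochs=2,
+     fp16=True,  # "Native AMP" mixed precision
+ )
+ ```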
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+ | 0.643         | 0.24  | 300  | 0.4271          | 0.4079 | 0.0919 |
+ | 0.6342        | 0.48  | 600  | 0.4928          | 0.4902 | 0.1245 |
+ | 0.6421        | 0.72  | 900  | 0.4251          | 0.4595 | 0.1112 |
+ | 0.5773        | 0.96  | 1200 | 0.4170          | 0.4342 | 0.1069 |
+ | 0.5107        | 1.20  | 1500 | 0.4487          | 0.4469 | 0.1089 |
+ | 0.4639        | 1.44  | 1800 | 0.3823          | 0.4157 | 0.0973 |
+ | 0.4369        | 1.68  | 2100 | 0.3792          | 0.4145 | 0.0984 |
+ | 0.449         | 1.92  | 2400 | 0.3739          | 0.4058 | 0.0951 |
+
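+ The WER and CER columns can be reproduced with the `evaluate` library; a toy sketch (the strings below are illustrative, not dataset samples):
+
+ ```python
+ import evaluate
+
+ # Word- and character-error-rate metrics, as reported in the table above.
+ wer = evaluate.load("wer")
+ cer = evaluate.load("cer")
+
+ predictions = ["नमस्ते संसार"]       # hypothetical model output
+ references = ["नमस्ते सारा संसार"]   # hypothetical ground truth
+
+ print("WER:", wer.compute(predictions=predictions, references=references))
+ print("CER:", cer.compute(predictions=predictions, references=references))
+ ```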
+
+ ### Framework versions
+
+ - Transformers 4.45.2
+ - Pytorch 2.5.0+cu124
+ - Datasets 3.0.2
+ - Tokenizers 0.20.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:88e336edde6541b89c7f03fa1c8b936390c8e7928f9bafb5f04e460190c8d4b5
+ oid sha256:4f129f26203372bec249a8dc8ecb04d1de0e7b565e3a3651a6af8be4ae72b9a1
  size 2423081060
runs/Oct24_11-03-37_ml/events.out.tfevents.1729747191.ml.6497.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8555f94eea2de25c7a1c51c95e28e961b631390b1c8d024557cc26fa1a9cd6b6
- size 10900
+ oid sha256:98f899c9884238d61563e6fe95bc51483558e6e52275b28288731131165472d9
+ size 11254