---
library_name: transformers
language:
- ne
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
metrics:
- wer
model-index:
- name: Wave2Vec2-Bert2.0 - Kiran Pantha
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: kiranpantha/OpenSLR54-Balanced-Nepali
      type: kiranpantha/OpenSLR54-Balanced-Nepali
      args: 'config: ne, split: train,test'
    metrics:
    - name: Wer
      type: wer
      value: 0.45372112917023094
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Wave2Vec2-Bert2.0 - Kiran Pantha

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the kiranpantha/OpenSLR54-Balanced-Nepali dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5146
- Wer: 0.4537
- Cer: 0.1137
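
As a quick sanity check of the fine-tuned checkpoint, the snippet below is a minimal inference sketch using the Transformers `automatic-speech-recognition` pipeline. The repository id is a placeholder assumption (this card does not state the published model id), and the input is assumed to be 16 kHz mono audio, as expected by the `facebook/w2v-bert-2.0` base model.

```python
# Minimal inference sketch; the model id below is a placeholder, not the
# published repository name, and the audio file is assumed to be 16 kHz mono.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kiranpantha/w2v-bert-2.0-nepali",  # hypothetical model id
)

result = asr("nepali_sample.wav")  # path, URL, or NumPy array of raw audio
print(result["text"])
```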

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
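
Expressed as Hugging Face `TrainingArguments`, the settings above correspond roughly to the sketch below. This is a reconstruction rather than the original training script: the output directory is a placeholder, and the Adam betas/epsilon shown are simply the library defaults, which match the values listed.

```python
# Rough reconstruction of the training configuration listed above as
# TrainingArguments; output_dir is a placeholder, betas/epsilon are the defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-nepali",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```

In a typical w2v-bert CTC fine-tuning setup, these arguments would be passed to a `Trainer` together with a `Wav2Vec2BertForCTC` model, a padding data collator, and the prepared dataset splits.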

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.3129        | 0.24  | 300  | 0.5021          | 0.4484 | 0.1119 |
| 0.3868        | 0.48  | 600  | 0.5117          | 0.4686 | 0.1193 |
| 0.368         | 0.72  | 900  | 0.5399          | 0.4674 | 0.1291 |
| 0.3462        | 0.96  | 1200 | 0.4893          | 0.4506 | 0.1131 |
| 0.3009        | 1.2   | 1500 | 0.5081          | 0.4505 | 0.1134 |
| 0.2721        | 1.44  | 1800 | 0.5146          | 0.4681 | 0.1159 |
| 0.2499        | 1.68  | 2100 | 0.5128          | 0.4549 | 0.1128 |
| 0.2366        | 1.92  | 2400 | 0.5146          | 0.4537 | 0.1137 |
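
For reference, the WER and CER columns above can be computed with the `evaluate` library as in the sketch below; the prediction and reference strings are illustrative placeholders, not examples from the dataset.

```python
# Sketch of the metric computation behind the WER/CER columns; the strings
# below are illustrative placeholders, not samples from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["नमस्ते संसार"]  # decoded model outputs
references = ["नमस्ते संसार"]   # ground-truth transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```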

### Framework versions

- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1