End of training
- README.md +120 -0
- model.safetensors +1 -1
README.md
ADDED
@@ -0,0 +1,120 @@
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-fine-tune-test-no-ws2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_16_0
      type: common_voice_16_0
      config: tr
      split: test
      args: tr
    metrics:
    - name: Wer
      type: wer
      value: 0.11088339984899148
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# w2v-fine-tune-test-no-ws2

This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1513
- Wer: 0.1109

## Model description

More information needed

## Intended uses & limitations

More information needed

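As a hedged usage sketch (not part of the auto-generated card): the checkpoint can be loaded for Turkish transcription through the `transformers` ASR pipeline. The repository id below is a placeholder for wherever this model is hosted, and `sample.wav` is an assumed local 16 kHz recording (the w2v-bert-2.0 base model expects 16 kHz input).

```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/w2v-fine-tune-test-no-ws2",
)

# "sample.wav" is an assumed local recording; inputs should be 16 kHz audio
# to match what the base model's feature extractor expects.
print(asr("sample.wav")["text"])
```
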
## Training and evaluation data

More information needed

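The card only names the dataset, so the following is a hedged sketch of how the Turkish split is typically obtained; `mozilla-foundation/common_voice_16_0` is assumed to be the Hub id behind `common_voice_16_0`, and the dataset is gated, so accepting its terms and logging in may be required.

```python
from datasets import load_dataset, Audio

# Assumed Hub id for common_voice_16_0 (Turkish config, as in the metadata above).
train = load_dataset("mozilla-foundation/common_voice_16_0", "tr", split="train")
test = load_dataset("mozilla-foundation/common_voice_16_0", "tr", split="test")

# The w2v-bert-2.0 feature extractor expects 16 kHz audio, so resample the audio column.
train = train.cast_column("audio", Audio(sampling_rate=16_000))
test = test.cast_column("audio", Audio(sampling_rate=16_000))
```
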
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

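The list above maps roughly onto a `transformers.TrainingArguments` configuration, sketched below as an assumption about how the run was launched rather than the exact training script; the output directory name is hypothetical, and the Adam betas/epsilon listed are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-fine-tune-test-no-ws2",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```
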
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.192 | 0.22 | 300 | 0.2797 | 0.2985 |
| 0.2226 | 0.44 | 600 | 0.2989 | 0.3491 |
| 0.1941 | 0.66 | 900 | 0.2558 | 0.2451 |
| 0.1659 | 0.88 | 1200 | 0.2320 | 0.2289 |
| 0.1332 | 1.1 | 1500 | 0.2063 | 0.1971 |
| 0.1129 | 1.31 | 1800 | 0.1873 | 0.2029 |
| 0.1044 | 1.53 | 2100 | 0.1765 | 0.1856 |
| 0.1026 | 1.75 | 2400 | 0.1719 | 0.1752 |
| 0.0982 | 1.97 | 2700 | 0.1927 | 0.2023 |
| 0.0769 | 2.19 | 3000 | 0.1776 | 0.1671 |
| 0.0715 | 2.41 | 3300 | 0.1626 | 0.1634 |
| 0.0695 | 2.63 | 3600 | 0.1666 | 0.1654 |
| 0.0612 | 2.85 | 3900 | 0.1760 | 0.1609 |
| 0.0614 | 3.07 | 4200 | 0.1645 | 0.1593 |
| 0.0476 | 3.29 | 4500 | 0.1685 | 0.1593 |
| 0.048 | 3.51 | 4800 | 0.1790 | 0.1583 |
| 0.0489 | 3.73 | 5100 | 0.1578 | 0.1535 |
| 0.0456 | 3.94 | 5400 | 0.1610 | 0.1617 |
| 0.041 | 4.16 | 5700 | 0.1559 | 0.1439 |
| 0.0367 | 4.38 | 6000 | 0.1536 | 0.1436 |
| 0.0321 | 4.6 | 6300 | 0.1591 | 0.1449 |
| 0.0349 | 4.82 | 6600 | 0.1616 | 0.1419 |
| 0.0308 | 5.04 | 6900 | 0.1501 | 0.1401 |
| 0.0233 | 5.26 | 7200 | 0.1588 | 0.1394 |
| 0.0253 | 5.48 | 7500 | 0.1633 | 0.1356 |
| 0.0254 | 5.7 | 7800 | 0.1522 | 0.1339 |
| 0.0245 | 5.92 | 8100 | 0.1598 | 0.1371 |
| 0.0189 | 6.14 | 8400 | 0.1497 | 0.1324 |
| 0.0174 | 6.36 | 8700 | 0.1487 | 0.1270 |
| 0.0178 | 6.57 | 9000 | 0.1397 | 0.1286 |
| 0.0173 | 6.79 | 9300 | 0.1495 | 0.1281 |
| 0.0178 | 7.01 | 9600 | 0.1462 | 0.1222 |
| 0.0124 | 7.23 | 9900 | 0.1516 | 0.1225 |
| 0.0121 | 7.45 | 10200 | 0.1554 | 0.1190 |
| 0.0128 | 7.67 | 10500 | 0.1453 | 0.1228 |
| 0.0113 | 7.89 | 10800 | 0.1468 | 0.1178 |
| 0.0086 | 8.11 | 11100 | 0.1556 | 0.1186 |
| 0.0085 | 8.33 | 11400 | 0.1507 | 0.1154 |
| 0.0073 | 8.55 | 11700 | 0.1494 | 0.1169 |
| 0.0079 | 8.77 | 12000 | 0.1507 | 0.1152 |
| 0.0089 | 8.98 | 12300 | 0.1456 | 0.1137 |
| 0.0062 | 9.2 | 12600 | 0.1518 | 0.1127 |
| 0.005 | 9.42 | 12900 | 0.1534 | 0.1115 |
| 0.005 | 9.64 | 13200 | 0.1514 | 0.1110 |
| 0.0048 | 9.86 | 13500 | 0.1513 | 0.1109 |

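The Wer column above is word error rate (word-level edit distance divided by the number of reference words). As a hedged sketch, a WER of this kind can be recomputed with the `evaluate` library; the two string lists are placeholder data, not values from this run.

```python
import evaluate

# Placeholder transcripts purely to show the call shape.
wer_metric = evaluate.load("wer")
predictions = ["merhaba dünya", "bu bir test"]
references = ["merhaba dünya", "bu bir testtir"]
print(wer_metric.compute(predictions=predictions, references=references))
```
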
### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d617e0d6fb4593e792db825b4644989ae7bab8555062bce062ebb83a36613749
 size 2422990860