Samuael/ethipic-sec2sec-tiginya

Files changed:
- README.md (+10, -19)
- model.safetensors (+1, -1)
README.md CHANGED
@@ -2,9 +2,6 @@
 library_name: transformers
 tags:
 - generated_from_trainer
-metrics:
-- wer
-- bleu
 model-index:
 - name: ethipic-sec2sec-tigre
 results: []
@@ -17,10 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
--
--
--
--
+- eval_loss: 0.1009
+- eval_wer: 0.0416
+- eval_cer: 0.0113
+- eval_bleu: 91.6015
+- eval_runtime: 30.3787
+- eval_samples_per_second: 9.842
+- eval_steps_per_second: 0.099
+- epoch: 4.0
+- step: 51000
 
 ## Model description
 
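The updated card reports WER, CER, and BLEU on the evaluation set. A minimal sketch of recomputing such numbers with the `evaluate` library, assuming the checkpoint loads as a seq2seq model (the base-model link in the card is empty) and that evaluation pairs are available; the card does not name the dataset, so the inputs below are hypothetical placeholders:

```python
# Sketch only: assumes a seq2seq checkpoint and an unnamed eval set of
# (source, reference) pairs. sacreBLEU reports scores on a 0-100 scale,
# consistent with the eval_bleu value in the card.
import evaluate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Samuael/ethipic-sec2sec-tiginya"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

wer = evaluate.load("wer")
cer = evaluate.load("cer")
bleu = evaluate.load("sacrebleu")

def predict(sources: list[str]) -> list[str]:
    # Generate predictions for a batch of source strings.
    inputs = tokenizer(sources, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

sources = ["..."]     # hypothetical eval inputs
references = ["..."]  # matching reference texts
predictions = predict(sources)

print("wer:", wer.compute(predictions=predictions, references=references))
print("cer:", cer.compute(predictions=predictions, references=references))
print("bleu:", bleu.compute(predictions=predictions,
                            references=[[r] for r in references])["score"])
```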
@@ -45,20 +47,9 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    | Bleu    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|
-| 0.1039        | 1.0   | 391  | 0.1998          | 0.1848 | 0.1126 | 76.8906 |
-| 0.1197        | 2.0   | 782  | 0.1984          | 0.1025 | 0.0368 | 83.6572 |
-| 0.0705        | 3.0   | 1173 | 0.2049          | 0.1312 | 0.0613 | 81.4318 |
-| 0.0735        | 4.0   | 1564 | 0.2007          | 0.1803 | 0.1081 | 77.3169 |
-| 0.0545        | 5.0   | 1955 | 0.2052          | 0.1880 | 0.1223 | 76.9650 |
-
-
 ### Framework versions
 
 - Transformers 4.44.2
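The hyperparameter list in this hunk maps directly onto `Seq2SeqTrainingArguments`. A sketch with the values the diff shows; anything it does not show (output path, learning rate, batch sizes) is left out or marked as a hypothetical placeholder:

```python
# Sketch of training arguments matching the card's hyperparameter list.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ethipic-sec2sec-tiginya",  # hypothetical output path
    seed=42,
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,         # updated from the empty value in the old card
    fp16=True,                   # mixed_precision_training: Native AMP
)
```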
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fce730e13d4366c1abc82c1bdfa9e541d103c184a0a813cff0a4e24c61d7d5d6
 size 240738368
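The `model.safetensors` change only swaps the git-LFS pointer's object hash: the pointer file records the blob's SHA-256 and byte size, so a downloaded weight file can be checked against it locally. A small sketch using only the standard library, assuming the file has been fetched to the working directory:

```python
# Verify a downloaded model.safetensors against its LFS pointer by
# checking both the recorded byte size and the SHA-256 digest.
import hashlib
import os

expected_oid = "fce730e13d4366c1abc82c1bdfa9e541d103c184a0a813cff0a4e24c61d7d5d6"
expected_size = 240738368
path = "model.safetensors"  # assumed local download path

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected_oid, "hash mismatch"
print("model.safetensors matches the LFS pointer")
```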