Commit 020cb5e by w11wo (parent: 027bc2c)

End of training
README.md ADDED
@@ -0,0 +1,96 @@
---
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
datasets:
- indonlg
metrics:
- f1
model-index:
- name: IndoNanoT5-base-TyDiQA
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: indonlg
      type: indonlg
      config: question_answering
      split: validation
      args: question_answering
    metrics:
    - name: F1
      type: f1
      value: 72.19688326266134
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# IndoNanoT5-base-TyDiQA

This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on the indonlg dataset.
It achieves the following results on the evaluation set:
- Exact: 58.9474
- F1: 72.1969
- Total: 855
- HasAns Exact: 58.9474
- HasAns F1: 72.1969
- HasAns Total: 855
- Best Exact: 58.9474
- Best Exact Thresh: 0.0
- Best F1: 72.1969
- Best F1 Thresh: 0.0
- Loss: 0.1283
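The card does not yet include a usage snippet, so here is a minimal inference sketch. The repository id, the `question: ... context: ...` prompt format, and the example texts are assumptions rather than details taken from the training code; adjust them to the actual checkpoint and preprocessing.

```python
# Minimal inference sketch (not from the original card).
# Assumptions: the checkpoint is available under MODEL_ID, and inputs follow a
# "question: ... context: ..." prompt similar to seq2seq QA preprocessing for
# IndoNLG TyDiQA; adapt both to your setup.
from transformers import pipeline

MODEL_ID = "IndoNanoT5-base-TyDiQA"  # placeholder: replace with the actual repo id

qa = pipeline("text2text-generation", model=MODEL_ID)

question = "Siapa penemu lampu pijar?"
context = "Thomas Alva Edison dikenal sebagai penemu lampu pijar komersial pertama."
prediction = qa(f"question: {question} context: {context}", max_new_tokens=32)
print(prediction[0]["generated_text"])
```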
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
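As a rough guide, these values map onto `Seq2SeqTrainingArguments` from transformers. The snippet below is a hedged reconstruction, not the original training script; `output_dir` and `predict_with_generate` are assumptions.

```python
# Hedged reconstruction of the hyperparameters listed above.
# The actual training script is not part of this card; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="indonanot5-base-tydiqa",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    predict_with_generate=True,  # assumption: generation-based eval for seq2seq QA
)
```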
### Training results

| Training Loss | Epoch | Step | Exact | F1 | Total | HasAns Exact | HasAns F1 | HasAns Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh | Validation Loss |
|:-------------:|:-----:|:----:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:----------:|:-----------------:|:-------:|:--------------:|:---------------:|
| 1.9173 | 1.0 | 606 | 45.1327 | 63.8499 | 565 | 45.1327 | 63.8499 | 565 | 45.1327 | 0.0 | 63.8499 | 0.0 | 0.1147 |
| 0.1971 | 2.0 | 1212 | 50.4425 | 68.7240 | 565 | 50.4425 | 68.7240 | 565 | 50.4425 | 0.0 | 68.7240 | 0.0 | 0.1025 |
| 0.1475 | 3.0 | 1818 | 53.8053 | 71.0124 | 565 | 53.8053 | 71.0124 | 565 | 53.8053 | 0.0 | 71.0124 | 0.0 | 0.0992 |
| 0.1175 | 4.0 | 2424 | 53.6283 | 71.1353 | 565 | 53.6283 | 71.1353 | 565 | 53.6283 | 0.0 | 71.1353 | 0.0 | 0.1008 |
| 0.0814 | 5.0 | 3030 | 53.4513 | 71.0439 | 565 | 53.4513 | 71.0439 | 565 | 53.4513 | 0.0 | 71.0439 | 0.0 | 0.1040 |
| 0.0665 | 6.0 | 3636 | 54.1593 | 71.5788 | 565 | 54.1593 | 71.5788 | 565 | 54.1593 | 0.0 | 71.5788 | 0.0 | 0.1051 |
| 0.0555 | 7.0 | 4242 | 54.8673 | 72.4372 | 565 | 54.8673 | 72.4372 | 565 | 54.8673 | 0.0 | 72.4372 | 0.0 | 0.1137 |
| 0.0483 | 8.0 | 4848 | 56.2832 | 72.3749 | 565 | 56.2832 | 72.3749 | 565 | 56.2832 | 0.0 | 72.3749 | 0.0 | 0.1188 |
| 0.0416 | 9.0 | 5454 | 55.5752 | 72.2892 | 565 | 55.5752 | 72.2892 | 565 | 55.5752 | 0.0 | 72.2892 | 0.0 | 0.1154 |
| 0.031 | 10.0 | 6060 | 55.0442 | 71.8127 | 565 | 55.0442 | 71.8127 | 565 | 55.0442 | 0.0 | 71.8127 | 0.0 | 0.1312 |
| 0.0278 | 11.0 | 6666 | 55.7522 | 73.4756 | 565 | 55.7522 | 73.4756 | 565 | 55.7522 | 0.0 | 73.4756 | 0.0 | 0.1253 |
| 0.0257 | 12.0 | 7272 | 55.7522 | 73.0958 | 565 | 55.7522 | 73.0958 | 565 | 55.7522 | 0.0 | 73.0958 | 0.0 | 0.1292 |
| 0.023 | 13.0 | 7878 | 56.2832 | 73.3269 | 565 | 56.2832 | 73.3269 | 565 | 56.2832 | 0.0 | 73.3269 | 0.0 | 0.1271 |
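The column names (Exact, F1, HasAns *, Best * Thresh) match the SQuAD v2-style metric dictionary produced by the `evaluate` library, so a short sketch of that computation is shown below. This is an assumption about how the numbers were obtained; the ids and texts are invented for illustration.

```python
# Hedged illustration of where columns like "Exact", "F1", "HasAns F1", and
# "Best F1 Thresh" typically come from: the SQuAD v2-style metric in `evaluate`.
# The card does not include the actual evaluation code; this example data is made up.
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [
    {"id": "q1", "prediction_text": "Thomas Alva Edison", "no_answer_probability": 0.0}
]
references = [
    {"id": "q1", "answers": {"text": ["Thomas Alva Edison"], "answer_start": [0]}}
]

results = squad_v2.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"], results["HasAns_exact"], results["HasAns_f1"])
```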
### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.37.2"
}
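For reference, transformers picks these generation defaults up automatically at load time; the short sketch below shows how to inspect them. The repository id is a placeholder, an assumption not taken from this commit.

```python
# Hedged sketch: inspect the generation defaults shipped with the checkpoint.
# The repo id is a placeholder; replace it with the actual model id.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("IndoNanoT5-base-TyDiQA")  # placeholder
print(gen_config.decoder_start_token_id, gen_config.eos_token_id, gen_config.pad_token_id)
```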
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:810aae94d447a55feedde55382495cd33b74181e683247e1af024a8b34892bb4
+ oid sha256:dd677e1c44174fd849c8f602e837f2fea03f077ecf1050e1f324dbd36e45efca
  size 990345064
runs/Feb06_14-27-41_bookbot-h100/events.out.tfevents.1707229662.bookbot-h100 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2cecd0e4f59e28fd2c3f787e5718d31da1a65db1cbc2e7f13e5dfac58a314be6
- size 15973
+ oid sha256:7ac4fa652b3c312f4f817071a06f385739c99bc2fd5e3d1aea98422453fdbb1b
+ size 17839
runs/Feb06_14-27-41_bookbot-h100/events.out.tfevents.1707230663.bookbot-h100 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9360e59b67d78cf46d0ac4038a30e3690f26d44917e2337493e684e921d4ffab
size 573