apwic committed
Commit 865cb95
1 Parent(s): c968f2e

Model save

Files changed (1): README.md (+16, -18)
README.md CHANGED
@@ -1,6 +1,4 @@
 ---
-language:
-- id
 license: apache-2.0
 base_model: LazarusNLP/IndoNanoT5-base
 tags:
@@ -19,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4923
-- Rouge1: 0.3905
+- Loss: 0.6844
+- Rouge1: 0.7295
 - Rouge2: 0.0
-- Rougel: 0.3898
-- Rougelsum: 0.3916
+- Rougel: 0.7315
+- Rougelsum: 0.7272
 - Gen Len: 1.0
 
 ## Model description
@@ -43,9 +41,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 8
+- learning_rate: 0.001
+- train_batch_size: 16
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -53,18 +51,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
-|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
-| 0.6327        | 1.0   | 3567  | 0.4634          | 0.3795 | 0.0    | 0.3806 | 0.3802    | 1.0     |
-| 0.4349        | 2.0   | 7134  | 0.4541          | 0.3596 | 0.0    | 0.3615 | 0.3631    | 1.0     |
-| 0.3376        | 3.0   | 10701 | 0.4562          | 0.4121 | 0.0    | 0.4123 | 0.4116    | 1.0     |
-| 0.2683        | 4.0   | 14268 | 0.4744          | 0.3886 | 0.0    | 0.3869 | 0.388     | 1.0     |
-| 0.2208        | 5.0   | 17835 | 0.4923          | 0.3905 | 0.0    | 0.3898 | 0.3916    | 1.0     |
+| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+| 1.2322        | 1.0   | 892  | 0.7490          | 0.7095 | 0.0    | 0.7102 | 0.7047    | 1.0     |
+| 0.6879        | 2.0   | 1784 | 0.6347          | 0.7342 | 0.0    | 0.7339 | 0.7326    | 1.0     |
+| 0.4916        | 3.0   | 2676 | 0.6007          | 0.7021 | 0.0    | 0.7036 | 0.7024    | 1.0     |
+| 0.3445        | 4.0   | 3568 | 0.6270          | 0.7205 | 0.0    | 0.7213 | 0.7183    | 1.0     |
+| 0.2065        | 5.0   | 4460 | 0.6844          | 0.7295 | 0.0    | 0.7315 | 0.7272    | 1.0     |
 
 
 ### Framework versions
 
 - Transformers 4.40.2
-- Pytorch 2.3.0+cu121
-- Datasets 2.19.1
+- Pytorch 2.3.1+cu121
+- Datasets 2.20.0
 - Tokenizers 0.19.1
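The hyperparameter hunk above maps directly onto standard `transformers` training options. Below is a minimal sketch (not the author's actual script, which is not part of this commit) of how the new values would be expressed as `Seq2SeqTrainingArguments`; the output directory, epoch count, and evaluation strategy are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the commit contains the model card, not the training code.
training_args = Seq2SeqTrainingArguments(
    output_dir="indonanot5-finetuned",   # hypothetical output directory
    learning_rate=1e-3,                  # 0.001 after this change (was 5e-05)
    per_device_train_batch_size=16,      # was 4
    per_device_eval_batch_size=32,       # was 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,                  # inferred from the five-epoch results table
    evaluation_strategy="epoch",         # assumption, consistent with per-epoch result rows
    predict_with_generate=True,          # needed to compute ROUGE / Gen Len at eval time
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
# configuration in transformers, so no explicit optimizer arguments are needed.
```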
 
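The ROUGE scores in the card are the kind typically computed with the `evaluate` library's `rouge` metric in transformers summarization scripts; the sketch below uses placeholder predictions and references, not actual model outputs. Note that the reported Gen Len of 1.0 implies single-word generations, in which case Rouge2 (bigram overlap) is necessarily 0.0 and Rouge1 effectively reduces to exact-match accuracy.

```python
import evaluate

# Assumes the standard `evaluate` ROUGE metric; data below is placeholder.
rouge = evaluate.load("rouge")

predictions = ["positif", "negatif"]   # hypothetical single-word generations
references = ["positif", "positif"]    # hypothetical gold references

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: 'rouge1', 'rouge2', 'rougeL', 'rougeLsum'

# "Gen Len" is conventionally the mean length of the generated sequences.
# The reference scripts count non-padding token ids; whitespace tokens are
# used here as a simplification.
gen_len = sum(len(p.split()) for p in predictions) / len(predictions)
print(gen_len)  # 1.0 for single-word outputs
```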
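Finally, a usage sketch for the saved checkpoint. The repository id is a placeholder: this commit page shows only the author (apwic) and the base model, not the repo name.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "apwic/INSERT-MODEL-NAME"  # placeholder, replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Indonesian example input ("example input text"), since the base model is IndoNanoT5.
inputs = tokenizer("Contoh teks masukan.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8)  # card reports ~1-token outputs
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```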