apwic committed
Commit a62d2bf
Parent: 09ecb79

Model save

Files changed (1): README.md (+16, -18)
README.md CHANGED

@@ -1,6 +1,4 @@
 ---
-language:
-- id
 license: apache-2.0
 base_model: LazarusNLP/IndoNanoT5-base
 tags:
@@ -19,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4862
-- Rouge1: 0.3867
+- Loss: 0.6973
+- Rouge1: 0.6972
 - Rouge2: 0.0
-- Rougel: 0.3833
-- Rougelsum: 0.386
+- Rougel: 0.6971
+- Rougelsum: 0.6984
 - Gen Len: 1.0
 
 ## Model description
@@ -43,9 +41,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 8
+- learning_rate: 0.001
+- train_batch_size: 16
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -53,18 +51,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
-|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
-| 0.6352        | 1.0   | 3573  | 0.4614          | 0.3871 | 0.0    | 0.3846 | 0.3884    | 1.0     |
-| 0.4361        | 2.0   | 7146  | 0.4357          | 0.3574 | 0.0    | 0.3543 | 0.3544    | 1.0     |
-| 0.3391        | 3.0   | 10719 | 0.4479          | 0.3973 | 0.0    | 0.3975 | 0.4009    | 1.0     |
-| 0.2686        | 4.0   | 14292 | 0.4639          | 0.4113 | 0.0    | 0.4102 | 0.4115    | 1.0     |
-| 0.2221        | 5.0   | 17865 | 0.4862          | 0.3867 | 0.0    | 0.3833 | 0.386     | 1.0     |
+| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+| 1.21          | 1.0   | 894  | 0.7570          | 0.6899 | 0.0    | 0.6953 | 0.6878    | 1.0     |
+| 0.6826        | 2.0   | 1788 | 0.6250          | 0.6779 | 0.0    | 0.6777 | 0.6768    | 1.0     |
+| 0.4899        | 3.0   | 2682 | 0.5915          | 0.6825 | 0.0    | 0.681  | 0.6837    | 1.0     |
+| 0.3413        | 4.0   | 3576 | 0.6194          | 0.7341 | 0.0    | 0.7341 | 0.7373    | 1.0     |
+| 0.2044        | 5.0   | 4470 | 0.6973          | 0.6972 | 0.0    | 0.6971 | 0.6984    | 1.0     |
 
 
 ### Framework versions
 
 - Transformers 4.40.2
-- Pytorch 2.3.0+cu121
-- Datasets 2.19.1
+- Pytorch 2.3.1+cu121
+- Datasets 2.20.0
 - Tokenizers 0.19.1
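
The substance of this commit is the hyperparameter change (learning rate 5e-05 to 0.001, batch sizes 4/8 to 16/32, and correspondingly fewer steps per epoch). A minimal sketch of the new configuration as `Seq2SeqTrainingArguments`, assuming the card was generated by a `Seq2SeqTrainer` run; the output directory is a hypothetical placeholder, the per-epoch evaluation strategy is an assumption, and 5 epochs is read off the results table:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the post-commit configuration; values mirror the card above.
args = Seq2SeqTrainingArguments(
    output_dir="indonanot5-finetuned",  # hypothetical; the diff does not name the repo
    learning_rate=1e-3,                 # was 5e-05 before this commit
    per_device_train_batch_size=16,     # was 4
    per_device_eval_batch_size=32,      # was 8
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                     # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # epsilon=1e-08
    num_train_epochs=5,                 # read off the 5-epoch results table
    evaluation_strategy="epoch",        # assumption: one eval per epoch, as the table suggests
    predict_with_generate=True,         # needed to report ROUGE and Gen Len
)
```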
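One detail worth flagging when proofreading the card: Rouge2 is 0.0 at every epoch while Gen Len is 1.0. That pattern is consistent with single-token generations, since a one-token output contains no bigrams for ROUGE-2 to match. A small check with the `evaluate` library (hypothetical one-token strings; `evaluate` and `rouge_score` must be installed):

```python
import evaluate  # pip install evaluate rouge_score

# With one-token outputs (Gen Len = 1.0), ROUGE-2 is always 0.0:
# a single token yields no bigrams to overlap.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["positif"],  # hypothetical one-token generation
    references=["positif"],   # hypothetical one-token reference
)
print(scores)  # rouge1 and rougeL are 1.0 here, rouge2 is 0.0
```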
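For completeness, a hedged loading sketch. This diff does not name the fine-tuned repository, so the base checkpoint id from the card stands in for it:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Substitute the fine-tuned repo id here once known; the diff only names the base model.
model_id = "LazarusNLP/IndoNanoT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical Indonesian input ("example input text").
inputs = tokenizer("contoh teks masukan", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```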