apwic committed
Commit 178f66f
1 Parent(s): 1ffe613

Model save
README.md CHANGED
@@ -1,6 +1,4 @@
  ---
- language:
- - id
  license: apache-2.0
  base_model: LazarusNLP/IndoNanoT5-base
  tags:
@@ -19,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5652
- - Rouge1: 0.5057
+ - Loss: 0.5140
+ - Rouge1: 0.7665
  - Rouge2: 0.0
- - Rougel: 0.5089
- - Rougelsum: 0.5041
+ - Rougel: 0.7672
+ - Rougelsum: 0.7666
  - Gen Len: 1.0

  ## Model description
@@ -43,8 +41,8 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 8
+ - learning_rate: 0.001
+ - train_batch_size: 16
  - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -55,16 +53,16 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
- | 1.2298 | 1.0 | 1783 | 0.6183 | 0.5036 | 0.0 | 0.5051 | 0.5002 | 1.0 |
- | 0.7893 | 2.0 | 3566 | 0.5936 | 0.5166 | 0.0 | 0.5199 | 0.5139 | 1.0 |
- | 0.7368 | 3.0 | 5349 | 0.5787 | 0.517 | 0.0 | 0.5231 | 0.516 | 1.0 |
- | 0.7107 | 4.0 | 7132 | 0.5670 | 0.5105 | 0.0 | 0.5148 | 0.5089 | 1.0 |
- | 0.6989 | 5.0 | 8915 | 0.5652 | 0.5057 | 0.0 | 0.5089 | 0.5041 | 1.0 |
+ | 0.8212 | 1.0 | 892 | 0.5616 | 0.77 | 0.0 | 0.7703 | 0.7685 | 1.0 |
+ | 0.6185 | 2.0 | 1784 | 0.5593 | 0.7457 | 0.0 | 0.7436 | 0.7441 | 1.0 |
+ | 0.5776 | 3.0 | 2676 | 0.5283 | 0.7664 | 0.0 | 0.7649 | 0.7639 | 1.0 |
+ | 0.5535 | 4.0 | 3568 | 0.5193 | 0.7475 | 0.0 | 0.747 | 0.745 | 1.0 |
+ | 0.5357 | 5.0 | 4460 | 0.5140 | 0.7665 | 0.0 | 0.7672 | 0.7666 | 1.0 |


  ### Framework versions

  - Transformers 4.40.2
- - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
  - Tokenizers 0.19.1
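The new run doubles `train_batch_size` from 8 to 16 while the per-epoch step count drops from 1783 to 892, which suggests both runs iterate over the same training set. A quick sanity check, assuming steps per epoch is `ceil(N / batch_size)` with no gradient accumulation (an assumption — the card does not state either):

```python
import math

def dataset_size_bounds(steps_per_epoch: int, batch_size: int) -> range:
    """All dataset sizes N with ceil(N / batch_size) == steps_per_epoch."""
    lo = (steps_per_epoch - 1) * batch_size + 1
    hi = steps_per_epoch * batch_size
    return range(lo, hi + 1)

# Old run: 1783 steps/epoch at batch size 8; new run: 892 at batch size 16.
old = set(dataset_size_bounds(1783, 8))   # 14257..14264
new = set(dataset_size_bounds(892, 16))   # 14257..14272
common = old & new
print(min(common), max(common))  # 14257 14264
```

The ranges overlap (a training set of roughly 14.3k examples fits both runs), so the two step counts are mutually consistent.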
adapter-summarization/adapter_config.json CHANGED
@@ -12,11 +12,11 @@
  "intermediate_lora": false,
  "leave_out": [],
  "output_lora": false,
- "r": 16,
+ "r": 8,
  "selfattn_lora": true,
  "use_gating": false
  },
- "config_id": "141b248112091265",
+ "config_id": "625403edad0bf919",
  "hidden_size": 768,
  "model_class": "T5ForConditionalGeneration",
  "model_name": "LazarusNLP/IndoNanoT5-base",
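Halving the LoRA rank `r` from 16 to 8 halves the adapter's trainable parameter count, which matches the checkpoint shrinking from 7131954 to 3593010 bytes below. A rough sketch of the arithmetic, assuming the standard LoRA factorization and the `hidden_size: 768` from this config (the number of adapted matrices cancels out of the ratio):

```python
def lora_params_per_matrix(hidden_size: int, r: int) -> int:
    # LoRA factorizes each weight update as B @ A with A: (r, d) and
    # B: (d, r), adding 2 * d * r trainable parameters per adapted matrix.
    return 2 * hidden_size * r

old = lora_params_per_matrix(768, 16)  # 24576 params per adapted matrix
new = lora_params_per_matrix(768, 8)   # 12288 params per adapted matrix
print(old / new)  # 2.0 -- halving r halves the adapter size

# The checkpoint sizes from the diff shrink by roughly the same factor;
# the small remainder is fixed per-file overhead in the serialized format.
print(7131954 / 3593010)  # ~1.985
```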
adapter-summarization/pytorch_adapter.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:73dc2313e48c9cbc6e2f07773e7c6dd2ac34052c433072cedfdaeb708aaef248
- size 7131954
+ oid sha256:2fa48e6286fea6effa6af94facca483000733e15f876076bdc4b88156e16ba54
+ size 3593010