pkbiswas committed
Commit 19c2ce8
1 Parent(s): fa3f0d9

End of training

README.md CHANGED
@@ -2,8 +2,6 @@
  license: other
  library_name: peft
  tags:
- - trl
- - sft
  - generated_from_trainer
  base_model: meta-llama/Meta-Llama-3-8B
  datasets:
@@ -20,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the scitldr dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.4126
+ - Loss: 2.4051
 
  ## Model description
 
@@ -53,9 +51,9 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
- | 2.2246 | 0.5020 | 500 | 2.3259 |
- | 2.2211 | 1.0040 | 1000 | 2.3181 |
- | 1.684 | 1.5060 | 1500 | 2.4126 |
+ | 2.226 | 0.5020 | 500 | 2.3232 |
+ | 2.2207 | 1.0040 | 1000 | 2.3130 |
+ | 1.6901 | 1.5060 | 1500 | 2.4051 |
 
 
  ### Framework versions
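
For reference, the README above describes a PEFT (LoRA) adapter for Meta-Llama-3-8B fine-tuned on scitldr. A minimal sketch of loading the committed adapter on top of the base model with transformers and peft; the adapter repo id below is a placeholder, since this page does not state it:

```python
# Minimal sketch: load the base model, then attach this commit's PEFT adapter
# (adapter_config.json + adapter_model.safetensors).
# "your-username/llama-3-8b-scitldr-adapter" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Apply the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "your-username/llama-3-8b-scitldr-adapter")

prompt = "Summarize: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```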
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
    "rank_pattern": {},
    "revision": null,
    "target_modules": [
-     "k_proj",
+     "gate_proj",
      "up_proj",
-     "o_proj",
+     "down_proj",
      "v_proj",
+     "k_proj",
      "q_proj",
-     "gate_proj",
-     "down_proj"
+     "o_proj"
    ],
    "task_type": "CAUSAL_LM",
    "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b0aaa9ac7a872e1ae5c60c4412ffb59d83417802336915a9a36097ebe85a05f7
+ oid sha256:b9b7955f3697b07107f191eb54ad1b4421f1bd91a41ef72583b046fe42e9a517
  size 167832240
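
The adapter weights are stored through Git LFS, so the diff shows only the pointer: a new content hash (oid) with an unchanged size of 167832240 bytes, consistent with retrained LoRA weights of identical shapes. A short sketch of inspecting the tensors in the file, assuming it has been downloaded locally:

```python
# Sketch: list the LoRA tensors stored in the committed adapter file.
# Assumes adapter_model.safetensors has been fetched from the repo
# (e.g. via huggingface_hub) into the working directory.
from safetensors import safe_open

with safe_open("adapter_model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        print(name, tuple(f.get_tensor(name).shape))
```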
runs/Apr25_21-32-23_4c1f1b88f73d/events.out.tfevents.1714080820.4c1f1b88f73d.55510.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e4d82767bc1bb1a7c882ae1ac75de756ca0ec189f444eba6a350775db5fa5bd
+ size 8317
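
The added file is the TensorBoard event log from the run that produced this commit. A sketch of reading its logged scalars once downloaded; the tag names are the usual transformers Trainer defaults and are an assumption, not confirmed by this page:

```python
# Sketch: read training scalars from the committed TensorBoard event file.
# The path mirrors the repo layout; "train/loss" is a typical Trainer tag.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Apr25_21-32-23_4c1f1b88f73d")
ea.Reload()
print(ea.Tags()["scalars"])  # lists the available scalar tags
for event in ea.Scalars("train/loss"):
    print(event.step, event.value)
```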
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d373a0484ed51cd82ead25f7eba9fcb483902b51f996ada263529c79449ce92d
+ oid sha256:a5e30dfb176d6a15fce26b0121eaf0b249e3d5a959bcb2304b365ade09850a5a
  size 4984
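
training_args.bin is the pickled TrainingArguments object saved by the transformers Trainer; its hash changes here even though the size is identical. A sketch of inspecting it, noting that recent torch versions require weights_only=False to unpickle arbitrary classes (only do this for files you trust):

```python
# Sketch: inspect the pickled TrainingArguments saved alongside the adapter.
# weights_only=False is needed on newer torch, which otherwise rejects
# pickled Python classes. Only load files from sources you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```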