NanQiangHF committed on
Commit c44ad56
1 Parent(s): 267ed6f

llama3.1_8b_dpo_bwgenerator_test2
README.md CHANGED
@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0426
- - Rewards/chosen: -9.1334
- - Rewards/rejected: -41.2562
- - Rewards/accuracies: 0.9968
- - Rewards/margins: 32.1228
- - Logps/rejected: -519.8535
- - Logps/chosen: -173.9601
- - Logits/rejected: -1.2773
- - Logits/chosen: -1.8503
+ - Loss: 0.0900
+ - Rewards/chosen: -9.2458
+ - Rewards/rejected: -18.5064
+ - Rewards/accuracies: 0.9799
+ - Rewards/margins: 9.2605
+ - Logps/rejected: -295.2113
+ - Logps/chosen: -177.0069
+ - Logits/rejected: -1.0648
+ - Logits/chosen: -1.6755
 
 ## Model description
 
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 5e-05
+ - learning_rate: 5e-06
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
@@ -57,19 +57,19 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.0892 | 0.0719 | 1000 | 0.1265 | -30.3353 | -68.4770 | 0.9900 | 38.1417 | -792.0620 | -385.9789 | -1.2593 | -1.8522 |
- | 0.0747 | 0.1438 | 2000 | 0.0675 | -15.0547 | -42.3699 | 0.9932 | 27.3152 | -530.9907 | -233.1734 | -1.2465 | -1.8204 |
- | 0.0508 | 0.2157 | 3000 | 0.0497 | -10.9548 | -36.2994 | 0.9951 | 25.3446 | -470.2862 | -192.1746 | -1.2443 | -1.8188 |
- | 0.0342 | 0.2876 | 4000 | 0.0521 | -11.1133 | -39.6413 | 0.9945 | 28.5280 | -503.7046 | -193.7589 | -1.2760 | -1.8503 |
- | 0.0413 | 0.3595 | 5000 | 0.0489 | -10.1717 | -41.3975 | 0.9958 | 31.2258 | -521.2673 | -184.3437 | -1.2904 | -1.8675 |
- | 0.0359 | 0.4313 | 6000 | 0.0465 | -10.0814 | -40.8284 | 0.9961 | 30.7471 | -515.5765 | -183.4398 | -1.2872 | -1.8619 |
- | 0.0431 | 0.5032 | 7000 | 0.0509 | -12.0232 | -46.5198 | 0.9961 | 34.4966 | -572.4903 | -202.8584 | -1.2039 | -1.7967 |
- | 0.0549 | 0.5751 | 8000 | 0.0440 | -9.5094 | -40.3277 | 0.9958 | 30.8183 | -510.5694 | -177.7208 | -1.2813 | -1.8498 |
- | 0.0424 | 0.6470 | 9000 | 0.0440 | -9.7038 | -41.6696 | 0.9964 | 31.9658 | -523.9880 | -179.6640 | -1.2597 | -1.8373 |
- | 0.0231 | 0.7189 | 10000 | 0.0450 | -9.8079 | -43.3146 | 0.9964 | 33.5067 | -540.4382 | -180.7056 | -1.2646 | -1.8409 |
- | 0.0225 | 0.7908 | 11000 | 0.0436 | -9.1686 | -41.6580 | 0.9968 | 32.4894 | -523.8720 | -174.3121 | -1.2758 | -1.8486 |
- | 0.0309 | 0.8627 | 12000 | 0.0432 | -9.4044 | -41.9421 | 0.9968 | 32.5377 | -526.7131 | -176.6707 | -1.2731 | -1.8472 |
- | 0.0567 | 0.9346 | 13000 | 0.0426 | -9.1334 | -41.2562 | 0.9968 | 32.1228 | -519.8535 | -173.9601 | -1.2773 | -1.8503 |
+ | 0.122 | 0.0719 | 1000 | 0.1055 | -6.4463 | -12.7595 | 0.9689 | 6.3132 | -237.7425 | -149.0121 | -1.0551 | -1.6809 |
+ | 0.1018 | 0.1438 | 2000 | 0.0928 | -8.3841 | -16.7138 | 0.9760 | 8.3297 | -277.2856 | -168.3895 | -1.0613 | -1.6756 |
+ | 0.0975 | 0.2157 | 3000 | 0.0914 | -9.0349 | -17.9922 | 0.9773 | 8.9574 | -290.0698 | -174.8974 | -1.0675 | -1.6787 |
+ | 0.0861 | 0.2876 | 4000 | 0.0911 | -9.1503 | -18.2788 | 0.9786 | 9.1285 | -292.9356 | -176.0516 | -1.0649 | -1.6760 |
+ | 0.0957 | 0.3595 | 5000 | 0.0904 | -9.2383 | -18.4646 | 0.9786 | 9.2263 | -294.7940 | -176.9318 | -1.0621 | -1.6732 |
+ | 0.079 | 0.4313 | 6000 | 0.0900 | -9.1569 | -18.3683 | 0.9806 | 9.2114 | -293.8309 | -176.1181 | -1.0645 | -1.6758 |
+ | 0.0692 | 0.5032 | 7000 | 0.0901 | -9.2211 | -18.4391 | 0.9802 | 9.2179 | -294.5381 | -176.7600 | -1.0652 | -1.6760 |
+ | 0.0931 | 0.5751 | 8000 | 0.0901 | -9.2306 | -18.4876 | 0.9802 | 9.2570 | -295.0236 | -176.8544 | -1.0630 | -1.6740 |
+ | 0.0863 | 0.6470 | 9000 | 0.0902 | -9.2159 | -18.4436 | 0.9799 | 9.2277 | -294.5839 | -176.7078 | -1.0635 | -1.6746 |
+ | 0.0942 | 0.7189 | 10000 | 0.0902 | -9.1872 | -18.4035 | 0.9802 | 9.2163 | -294.1824 | -176.4204 | -1.0647 | -1.6760 |
+ | 0.0771 | 0.7908 | 11000 | 0.0902 | -9.2250 | -18.4541 | 0.9796 | 9.2290 | -294.6884 | -176.7990 | -1.0629 | -1.6739 |
+ | 0.0916 | 0.8627 | 12000 | 0.0903 | -9.2340 | -18.4770 | 0.9799 | 9.2430 | -294.9172 | -176.8884 | -1.0633 | -1.6744 |
+ | 0.0999 | 0.9346 | 13000 | 0.0900 | -9.2458 | -18.5064 | 0.9799 | 9.2605 | -295.2113 | -177.0069 | -1.0648 | -1.6755 |
 
 
 ### Framework versions
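For readers unfamiliar with the metric columns above: in DPO, Rewards/chosen and Rewards/rejected are β-scaled log-probability ratios between the policy and the reference model, Rewards/margins is simply chosen minus rejected, and the loss is the negative log-sigmoid of that margin. A minimal stdlib sketch (β is folded into the rewards and not stated in the visible hunks, so no value is assumed here):

```python
import math

# DPO pairwise loss: -log(sigmoid(margin)), where
# margin = reward_chosen - reward_rejected and each reward is
# beta * (logp_policy - logp_reference) for that completion.
def dpo_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-sigmoid of the reward margin for one preference pair."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Sanity check against the final evaluation row of this card:
# Rewards/chosen = -9.2458, Rewards/rejected = -18.5064, Rewards/margins = 9.2605
margin = -9.2458 - (-18.5064)
print(round(margin, 4))  # ~9.2606, matching the reported 9.2605 up to rounding
```

This also explains why Rewards/accuracies approaches 1.0: it is the fraction of evaluation pairs whose margin is positive, i.e. where the chosen completion out-scores the rejected one.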
adapter_config.json CHANGED
@@ -22,8 +22,8 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
- "v_proj",
- "q_proj"
+ "q_proj",
+ "v_proj"
 ],
 "task_type": "CAUSAL_LM",
 "use_dora": false,
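Note that this hunk only reorders the entries of `target_modules`; the set of adapted modules is unchanged, so the two configs are semantically identical. A hypothetical reconstruction of the adapter config with PEFT, for orientation (only `target_modules`, `task_type`, and `use_dora` are visible in the hunk; `r`, `lora_alpha`, and `lora_dropout` below are placeholders, not values from this repo):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # placeholder rank, not from this repo
    lora_alpha=16,                        # placeholder scaling, not from this repo
    lora_dropout=0.0,                     # placeholder, not from this repo
    target_modules=["q_proj", "v_proj"],  # attention query/value projections, as in the diff
    task_type="CAUSAL_LM",
    use_dora=False,
)
```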
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:b88ca0cc6903f09fd2fd040177567dd3a979a6a86b8120561138f3b480b2a7bd
+ oid sha256:cd2be6dacec1990ba4282386a6e9a6a706726002f06e7e21bebc97fb5a6d2806
 size 6832728
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:31df9dfb872337e9eed8594b960a9549b0b9e038ace062b07f394d56d573d7ad
+ oid sha256:f6eb6532f05f74dfd24506264fd2171a59f4782bfeee71427eeb32e03351dda9
 size 6008
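The two binary files above are tracked with Git LFS, so what the diff actually shows is the pointer file (version line, SHA-256 object id, byte size), not the weights themselves. A small stdlib sketch of parsing such a pointer:

```python
# Parse a Git LFS pointer file like the ones shown in the diffs above.
# A pointer is a short text file of "key value" lines: version, oid, size.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer for adapter_model.safetensors, verbatim from the diff:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:cd2be6dacec1990ba4282386a6e9a6a706726002f06e7e21bebc97fb5a6d2806
size 6832728"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 6832728
```

Because only the `oid` changed while `size` stayed at 6832728 bytes, the commit replaced the adapter weights with a retrained tensor file of identical shape.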