anthonyrathe committed
Commit d08e0f7
1 Parent(s): f651968

Model save

README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: llama2
+ base_model: llama-2-nl/Llama-2-7b-hf-lora-original-sft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: Llama-2-7b-hf-lora-original-it
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Llama-2-7b-hf-lora-original-it
+
+ This model is a fine-tuned version of [llama-2-nl/Llama-2-7b-hf-lora-original-sft](https://huggingface.co/llama-2-nl/Llama-2-7b-hf-lora-original-sft) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3540
+ - Rewards/chosen: 0.1166
+ - Rewards/rejected: -0.9259
+ - Rewards/accuracies: 0.9396
+ - Rewards/margins: 1.0425
+ - Logps/rejected: -547.4224
+ - Logps/chosen: -600.8121
+ - Logits/rejected: -0.8736
+ - Logits/chosen: -0.9600
+
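The auto-generated card does not include a usage example; as a minimal, illustrative sketch (not taken from this repository), the model could be loaded for inference with `transformers` roughly as follows. The repository id is assumed from the model-index name above, and the snippet assumes the repo holds standalone causal-LM weights; if it only contains a LoRA adapter, it would instead need to be loaded with `peft` on top of the base model.

```python
# Illustrative only: repo id and merged weights are assumptions, not stated in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama-2-nl/Llama-2-7b-hf-lora-original-it"  # assumed from the model-index name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain in one paragraph what DPO fine-tuning changes about a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```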
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
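These values map onto a standard `transformers.TrainingArguments` configuration roughly as sketched below (illustrative, not the original training script): the per-device batch size of 4 across 4 GPUs with 4 gradient-accumulation steps yields the effective train batch size of 64. The `trl`/`dpo` tags indicate the run used TRL's DPO trainer, whose DPO-specific settings (such as beta) are not recorded in this card, so only the generic hyperparameters are shown; the logging, evaluation, and save intervals are taken from the accompanying trainer_state.json.

```python
# Illustrative mapping of the listed hyperparameters; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-2-7b-hf-lora-original-it",
    learning_rate=5e-7,
    per_device_train_batch_size=4,   # 4 per device x 4 GPUs x 4 accumulation steps = 64 effective
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=100,                  # intervals recorded in trainer_state.json
    save_steps=100,
    logging_steps=10,
    bf16=True,                       # assumption: precision is not stated in the card
)
```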
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.5984 | 0.1327 | 100 | 0.5904 | 0.0549 | -0.1735 | 0.9030 | 0.2283 | -539.8975 | -601.4293 | -1.1606 | -1.1395 |
+ | 0.4622 | 0.2653 | 200 | 0.4581 | 0.1134 | -0.4980 | 0.9351 | 0.6113 | -543.1426 | -600.8441 | -1.2714 | -1.2180 |
+ | 0.3934 | 0.3980 | 300 | 0.3959 | 0.1263 | -0.7212 | 0.9366 | 0.8475 | -545.3747 | -600.7144 | -1.0528 | -1.0755 |
+ | 0.3629 | 0.5307 | 400 | 0.3674 | 0.1170 | -0.8608 | 0.9381 | 0.9777 | -546.7705 | -600.8080 | -1.1109 | -1.1154 |
+ | 0.3556 | 0.6633 | 500 | 0.3561 | 0.1136 | -0.9146 | 0.9388 | 1.0282 | -547.3090 | -600.8419 | -0.8266 | -0.9289 |
+ | 0.3488 | 0.7960 | 600 | 0.3540 | 0.1104 | -0.9310 | 0.9410 | 1.0415 | -547.4734 | -600.8737 | -1.0676 | -1.0877 |
+ | 0.3563 | 0.9287 | 700 | 0.3540 | 0.1166 | -0.9259 | 0.9396 | 1.0425 | -547.4224 | -600.8121 | -0.8736 | -0.9600 |
+
+
+ ### Framework versions
+
+ - Transformers 4.40.1
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9990049751243781,
+     "total_flos": 0.0,
+     "train_loss": 0.4291752041731856,
+     "train_runtime": 22849.6211,
+     "train_samples": 48227,
+     "train_samples_per_second": 2.111,
+     "train_steps_per_second": 0.033
+ }
runs/May20_00-51-42_gpu530.dodrio.os/events.out.tfevents.1716160119.gpu530.dodrio.os.1926607.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0badc297c389d1afc937f90e62c04c07a6f9b29e2b6cc64de4413c902cbab612
- size 62492
+ oid sha256:0d202074ab4ae56d82d901450a52c38c647e371ba23392ab84221fbd7bfab1d5
+ size 62846
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9990049751243781,
+     "total_flos": 0.0,
+     "train_loss": 0.4291752041731856,
+     "train_runtime": 22849.6211,
+     "train_samples": 48227,
+     "train_samples_per_second": 2.111,
+     "train_steps_per_second": 0.033
+ }
trainer_state.json ADDED
@@ -0,0 +1,1282 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9990049751243781,
5
+ "eval_steps": 100,
6
+ "global_step": 753,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0013266998341625207,
13
+ "grad_norm": 40.25,
14
+ "learning_rate": 6.578947368421052e-09,
15
+ "logits/chosen": -1.2802138328552246,
16
+ "logits/rejected": -1.3739961385726929,
17
+ "logps/chosen": -584.777587890625,
18
+ "logps/rejected": -533.882080078125,
19
+ "loss": 0.6931,
20
+ "rewards/accuracies": 0.0,
21
+ "rewards/chosen": 0.0,
22
+ "rewards/margins": 0.0,
23
+ "rewards/rejected": 0.0,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.013266998341625208,
28
+ "grad_norm": 54.5,
29
+ "learning_rate": 6.578947368421052e-08,
30
+ "logits/chosen": -1.1525533199310303,
31
+ "logits/rejected": -1.1556764841079712,
32
+ "logps/chosen": -577.8804321289062,
33
+ "logps/rejected": -498.16986083984375,
34
+ "loss": 0.6903,
35
+ "rewards/accuracies": 0.4930555522441864,
36
+ "rewards/chosen": 0.00347831379622221,
37
+ "rewards/margins": 0.00828113965690136,
38
+ "rewards/rejected": -0.004802825395017862,
39
+ "step": 10
40
+ },
41
+ {
42
+ "epoch": 0.026533996683250415,
43
+ "grad_norm": 44.75,
44
+ "learning_rate": 1.3157894736842104e-07,
45
+ "logits/chosen": -1.1788235902786255,
46
+ "logits/rejected": -1.2242963314056396,
47
+ "logps/chosen": -611.204833984375,
48
+ "logps/rejected": -543.866455078125,
49
+ "loss": 0.6928,
50
+ "rewards/accuracies": 0.4937500059604645,
51
+ "rewards/chosen": 0.0023336124140769243,
52
+ "rewards/margins": 0.003125081304460764,
53
+ "rewards/rejected": -0.0007914667949080467,
54
+ "step": 20
55
+ },
56
+ {
57
+ "epoch": 0.03980099502487562,
58
+ "grad_norm": 36.25,
59
+ "learning_rate": 1.9736842105263157e-07,
60
+ "logits/chosen": -1.158151388168335,
61
+ "logits/rejected": -1.1601974964141846,
62
+ "logps/chosen": -633.5345458984375,
63
+ "logps/rejected": -536.1189575195312,
64
+ "loss": 0.6905,
65
+ "rewards/accuracies": 0.581250011920929,
66
+ "rewards/chosen": 0.013027493841946125,
67
+ "rewards/margins": 0.015905674546957016,
68
+ "rewards/rejected": -0.0028781811706721783,
69
+ "step": 30
70
+ },
71
+ {
72
+ "epoch": 0.05306799336650083,
73
+ "grad_norm": 42.5,
74
+ "learning_rate": 2.631578947368421e-07,
75
+ "logits/chosen": -1.19637930393219,
76
+ "logits/rejected": -1.22651207447052,
77
+ "logps/chosen": -540.5403442382812,
78
+ "logps/rejected": -552.4425048828125,
79
+ "loss": 0.687,
80
+ "rewards/accuracies": 0.518750011920929,
81
+ "rewards/chosen": 0.0009165151277557015,
82
+ "rewards/margins": 0.011455372907221317,
83
+ "rewards/rejected": -0.01053885743021965,
84
+ "step": 40
85
+ },
86
+ {
87
+ "epoch": 0.06633499170812604,
88
+ "grad_norm": 48.25,
89
+ "learning_rate": 3.2894736842105264e-07,
90
+ "logits/chosen": -1.198271632194519,
91
+ "logits/rejected": -1.2518165111541748,
92
+ "logps/chosen": -579.4686279296875,
93
+ "logps/rejected": -571.4375,
94
+ "loss": 0.6818,
95
+ "rewards/accuracies": 0.5874999761581421,
96
+ "rewards/chosen": 0.005964647978544235,
97
+ "rewards/margins": 0.019765758886933327,
98
+ "rewards/rejected": -0.013801109977066517,
99
+ "step": 50
100
+ },
101
+ {
102
+ "epoch": 0.07960199004975124,
103
+ "grad_norm": 39.25,
104
+ "learning_rate": 3.9473684210526315e-07,
105
+ "logits/chosen": -1.1785616874694824,
106
+ "logits/rejected": -1.226552963256836,
107
+ "logps/chosen": -595.6961059570312,
108
+ "logps/rejected": -572.9510498046875,
109
+ "loss": 0.6691,
110
+ "rewards/accuracies": 0.731249988079071,
111
+ "rewards/chosen": 0.022173848003149033,
112
+ "rewards/margins": 0.06000928208231926,
113
+ "rewards/rejected": -0.03783543407917023,
114
+ "step": 60
115
+ },
116
+ {
117
+ "epoch": 0.09286898839137644,
118
+ "grad_norm": 44.5,
119
+ "learning_rate": 4.6052631578947365e-07,
120
+ "logits/chosen": -1.1982749700546265,
121
+ "logits/rejected": -1.236537218093872,
122
+ "logps/chosen": -634.6461181640625,
123
+ "logps/rejected": -673.8761596679688,
124
+ "loss": 0.6543,
125
+ "rewards/accuracies": 0.7749999761581421,
126
+ "rewards/chosen": 0.02825814113020897,
127
+ "rewards/margins": 0.08534505218267441,
128
+ "rewards/rejected": -0.05708691477775574,
129
+ "step": 70
130
+ },
131
+ {
132
+ "epoch": 0.10613598673300166,
133
+ "grad_norm": 36.75,
134
+ "learning_rate": 4.999569334646955e-07,
135
+ "logits/chosen": -1.0726072788238525,
136
+ "logits/rejected": -1.1171576976776123,
137
+ "logps/chosen": -614.9038696289062,
138
+ "logps/rejected": -572.2459716796875,
139
+ "loss": 0.638,
140
+ "rewards/accuracies": 0.875,
141
+ "rewards/chosen": 0.04197516664862633,
142
+ "rewards/margins": 0.1271333396434784,
143
+ "rewards/rejected": -0.08515818417072296,
144
+ "step": 80
145
+ },
146
+ {
147
+ "epoch": 0.11940298507462686,
148
+ "grad_norm": 38.25,
149
+ "learning_rate": 4.994726053293702e-07,
150
+ "logits/chosen": -1.1955012083053589,
151
+ "logits/rejected": -1.2350232601165771,
152
+ "logps/chosen": -590.61376953125,
153
+ "logps/rejected": -564.5113525390625,
154
+ "loss": 0.6237,
155
+ "rewards/accuracies": 0.8812500238418579,
156
+ "rewards/chosen": 0.031416941434144974,
157
+ "rewards/margins": 0.15016348659992218,
158
+ "rewards/rejected": -0.1187465563416481,
159
+ "step": 90
160
+ },
161
+ {
162
+ "epoch": 0.13266998341625208,
163
+ "grad_norm": 39.0,
164
+ "learning_rate": 4.984511621268102e-07,
165
+ "logits/chosen": -1.162690281867981,
166
+ "logits/rejected": -1.1824209690093994,
167
+ "logps/chosen": -573.8049926757812,
168
+ "logps/rejected": -528.6422119140625,
169
+ "loss": 0.5984,
170
+ "rewards/accuracies": 0.90625,
171
+ "rewards/chosen": 0.038635846227407455,
172
+ "rewards/margins": 0.2009139508008957,
173
+ "rewards/rejected": -0.16227811574935913,
174
+ "step": 100
175
+ },
176
+ {
177
+ "epoch": 0.13266998341625208,
178
+ "eval_logits/chosen": -1.139477252960205,
179
+ "eval_logits/rejected": -1.160577416419983,
180
+ "eval_logps/chosen": -601.4292602539062,
181
+ "eval_logps/rejected": -539.8974609375,
182
+ "eval_loss": 0.5903807878494263,
183
+ "eval_rewards/accuracies": 0.9029850959777832,
184
+ "eval_rewards/chosen": 0.05485348403453827,
185
+ "eval_rewards/margins": 0.22831708192825317,
186
+ "eval_rewards/rejected": -0.1734635829925537,
187
+ "eval_runtime": 685.2004,
188
+ "eval_samples_per_second": 7.821,
189
+ "eval_steps_per_second": 0.489,
190
+ "step": 100
191
+ },
192
+ {
193
+ "epoch": 0.14593698175787728,
194
+ "grad_norm": 33.5,
195
+ "learning_rate": 4.968948030264742e-07,
196
+ "logits/chosen": -1.1363273859024048,
197
+ "logits/rejected": -1.1610157489776611,
198
+ "logps/chosen": -592.44482421875,
199
+ "logps/rejected": -573.2866821289062,
200
+ "loss": 0.582,
201
+ "rewards/accuracies": 0.918749988079071,
202
+ "rewards/chosen": 0.07824783772230148,
203
+ "rewards/margins": 0.2655600905418396,
204
+ "rewards/rejected": -0.1873122602701187,
205
+ "step": 110
206
+ },
207
+ {
208
+ "epoch": 0.15920398009950248,
209
+ "grad_norm": 33.25,
210
+ "learning_rate": 4.948068788729238e-07,
211
+ "logits/chosen": -1.1630527973175049,
212
+ "logits/rejected": -1.202096939086914,
213
+ "logps/chosen": -579.3135375976562,
214
+ "logps/rejected": -578.4791259765625,
215
+ "loss": 0.5691,
216
+ "rewards/accuracies": 0.8999999761581421,
217
+ "rewards/chosen": 0.07454140484333038,
218
+ "rewards/margins": 0.28431838750839233,
219
+ "rewards/rejected": -0.20977696776390076,
220
+ "step": 120
221
+ },
222
+ {
223
+ "epoch": 0.1724709784411277,
224
+ "grad_norm": 35.25,
225
+ "learning_rate": 4.921918849714475e-07,
226
+ "logits/chosen": -1.185011625289917,
227
+ "logits/rejected": -1.1927728652954102,
228
+ "logps/chosen": -621.1232299804688,
229
+ "logps/rejected": -598.525634765625,
230
+ "loss": 0.5436,
231
+ "rewards/accuracies": 0.918749988079071,
232
+ "rewards/chosen": 0.07992725074291229,
233
+ "rewards/margins": 0.3480888307094574,
234
+ "rewards/rejected": -0.2681615948677063,
235
+ "step": 130
236
+ },
237
+ {
238
+ "epoch": 0.1857379767827529,
239
+ "grad_norm": 31.375,
240
+ "learning_rate": 4.890554514096591e-07,
241
+ "logits/chosen": -1.1601734161376953,
242
+ "logits/rejected": -1.1828594207763672,
243
+ "logps/chosen": -589.795654296875,
244
+ "logps/rejected": -544.245849609375,
245
+ "loss": 0.5303,
246
+ "rewards/accuracies": 0.9437500238418579,
247
+ "rewards/chosen": 0.06376471370458603,
248
+ "rewards/margins": 0.3611634075641632,
249
+ "rewards/rejected": -0.29739871621131897,
250
+ "step": 140
251
+ },
252
+ {
253
+ "epoch": 0.19900497512437812,
254
+ "grad_norm": 28.25,
255
+ "learning_rate": 4.854043309359063e-07,
256
+ "logits/chosen": -1.2025436162948608,
257
+ "logits/rejected": -1.2480312585830688,
258
+ "logps/chosen": -550.0576782226562,
259
+ "logps/rejected": -483.212646484375,
260
+ "loss": 0.5173,
261
+ "rewards/accuracies": 0.9312499761581421,
262
+ "rewards/chosen": 0.08968071639537811,
263
+ "rewards/margins": 0.39301761984825134,
264
+ "rewards/rejected": -0.30333688855171204,
265
+ "step": 150
266
+ },
267
+ {
268
+ "epoch": 0.21227197346600332,
269
+ "grad_norm": 28.125,
270
+ "learning_rate": 4.812463844205884e-07,
271
+ "logits/chosen": -1.1907384395599365,
272
+ "logits/rejected": -1.218056321144104,
273
+ "logps/chosen": -612.6536865234375,
274
+ "logps/rejected": -552.7808837890625,
275
+ "loss": 0.5032,
276
+ "rewards/accuracies": 0.9375,
277
+ "rewards/chosen": 0.12153647094964981,
278
+ "rewards/margins": 0.4728039801120758,
279
+ "rewards/rejected": -0.3512675166130066,
280
+ "step": 160
281
+ },
282
+ {
283
+ "epoch": 0.22553897180762852,
284
+ "grad_norm": 27.125,
285
+ "learning_rate": 4.7659056393168604e-07,
286
+ "logits/chosen": -1.2418904304504395,
287
+ "logits/rejected": -1.3286497592926025,
288
+ "logps/chosen": -532.3972778320312,
289
+ "logps/rejected": -481.52459716796875,
290
+ "loss": 0.5031,
291
+ "rewards/accuracies": 0.887499988079071,
292
+ "rewards/chosen": 0.10282758623361588,
293
+ "rewards/margins": 0.44313424825668335,
294
+ "rewards/rejected": -0.3403066396713257,
295
+ "step": 170
296
+ },
297
+ {
298
+ "epoch": 0.23880597014925373,
299
+ "grad_norm": 27.0,
300
+ "learning_rate": 4.714468934609381e-07,
301
+ "logits/chosen": -1.2157742977142334,
302
+ "logits/rejected": -1.2327635288238525,
303
+ "logps/chosen": -541.45361328125,
304
+ "logps/rejected": -482.06103515625,
305
+ "loss": 0.4824,
306
+ "rewards/accuracies": 0.96875,
307
+ "rewards/chosen": 0.1126769408583641,
308
+ "rewards/margins": 0.5232059359550476,
309
+ "rewards/rejected": -0.4105289876461029,
310
+ "step": 180
311
+ },
312
+ {
313
+ "epoch": 0.25207296849087896,
314
+ "grad_norm": 23.875,
315
+ "learning_rate": 4.658264473421659e-07,
316
+ "logits/chosen": -1.2209118604660034,
317
+ "logits/rejected": -1.2742892503738403,
318
+ "logps/chosen": -600.0318603515625,
319
+ "logps/rejected": -533.5223388671875,
320
+ "loss": 0.4874,
321
+ "rewards/accuracies": 0.893750011920929,
322
+ "rewards/chosen": 0.09032230079174042,
323
+ "rewards/margins": 0.5253477096557617,
324
+ "rewards/rejected": -0.4350253939628601,
325
+ "step": 190
326
+ },
327
+ {
328
+ "epoch": 0.26533996683250416,
329
+ "grad_norm": 30.125,
330
+ "learning_rate": 4.597413264082086e-07,
331
+ "logits/chosen": -1.2209361791610718,
332
+ "logits/rejected": -1.295668125152588,
333
+ "logps/chosen": -573.05859375,
334
+ "logps/rejected": -572.3590087890625,
335
+ "loss": 0.4622,
336
+ "rewards/accuracies": 0.9125000238418579,
337
+ "rewards/chosen": 0.08147934824228287,
338
+ "rewards/margins": 0.5580233931541443,
339
+ "rewards/rejected": -0.4765440821647644,
340
+ "step": 200
341
+ },
342
+ {
343
+ "epoch": 0.26533996683250416,
344
+ "eval_logits/chosen": -1.2179902791976929,
345
+ "eval_logits/rejected": -1.271428108215332,
346
+ "eval_logps/chosen": -600.8441162109375,
347
+ "eval_logps/rejected": -543.1426391601562,
348
+ "eval_loss": 0.45805710554122925,
349
+ "eval_rewards/accuracies": 0.9350746273994446,
350
+ "eval_rewards/chosen": 0.11337064951658249,
351
+ "eval_rewards/margins": 0.6113449931144714,
352
+ "eval_rewards/rejected": -0.49797430634498596,
353
+ "eval_runtime": 676.1331,
354
+ "eval_samples_per_second": 7.926,
355
+ "eval_steps_per_second": 0.495,
356
+ "step": 200
357
+ },
358
+ {
359
+ "epoch": 0.27860696517412936,
360
+ "grad_norm": 25.25,
361
+ "learning_rate": 4.5320463193780256e-07,
362
+ "logits/chosen": -1.1879713535308838,
363
+ "logits/rejected": -1.234440565109253,
364
+ "logps/chosen": -570.4061279296875,
365
+ "logps/rejected": -540.0577392578125,
366
+ "loss": 0.4485,
367
+ "rewards/accuracies": 0.918749988079071,
368
+ "rewards/chosen": 0.1254458725452423,
369
+ "rewards/margins": 0.5999458432197571,
370
+ "rewards/rejected": -0.4744999408721924,
371
+ "step": 210
372
+ },
373
+ {
374
+ "epoch": 0.29187396351575456,
375
+ "grad_norm": 22.625,
376
+ "learning_rate": 4.4623043744850044e-07,
377
+ "logits/chosen": -1.1740987300872803,
378
+ "logits/rejected": -1.2075783014297485,
379
+ "logps/chosen": -588.6426391601562,
380
+ "logps/rejected": -534.1394653320312,
381
+ "loss": 0.4551,
382
+ "rewards/accuracies": 0.925000011920929,
383
+ "rewards/chosen": 0.11971279233694077,
384
+ "rewards/margins": 0.642697811126709,
385
+ "rewards/rejected": -0.5229849815368652,
386
+ "step": 220
387
+ },
388
+ {
389
+ "epoch": 0.30514096185737977,
390
+ "grad_norm": 24.875,
391
+ "learning_rate": 4.388337583963563e-07,
392
+ "logits/chosen": -1.1734439134597778,
393
+ "logits/rejected": -1.1642463207244873,
394
+ "logps/chosen": -630.2869873046875,
395
+ "logps/rejected": -650.8348388671875,
396
+ "loss": 0.4326,
397
+ "rewards/accuracies": 0.9125000238418579,
398
+ "rewards/chosen": 0.11086304485797882,
399
+ "rewards/margins": 0.7283642888069153,
400
+ "rewards/rejected": -0.6175012588500977,
401
+ "step": 230
402
+ },
403
+ {
404
+ "epoch": 0.31840796019900497,
405
+ "grad_norm": 28.375,
406
+ "learning_rate": 4.31030519847616e-07,
407
+ "logits/chosen": -1.170459508895874,
408
+ "logits/rejected": -1.1728956699371338,
409
+ "logps/chosen": -568.0635986328125,
410
+ "logps/rejected": -495.91094970703125,
411
+ "loss": 0.4293,
412
+ "rewards/accuracies": 0.956250011920929,
413
+ "rewards/chosen": 0.09963791817426682,
414
+ "rewards/margins": 0.654728889465332,
415
+ "rewards/rejected": -0.5550910234451294,
416
+ "step": 240
417
+ },
418
+ {
419
+ "epoch": 0.33167495854063017,
420
+ "grad_norm": 22.875,
421
+ "learning_rate": 4.2283752219201464e-07,
422
+ "logits/chosen": -1.0885179042816162,
423
+ "logits/rejected": -1.133748173713684,
424
+ "logps/chosen": -556.9310913085938,
425
+ "logps/rejected": -509.1935119628906,
426
+ "loss": 0.4244,
427
+ "rewards/accuracies": 0.9375,
428
+ "rewards/chosen": 0.10411699116230011,
429
+ "rewards/margins": 0.6922882795333862,
430
+ "rewards/rejected": -0.5881712436676025,
431
+ "step": 250
432
+ },
433
+ {
434
+ "epoch": 0.3449419568822554,
435
+ "grad_norm": 24.75,
436
+ "learning_rate": 4.1427240497150047e-07,
437
+ "logits/chosen": -1.132869839668274,
438
+ "logits/rejected": -1.1097866296768188,
439
+ "logps/chosen": -604.4608154296875,
440
+ "logps/rejected": -561.1189575195312,
441
+ "loss": 0.4112,
442
+ "rewards/accuracies": 0.9437500238418579,
443
+ "rewards/chosen": 0.10400505363941193,
444
+ "rewards/margins": 0.708010196685791,
445
+ "rewards/rejected": -0.6040050983428955,
446
+ "step": 260
447
+ },
448
+ {
449
+ "epoch": 0.3582089552238806,
450
+ "grad_norm": 27.125,
451
+ "learning_rate": 4.053536089022623e-07,
452
+ "logits/chosen": -1.1613821983337402,
453
+ "logits/rejected": -1.195441722869873,
454
+ "logps/chosen": -502.41607666015625,
455
+ "logps/rejected": -479.5301208496094,
456
+ "loss": 0.4202,
457
+ "rewards/accuracies": 0.8999999761581421,
458
+ "rewards/chosen": 0.11941847950220108,
459
+ "rewards/margins": 0.7337124347686768,
460
+ "rewards/rejected": -0.6142939329147339,
461
+ "step": 270
462
+ },
463
+ {
464
+ "epoch": 0.3714759535655058,
465
+ "grad_norm": 27.625,
466
+ "learning_rate": 3.9610033617182715e-07,
467
+ "logits/chosen": -1.1407119035720825,
468
+ "logits/rejected": -1.1306835412979126,
469
+ "logps/chosen": -632.7389526367188,
470
+ "logps/rejected": -621.2080078125,
471
+ "loss": 0.4085,
472
+ "rewards/accuracies": 0.9624999761581421,
473
+ "rewards/chosen": 0.15029093623161316,
474
+ "rewards/margins": 0.8855623006820679,
475
+ "rewards/rejected": -0.7352713346481323,
476
+ "step": 280
477
+ },
478
+ {
479
+ "epoch": 0.38474295190713104,
480
+ "grad_norm": 20.75,
481
+ "learning_rate": 3.865325090967081e-07,
482
+ "logits/chosen": -1.1717865467071533,
483
+ "logits/rejected": -1.1872893571853638,
484
+ "logps/chosen": -561.0062255859375,
485
+ "logps/rejected": -548.43603515625,
486
+ "loss": 0.4078,
487
+ "rewards/accuracies": 0.9125000238418579,
488
+ "rewards/chosen": 0.11881165206432343,
489
+ "rewards/margins": 0.7872866988182068,
490
+ "rewards/rejected": -0.6684750318527222,
491
+ "step": 290
492
+ },
493
+ {
494
+ "epoch": 0.39800995024875624,
495
+ "grad_norm": 23.25,
496
+ "learning_rate": 3.7667072722961357e-07,
497
+ "logits/chosen": -1.1743600368499756,
498
+ "logits/rejected": -1.1946338415145874,
499
+ "logps/chosen": -590.4188232421875,
500
+ "logps/rejected": -515.8514404296875,
501
+ "loss": 0.3934,
502
+ "rewards/accuracies": 0.925000011920929,
503
+ "rewards/chosen": 0.14561176300048828,
504
+ "rewards/margins": 0.7691020369529724,
505
+ "rewards/rejected": -0.6234902739524841,
506
+ "step": 300
507
+ },
508
+ {
509
+ "epoch": 0.39800995024875624,
510
+ "eval_logits/chosen": -1.0754693746566772,
511
+ "eval_logits/rejected": -1.0528287887573242,
512
+ "eval_logps/chosen": -600.71435546875,
513
+ "eval_logps/rejected": -545.3746948242188,
514
+ "eval_loss": 0.39592820405960083,
515
+ "eval_rewards/accuracies": 0.9365671873092651,
516
+ "eval_rewards/chosen": 0.12633956968784332,
517
+ "eval_rewards/margins": 0.8475195169448853,
518
+ "eval_rewards/rejected": -0.7211799621582031,
519
+ "eval_runtime": 694.2423,
520
+ "eval_samples_per_second": 7.719,
521
+ "eval_steps_per_second": 0.483,
522
+ "step": 300
523
+ },
524
+ {
525
+ "epoch": 0.41127694859038144,
526
+ "grad_norm": 22.375,
527
+ "learning_rate": 3.6653622300856457e-07,
528
+ "logits/chosen": -1.154953956604004,
529
+ "logits/rejected": -1.1766210794448853,
530
+ "logps/chosen": -573.0366821289062,
531
+ "logps/rejected": -538.422119140625,
532
+ "loss": 0.3901,
533
+ "rewards/accuracies": 0.925000011920929,
534
+ "rewards/chosen": 0.10485813766717911,
535
+ "rewards/margins": 0.8158906698226929,
536
+ "rewards/rejected": -0.711032509803772,
537
+ "step": 310
538
+ },
539
+ {
540
+ "epoch": 0.42454394693200664,
541
+ "grad_norm": 22.0,
542
+ "learning_rate": 3.5615081604340903e-07,
543
+ "logits/chosen": -1.196800708770752,
544
+ "logits/rejected": -1.248241662979126,
545
+ "logps/chosen": -629.1947631835938,
546
+ "logps/rejected": -599.6900634765625,
547
+ "loss": 0.3898,
548
+ "rewards/accuracies": 0.9624999761581421,
549
+ "rewards/chosen": 0.13255654275417328,
550
+ "rewards/margins": 0.8788650631904602,
551
+ "rewards/rejected": -0.7463085055351257,
552
+ "step": 320
553
+ },
554
+ {
555
+ "epoch": 0.43781094527363185,
556
+ "grad_norm": 22.125,
557
+ "learning_rate": 3.455368661381543e-07,
558
+ "logits/chosen": -1.1678781509399414,
559
+ "logits/rejected": -1.178554892539978,
560
+ "logps/chosen": -493.3885803222656,
561
+ "logps/rejected": -474.68402099609375,
562
+ "loss": 0.3899,
563
+ "rewards/accuracies": 0.925000011920929,
564
+ "rewards/chosen": 0.07870842516422272,
565
+ "rewards/margins": 0.7927115559577942,
566
+ "rewards/rejected": -0.7140030860900879,
567
+ "step": 330
568
+ },
569
+ {
570
+ "epoch": 0.45107794361525705,
571
+ "grad_norm": 22.625,
572
+ "learning_rate": 3.347172251502598e-07,
573
+ "logits/chosen": -1.1612517833709717,
574
+ "logits/rejected": -1.1822433471679688,
575
+ "logps/chosen": -608.53271484375,
576
+ "logps/rejected": -532.9712524414062,
577
+ "loss": 0.3763,
578
+ "rewards/accuracies": 0.9437500238418579,
579
+ "rewards/chosen": 0.12583430111408234,
580
+ "rewards/margins": 0.9212196469306946,
581
+ "rewards/rejected": -0.7953853607177734,
582
+ "step": 340
583
+ },
584
+ {
585
+ "epoch": 0.46434494195688225,
586
+ "grad_norm": 21.25,
587
+ "learning_rate": 3.2371518779053744e-07,
588
+ "logits/chosen": -1.101665735244751,
589
+ "logits/rejected": -1.0791598558425903,
590
+ "logps/chosen": -651.3994750976562,
591
+ "logps/rejected": -633.3742065429688,
592
+ "loss": 0.3678,
593
+ "rewards/accuracies": 0.9312499761581421,
594
+ "rewards/chosen": 0.12759700417518616,
595
+ "rewards/margins": 0.9944127798080444,
596
+ "rewards/rejected": -0.8668158650398254,
597
+ "step": 350
598
+ },
599
+ {
600
+ "epoch": 0.47761194029850745,
601
+ "grad_norm": 24.75,
602
+ "learning_rate": 3.1255444146958844e-07,
603
+ "logits/chosen": -1.1323697566986084,
604
+ "logits/rejected": -1.118276596069336,
605
+ "logps/chosen": -565.9261474609375,
606
+ "logps/rejected": -552.2655639648438,
607
+ "loss": 0.3684,
608
+ "rewards/accuracies": 0.90625,
609
+ "rewards/chosen": 0.06775705516338348,
610
+ "rewards/margins": 0.8904596567153931,
611
+ "rewards/rejected": -0.8227025270462036,
612
+ "step": 360
613
+ },
614
+ {
615
+ "epoch": 0.49087893864013266,
616
+ "grad_norm": 20.875,
617
+ "learning_rate": 3.012590152987561e-07,
618
+ "logits/chosen": -1.1285905838012695,
619
+ "logits/rejected": -1.1064956188201904,
620
+ "logps/chosen": -601.677734375,
621
+ "logps/rejected": -547.2380981445312,
622
+ "loss": 0.3663,
623
+ "rewards/accuracies": 0.9437500238418579,
624
+ "rewards/chosen": 0.12407927215099335,
625
+ "rewards/margins": 0.9496960639953613,
626
+ "rewards/rejected": -0.825616717338562,
627
+ "step": 370
628
+ },
629
+ {
630
+ "epoch": 0.5041459369817579,
631
+ "grad_norm": 21.125,
632
+ "learning_rate": 2.8985322835539626e-07,
633
+ "logits/chosen": -1.0900777578353882,
634
+ "logits/rejected": -1.0679134130477905,
635
+ "logps/chosen": -622.616455078125,
636
+ "logps/rejected": -570.1490478515625,
637
+ "loss": 0.3705,
638
+ "rewards/accuracies": 0.956250011920929,
639
+ "rewards/chosen": 0.15115661919116974,
640
+ "rewards/margins": 0.9384227991104126,
641
+ "rewards/rejected": -0.7872661352157593,
642
+ "step": 380
643
+ },
644
+ {
645
+ "epoch": 0.5174129353233831,
646
+ "grad_norm": 21.375,
647
+ "learning_rate": 2.7836163732385063e-07,
648
+ "logits/chosen": -1.146226167678833,
649
+ "logits/rejected": -1.131203293800354,
650
+ "logps/chosen": -637.7056274414062,
651
+ "logps/rejected": -580.0550537109375,
652
+ "loss": 0.3568,
653
+ "rewards/accuracies": 0.9750000238418579,
654
+ "rewards/chosen": 0.09901280701160431,
655
+ "rewards/margins": 0.9571696519851685,
656
+ "rewards/rejected": -0.8581568598747253,
657
+ "step": 390
658
+ },
659
+ {
660
+ "epoch": 0.5306799336650083,
661
+ "grad_norm": 21.125,
662
+ "learning_rate": 2.6680898362485124e-07,
663
+ "logits/chosen": -1.0712168216705322,
664
+ "logits/rejected": -1.0648881196975708,
665
+ "logps/chosen": -538.2913208007812,
666
+ "logps/rejected": -526.31689453125,
667
+ "loss": 0.3629,
668
+ "rewards/accuracies": 0.9624999761581421,
669
+ "rewards/chosen": 0.10081170499324799,
670
+ "rewards/margins": 0.889560341835022,
671
+ "rewards/rejected": -0.788748562335968,
672
+ "step": 400
673
+ },
674
+ {
675
+ "epoch": 0.5306799336650083,
676
+ "eval_logits/chosen": -1.1154277324676514,
677
+ "eval_logits/rejected": -1.110862135887146,
678
+ "eval_logps/chosen": -600.8080444335938,
679
+ "eval_logps/rejected": -546.7705078125,
680
+ "eval_loss": 0.3673515021800995,
681
+ "eval_rewards/accuracies": 0.9380596876144409,
682
+ "eval_rewards/chosen": 0.11697468906641006,
683
+ "eval_rewards/margins": 0.9777337312698364,
684
+ "eval_rewards/rejected": -0.8607590198516846,
685
+ "eval_runtime": 712.8834,
686
+ "eval_samples_per_second": 7.517,
687
+ "eval_steps_per_second": 0.47,
688
+ "step": 400
689
+ },
690
+ {
691
+ "epoch": 0.5439469320066335,
692
+ "grad_norm": 19.625,
693
+ "learning_rate": 2.5522014014718697e-07,
694
+ "logits/chosen": -1.0688056945800781,
695
+ "logits/rejected": -1.0452687740325928,
696
+ "logps/chosen": -548.5653076171875,
697
+ "logps/rejected": -500.2899475097656,
698
+ "loss": 0.3686,
699
+ "rewards/accuracies": 0.9375,
700
+ "rewards/chosen": 0.09097462147474289,
701
+ "rewards/margins": 0.8869258761405945,
702
+ "rewards/rejected": -0.7959513664245605,
703
+ "step": 410
704
+ },
705
+ {
706
+ "epoch": 0.5572139303482587,
707
+ "grad_norm": 20.75,
708
+ "learning_rate": 2.436200576963198e-07,
709
+ "logits/chosen": -1.1284773349761963,
710
+ "logits/rejected": -1.0750479698181152,
711
+ "logps/chosen": -570.462890625,
712
+ "logps/rejected": -484.501708984375,
713
+ "loss": 0.3638,
714
+ "rewards/accuracies": 0.956250011920929,
715
+ "rewards/chosen": 0.1417674571275711,
716
+ "rewards/margins": 0.9661204218864441,
717
+ "rewards/rejected": -0.8243529200553894,
718
+ "step": 420
719
+ },
720
+ {
721
+ "epoch": 0.5704809286898839,
722
+ "grad_norm": 26.625,
723
+ "learning_rate": 2.3203371127524588e-07,
724
+ "logits/chosen": -1.142064094543457,
725
+ "logits/rejected": -1.0941470861434937,
726
+ "logps/chosen": -540.7872314453125,
727
+ "logps/rejected": -460.7454528808594,
728
+ "loss": 0.3751,
729
+ "rewards/accuracies": 0.90625,
730
+ "rewards/chosen": 0.09556931257247925,
731
+ "rewards/margins": 0.9223111271858215,
732
+ "rewards/rejected": -0.8267418742179871,
733
+ "step": 430
734
+ },
735
+ {
736
+ "epoch": 0.5837479270315091,
737
+ "grad_norm": 21.75,
738
+ "learning_rate": 2.2048604631325892e-07,
739
+ "logits/chosen": -1.0380522012710571,
740
+ "logits/rejected": -1.036592721939087,
741
+ "logps/chosen": -568.5135498046875,
742
+ "logps/rejected": -558.4591064453125,
743
+ "loss": 0.3722,
744
+ "rewards/accuracies": 0.9375,
745
+ "rewards/chosen": 0.05709138512611389,
746
+ "rewards/margins": 0.89503014087677,
747
+ "rewards/rejected": -0.837938666343689,
748
+ "step": 440
749
+ },
750
+ {
751
+ "epoch": 0.5970149253731343,
752
+ "grad_norm": 20.5,
753
+ "learning_rate": 2.0900192495838615e-07,
754
+ "logits/chosen": -1.0975573062896729,
755
+ "logits/rejected": -1.0408273935317993,
756
+ "logps/chosen": -531.0595703125,
757
+ "logps/rejected": -478.84222412109375,
758
+ "loss": 0.3591,
759
+ "rewards/accuracies": 0.925000011920929,
760
+ "rewards/chosen": 0.07921108603477478,
761
+ "rewards/margins": 0.9134254455566406,
762
+ "rewards/rejected": -0.8342143893241882,
763
+ "step": 450
764
+ },
765
+ {
766
+ "epoch": 0.6102819237147595,
767
+ "grad_norm": 18.5,
768
+ "learning_rate": 1.9760607254912926e-07,
769
+ "logits/chosen": -1.0876163244247437,
770
+ "logits/rejected": -1.0495961904525757,
771
+ "logps/chosen": -616.9581298828125,
772
+ "logps/rejected": -538.6895751953125,
773
+ "loss": 0.3562,
774
+ "rewards/accuracies": 0.9437500238418579,
775
+ "rewards/chosen": 0.09119832515716553,
776
+ "rewards/margins": 0.9315482974052429,
777
+ "rewards/rejected": -0.8403499722480774,
778
+ "step": 460
779
+ },
780
+ {
781
+ "epoch": 0.6235489220563848,
782
+ "grad_norm": 20.75,
783
+ "learning_rate": 1.8632302438075613e-07,
784
+ "logits/chosen": -1.1088566780090332,
785
+ "logits/rejected": -1.1191766262054443,
786
+ "logps/chosen": -589.3919067382812,
787
+ "logps/rejected": -567.9808349609375,
788
+ "loss": 0.3629,
789
+ "rewards/accuracies": 0.956250011920929,
790
+ "rewards/chosen": 0.1344141662120819,
791
+ "rewards/margins": 1.0434377193450928,
792
+ "rewards/rejected": -0.9090234637260437,
793
+ "step": 470
794
+ },
795
+ {
796
+ "epoch": 0.6368159203980099,
797
+ "grad_norm": 18.625,
798
+ "learning_rate": 1.7517707288075614e-07,
799
+ "logits/chosen": -1.106209635734558,
800
+ "logits/rejected": -1.1109434366226196,
801
+ "logps/chosen": -565.6036987304688,
802
+ "logps/rejected": -519.3636474609375,
803
+ "loss": 0.3464,
804
+ "rewards/accuracies": 0.9375,
805
+ "rewards/chosen": 0.09805373102426529,
806
+ "rewards/margins": 0.9947841763496399,
807
+ "rewards/rejected": -0.8967304229736328,
808
+ "step": 480
809
+ },
810
+ {
811
+ "epoch": 0.6500829187396352,
812
+ "grad_norm": 20.5,
813
+ "learning_rate": 1.641922153071906e-07,
814
+ "logits/chosen": -1.0548484325408936,
815
+ "logits/rejected": -1.0250844955444336,
816
+ "logps/chosen": -573.2247924804688,
817
+ "logps/rejected": -566.44970703125,
818
+ "loss": 0.365,
819
+ "rewards/accuracies": 0.956250011920929,
820
+ "rewards/chosen": 0.09521574527025223,
821
+ "rewards/margins": 1.005110740661621,
822
+ "rewards/rejected": -0.9098949432373047,
823
+ "step": 490
824
+ },
825
+ {
826
+ "epoch": 0.6633499170812603,
827
+ "grad_norm": 19.125,
828
+ "learning_rate": 1.5339210208254344e-07,
829
+ "logits/chosen": -1.060248613357544,
830
+ "logits/rejected": -1.0458314418792725,
831
+ "logps/chosen": -541.8770751953125,
832
+ "logps/rejected": -513.5958251953125,
833
+ "loss": 0.3556,
834
+ "rewards/accuracies": 0.925000011920929,
835
+ "rewards/chosen": 0.11755643784999847,
836
+ "rewards/margins": 0.9871570467948914,
837
+ "rewards/rejected": -0.8696004748344421,
838
+ "step": 500
839
+ },
840
+ {
841
+ "epoch": 0.6633499170812603,
842
+ "eval_logits/chosen": -0.9289145469665527,
843
+ "eval_logits/rejected": -0.8265557885169983,
844
+ "eval_logps/chosen": -600.8418579101562,
845
+ "eval_logps/rejected": -547.3089599609375,
846
+ "eval_loss": 0.3561394512653351,
847
+ "eval_rewards/accuracies": 0.9388059973716736,
848
+ "eval_rewards/chosen": 0.11359576135873795,
849
+ "eval_rewards/margins": 1.0282028913497925,
850
+ "eval_rewards/rejected": -0.9146071672439575,
851
+ "eval_runtime": 731.2224,
852
+ "eval_samples_per_second": 7.329,
853
+ "eval_steps_per_second": 0.458,
854
+ "step": 500
855
+ },
856
+ {
857
+ "epoch": 0.6766169154228856,
858
+ "grad_norm": 19.75,
859
+ "learning_rate": 1.4279998587430943e-07,
860
+ "logits/chosen": -1.0720138549804688,
861
+ "logits/rejected": -1.0440585613250732,
862
+ "logps/chosen": -576.8155517578125,
863
+ "logps/rejected": -474.7138671875,
864
+ "loss": 0.3397,
865
+ "rewards/accuracies": 0.9312499761581421,
866
+ "rewards/chosen": 0.12137231975793839,
867
+ "rewards/margins": 1.0447041988372803,
868
+ "rewards/rejected": -0.9233318567276001,
869
+ "step": 510
870
+ },
871
+ {
872
+ "epoch": 0.6898839137645107,
873
+ "grad_norm": 19.375,
874
+ "learning_rate": 1.324386715319503e-07,
875
+ "logits/chosen": -1.0745595693588257,
876
+ "logits/rejected": -1.0517549514770508,
877
+ "logps/chosen": -547.5264282226562,
878
+ "logps/rejected": -513.98974609375,
879
+ "loss": 0.3567,
880
+ "rewards/accuracies": 0.949999988079071,
881
+ "rewards/chosen": 0.12890011072158813,
882
+ "rewards/margins": 0.9725991487503052,
883
+ "rewards/rejected": -0.8436989784240723,
884
+ "step": 520
885
+ },
886
+ {
887
+ "epoch": 0.703150912106136,
888
+ "grad_norm": 20.125,
889
+ "learning_rate": 1.2233046698800343e-07,
890
+ "logits/chosen": -1.0820659399032593,
891
+ "logits/rejected": -1.0528825521469116,
892
+ "logps/chosen": -623.875,
893
+ "logps/rejected": -623.9129638671875,
894
+ "loss": 0.3541,
895
+ "rewards/accuracies": 0.9312499761581421,
896
+ "rewards/chosen": 0.141106516122818,
897
+ "rewards/margins": 1.074299931526184,
898
+ "rewards/rejected": -0.9331933856010437,
899
+ "step": 530
900
+ },
901
+ {
902
+ "epoch": 0.7164179104477612,
903
+ "grad_norm": 20.625,
904
+ "learning_rate": 1.124971352290545e-07,
905
+ "logits/chosen": -1.108722448348999,
906
+ "logits/rejected": -1.0873199701309204,
907
+ "logps/chosen": -593.7824096679688,
908
+ "logps/rejected": -559.232421875,
909
+ "loss": 0.337,
910
+ "rewards/accuracies": 0.9437500238418579,
911
+ "rewards/chosen": 0.1328240931034088,
912
+ "rewards/margins": 1.019162654876709,
913
+ "rewards/rejected": -0.8863385915756226,
914
+ "step": 540
915
+ },
916
+ {
917
+ "epoch": 0.7296849087893864,
918
+ "grad_norm": 21.75,
919
+ "learning_rate": 1.0295984743997909e-07,
920
+ "logits/chosen": -1.085311770439148,
921
+ "logits/rejected": -1.0750799179077148,
922
+ "logps/chosen": -602.3040161132812,
923
+ "logps/rejected": -558.0685424804688,
924
+ "loss": 0.3544,
925
+ "rewards/accuracies": 0.9375,
926
+ "rewards/chosen": 0.08248591423034668,
927
+ "rewards/margins": 1.015809178352356,
928
+ "rewards/rejected": -0.9333232641220093,
929
+ "step": 550
930
+ },
931
+ {
932
+ "epoch": 0.7429519071310116,
933
+ "grad_norm": 20.375,
934
+ "learning_rate": 9.37391374223355e-08,
935
+ "logits/chosen": -1.1596343517303467,
936
+ "logits/rejected": -1.1693814992904663,
937
+ "logps/chosen": -583.8175659179688,
938
+ "logps/rejected": -574.1356811523438,
939
+ "loss": 0.3416,
940
+ "rewards/accuracies": 0.9375,
941
+ "rewards/chosen": 0.1034403070807457,
942
+ "rewards/margins": 1.0356991291046143,
943
+ "rewards/rejected": -0.9322587251663208,
944
+ "step": 560
945
+ },
946
+ {
947
+ "epoch": 0.7562189054726368,
948
+ "grad_norm": 17.875,
949
+ "learning_rate": 8.485485738504488e-08,
950
+ "logits/chosen": -1.1387842893600464,
951
+ "logits/rejected": -1.108246922492981,
952
+ "logps/chosen": -627.6776123046875,
953
+ "logps/rejected": -469.2587890625,
954
+ "loss": 0.3468,
955
+ "rewards/accuracies": 0.956250011920929,
956
+ "rewards/chosen": 0.1544940024614334,
957
+ "rewards/margins": 1.0551023483276367,
958
+ "rewards/rejected": -0.900608241558075,
959
+ "step": 570
960
+ },
961
+ {
962
+ "epoch": 0.7694859038142621,
963
+ "grad_norm": 19.25,
964
+ "learning_rate": 7.632613520254158e-08,
965
+ "logits/chosen": -1.0649652481079102,
966
+ "logits/rejected": -1.064888834953308,
967
+ "logps/chosen": -601.84033203125,
968
+ "logps/rejected": -529.4444580078125,
969
+ "loss": 0.359,
970
+ "rewards/accuracies": 0.9312499761581421,
971
+ "rewards/chosen": 0.10679004341363907,
972
+ "rewards/margins": 1.0746941566467285,
973
+ "rewards/rejected": -0.9679039716720581,
974
+ "step": 580
975
+ },
976
+ {
977
+ "epoch": 0.7827529021558872,
978
+ "grad_norm": 20.75,
979
+ "learning_rate": 6.817133323241755e-08,
980
+ "logits/chosen": -1.1302725076675415,
981
+ "logits/rejected": -1.1106232404708862,
982
+ "logps/chosen": -675.71533203125,
983
+ "logps/rejected": -525.1083984375,
984
+ "loss": 0.3478,
985
+ "rewards/accuracies": 0.918749988079071,
986
+ "rewards/chosen": 0.09434429556131363,
987
+ "rewards/margins": 1.0678186416625977,
988
+ "rewards/rejected": -0.9734743237495422,
989
+ "step": 590
990
+ },
991
+ {
992
+ "epoch": 0.7960199004975125,
993
+ "grad_norm": 22.25,
994
+ "learning_rate": 6.040800878122654e-08,
995
+ "logits/chosen": -1.1192970275878906,
996
+ "logits/rejected": -1.1343142986297607,
997
+ "logps/chosen": -567.078125,
998
+ "logps/rejected": -525.439697265625,
999
+ "loss": 0.3488,
1000
+ "rewards/accuracies": 0.9375,
1001
+ "rewards/chosen": 0.09638460725545883,
1002
+ "rewards/margins": 0.9892334938049316,
1003
+ "rewards/rejected": -0.8928488492965698,
1004
+ "step": 600
1005
+ },
1006
+ {
1007
+ "epoch": 0.7960199004975125,
1008
+ "eval_logits/chosen": -1.0876879692077637,
1009
+ "eval_logits/rejected": -1.067589521408081,
1010
+ "eval_logps/chosen": -600.8737182617188,
1011
+ "eval_logps/rejected": -547.4734497070312,
1012
+ "eval_loss": 0.35399559140205383,
1013
+ "eval_rewards/accuracies": 0.9410447478294373,
1014
+ "eval_rewards/chosen": 0.11041063815355301,
1015
+ "eval_rewards/margins": 1.0414601564407349,
1016
+ "eval_rewards/rejected": -0.9310495853424072,
1017
+ "eval_runtime": 680.93,
1018
+ "eval_samples_per_second": 7.87,
1019
+ "eval_steps_per_second": 0.492,
1020
+ "step": 600
1021
+ },
1022
+ {
1023
+ "epoch": 0.8092868988391376,
1024
+ "grad_norm": 20.625,
1025
+ "learning_rate": 5.305287630356362e-08,
1026
+ "logits/chosen": -1.1514161825180054,
1027
+ "logits/rejected": -1.1489306688308716,
1028
+ "logps/chosen": -603.4745483398438,
1029
+ "logps/rejected": -562.2152709960938,
1030
+ "loss": 0.3556,
1031
+ "rewards/accuracies": 0.9375,
1032
+ "rewards/chosen": 0.08816297352313995,
1033
+ "rewards/margins": 1.0151373147964478,
1034
+ "rewards/rejected": -0.9269744157791138,
1035
+ "step": 610
1036
+ },
1037
+ {
1038
+ "epoch": 0.8225538971807629,
1039
+ "grad_norm": 23.375,
1040
+ "learning_rate": 4.612177141580875e-08,
1041
+ "logits/chosen": -1.0614503622055054,
1042
+ "logits/rejected": -1.0462639331817627,
1043
+ "logps/chosen": -600.1735229492188,
1044
+ "logps/rejected": -563.9249877929688,
1045
+ "loss": 0.3555,
1046
+ "rewards/accuracies": 0.925000011920929,
1047
+ "rewards/chosen": 0.1291816681623459,
1048
+ "rewards/margins": 1.0133174657821655,
1049
+ "rewards/rejected": -0.8841358423233032,
1050
+ "step": 620
1051
+ },
1052
+ {
1053
+ "epoch": 0.835820895522388,
1054
+ "grad_norm": 19.25,
1055
+ "learning_rate": 3.962961680200927e-08,
1056
+ "logits/chosen": -1.154007911682129,
1057
+ "logits/rejected": -1.1668691635131836,
1058
+ "logps/chosen": -587.5554809570312,
1059
+ "logps/rejected": -579.425537109375,
1060
+ "loss": 0.3591,
1061
+ "rewards/accuracies": 0.9312499761581421,
1062
+ "rewards/chosen": 0.1071285754442215,
1063
+ "rewards/margins": 1.01383376121521,
1064
+ "rewards/rejected": -0.9067050814628601,
1065
+ "step": 630
1066
+ },
1067
+ {
1068
+ "epoch": 0.8490878938640133,
1069
+ "grad_norm": 19.625,
1070
+ "learning_rate": 3.359039008530845e-08,
1071
+ "logits/chosen": -1.1280542612075806,
1072
+ "logits/rejected": -1.1074917316436768,
1073
+ "logps/chosen": -638.248779296875,
1074
+ "logps/rejected": -570.1997680664062,
1075
+ "loss": 0.3504,
1076
+ "rewards/accuracies": 0.925000011920929,
1077
+ "rewards/chosen": 0.10942580550909042,
1078
+ "rewards/margins": 1.0872418880462646,
1079
+ "rewards/rejected": -0.9778162240982056,
1080
+ "step": 640
1081
+ },
1082
+ {
1083
+ "epoch": 0.8623548922056384,
1084
+ "grad_norm": 21.125,
1085
+ "learning_rate": 2.8017093734092474e-08,
1086
+ "logits/chosen": -1.0559157133102417,
1087
+ "logits/rejected": -0.9880287051200867,
1088
+ "logps/chosen": -617.8060302734375,
1089
+ "logps/rejected": -542.3763427734375,
1090
+ "loss": 0.3563,
1091
+ "rewards/accuracies": 0.949999988079071,
1092
+ "rewards/chosen": 0.06925593316555023,
1093
+ "rewards/margins": 1.0108643770217896,
1094
+ "rewards/rejected": -0.9416083097457886,
1095
+ "step": 650
1096
+ },
1097
+ {
1098
+ "epoch": 0.8756218905472637,
1099
+ "grad_norm": 20.25,
1100
+ "learning_rate": 2.292172706764703e-08,
1101
+ "logits/chosen": -1.0475237369537354,
1102
+ "logits/rejected": -1.0124943256378174,
1103
+ "logps/chosen": -625.7453002929688,
1104
+ "logps/rejected": -627.2447509765625,
1105
+ "loss": 0.3567,
1106
+ "rewards/accuracies": 0.9375,
1107
+ "rewards/chosen": 0.11004464328289032,
1108
+ "rewards/margins": 1.0592918395996094,
1109
+ "rewards/rejected": -0.9492471814155579,
1110
+ "step": 660
1111
+ },
1112
+ {
1113
+ "epoch": 0.8888888888888888,
1114
+ "grad_norm": 19.875,
1115
+ "learning_rate": 1.8315260421596924e-08,
1116
+ "logits/chosen": -1.16936457157135,
1117
+ "logits/rejected": -1.1426036357879639,
1118
+ "logps/chosen": -555.6038818359375,
1119
+ "logps/rejected": -494.2486267089844,
1120
+ "loss": 0.3509,
1121
+ "rewards/accuracies": 0.9437500238418579,
1122
+ "rewards/chosen": 0.12268821895122528,
1123
+ "rewards/margins": 0.9934048652648926,
1124
+ "rewards/rejected": -0.8707167506217957,
1125
+ "step": 670
1126
+ },
1127
+ {
1128
+ "epoch": 0.9021558872305141,
1129
+ "grad_norm": 20.125,
1130
+ "learning_rate": 1.4207611528748997e-08,
1131
+ "logits/chosen": -1.122236967086792,
1132
+ "logits/rejected": -1.0928575992584229,
1133
+ "logps/chosen": -568.34765625,
1134
+ "logps/rejected": -548.4368896484375,
1135
+ "loss": 0.3548,
1136
+ "rewards/accuracies": 0.9437500238418579,
1137
+ "rewards/chosen": 0.10156118869781494,
1138
+ "rewards/margins": 1.01285719871521,
1139
+ "rewards/rejected": -0.9112960696220398,
1140
+ "step": 680
1141
+ },
1142
+ {
1143
+ "epoch": 0.9154228855721394,
1144
+ "grad_norm": 20.875,
1145
+ "learning_rate": 1.0607624166191958e-08,
1146
+ "logits/chosen": -1.102480411529541,
1147
+ "logits/rejected": -1.097570776939392,
1148
+ "logps/chosen": -670.6092529296875,
1149
+ "logps/rejected": -724.0338134765625,
1150
+ "loss": 0.3508,
1151
+ "rewards/accuracies": 0.9437500238418579,
1152
+ "rewards/chosen": 0.122245192527771,
1153
+ "rewards/margins": 1.0727375745773315,
1154
+ "rewards/rejected": -0.9504923820495605,
1155
+ "step": 690
1156
+ },
1157
+ {
1158
+ "epoch": 0.9286898839137645,
1159
+ "grad_norm": 20.75,
1160
+ "learning_rate": 7.523049114624647e-09,
1161
+ "logits/chosen": -1.067058801651001,
1162
+ "logits/rejected": -1.0042006969451904,
1163
+ "logps/chosen": -610.7342529296875,
1164
+ "logps/rejected": -569.3170776367188,
1165
+ "loss": 0.3563,
1166
+ "rewards/accuracies": 0.9624999761581421,
1167
+ "rewards/chosen": 0.13328425586223602,
1168
+ "rewards/margins": 1.0865224599838257,
1169
+ "rewards/rejected": -0.9532381296157837,
1170
+ "step": 700
1171
+ },
1172
+ {
1173
+ "epoch": 0.9286898839137645,
1174
+ "eval_logits/chosen": -0.9600119590759277,
1175
+ "eval_logits/rejected": -0.8735809922218323,
1176
+ "eval_logps/chosen": -600.8121337890625,
1177
+ "eval_logps/rejected": -547.42236328125,
1178
+ "eval_loss": 0.3540438711643219,
1179
+ "eval_rewards/accuracies": 0.9395522475242615,
1180
+ "eval_rewards/chosen": 0.11656844615936279,
1181
+ "eval_rewards/margins": 1.0425076484680176,
1182
+ "eval_rewards/rejected": -0.92593914270401,
1183
+ "eval_runtime": 733.9957,
1184
+ "eval_samples_per_second": 7.301,
1185
+ "eval_steps_per_second": 0.456,
1186
+ "step": 700
1187
+ },
1188
+ {
1189
+ "epoch": 0.9419568822553898,
1190
+ "grad_norm": 20.25,
1191
+ "learning_rate": 4.960527470908277e-09,
1192
+ "logits/chosen": -0.9644180536270142,
1193
+ "logits/rejected": -0.860200047492981,
1194
+ "logps/chosen": -622.1219482421875,
1195
+ "logps/rejected": -567.1380615234375,
1196
+ "loss": 0.3555,
1197
+ "rewards/accuracies": 0.956250011920929,
1198
+ "rewards/chosen": 0.11110372841358185,
1199
+ "rewards/margins": 1.0618221759796143,
1200
+ "rewards/rejected": -0.9507185220718384,
1201
+ "step": 710
1202
+ },
1203
+ {
1204
+ "epoch": 0.9552238805970149,
1205
+ "grad_norm": 19.75,
1206
+ "learning_rate": 2.925576349770337e-09,
1207
+ "logits/chosen": -0.9986553192138672,
1208
+ "logits/rejected": -0.8984715342521667,
1209
+ "logps/chosen": -605.7318725585938,
1210
+ "logps/rejected": -542.7632446289062,
1211
+ "loss": 0.359,
1212
+ "rewards/accuracies": 0.956250011920929,
1213
+ "rewards/chosen": 0.10133838653564453,
1214
+ "rewards/margins": 0.9883183240890503,
1215
+ "rewards/rejected": -0.886979877948761,
1216
+ "step": 720
1217
+ },
1218
+ {
1219
+ "epoch": 0.9684908789386402,
1220
+ "grad_norm": 21.25,
1221
+ "learning_rate": 1.4225770054443197e-09,
1222
+ "logits/chosen": -0.9282068014144897,
1223
+ "logits/rejected": -0.8550642132759094,
1224
+ "logps/chosen": -571.7738037109375,
1225
+ "logps/rejected": -500.0634765625,
1226
+ "loss": 0.3571,
1227
+ "rewards/accuracies": 0.925000011920929,
1228
+ "rewards/chosen": 0.10189428180456161,
1229
+ "rewards/margins": 1.0126664638519287,
1230
+ "rewards/rejected": -0.9107722043991089,
1231
+ "step": 730
1232
+ },
1233
+ {
1234
+ "epoch": 0.9817578772802653,
1235
+ "grad_norm": 19.625,
1236
+ "learning_rate": 4.547653988198619e-10,
1237
+ "logits/chosen": -0.9236332774162292,
1238
+ "logits/rejected": -0.8542205095291138,
1239
+ "logps/chosen": -632.85546875,
1240
+ "logps/rejected": -597.6421508789062,
1241
+ "loss": 0.3493,
1242
+ "rewards/accuracies": 0.949999988079071,
1243
+ "rewards/chosen": 0.147782564163208,
1244
+ "rewards/margins": 1.067484736442566,
1245
+ "rewards/rejected": -0.9197022318840027,
1246
+ "step": 740
1247
+ },
1248
+ {
1249
+ "epoch": 0.9950248756218906,
1250
+ "grad_norm": 20.5,
1251
+ "learning_rate": 2.4225230411789588e-11,
1252
+ "logits/chosen": -0.9963258504867554,
1253
+ "logits/rejected": -0.8823927044868469,
1254
+ "logps/chosen": -617.5396728515625,
1255
+ "logps/rejected": -596.9856567382812,
1256
+ "loss": 0.36,
1257
+ "rewards/accuracies": 0.90625,
1258
+ "rewards/chosen": 0.14549708366394043,
1259
+ "rewards/margins": 1.0541255474090576,
1260
+ "rewards/rejected": -0.9086285829544067,
1261
+ "step": 750
1262
+ },
1263
+ {
1264
+ "epoch": 0.9990049751243781,
1265
+ "step": 753,
1266
+ "total_flos": 0.0,
1267
+ "train_loss": 0.4291752041731856,
1268
+ "train_runtime": 22849.6211,
1269
+ "train_samples_per_second": 2.111,
1270
+ "train_steps_per_second": 0.033
1271
+ }
1272
+ ],
1273
+ "logging_steps": 10,
1274
+ "max_steps": 753,
1275
+ "num_input_tokens_seen": 0,
1276
+ "num_train_epochs": 1,
1277
+ "save_steps": 100,
1278
+ "total_flos": 0.0,
1279
+ "train_batch_size": 4,
1280
+ "trial_name": null,
1281
+ "trial_params": null
1282
+ }
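The `log_history` array above interleaves training entries (keyed by `loss`, `grad_norm`, `learning_rate`, ...) logged every 10 steps with evaluation entries (keyed by `eval_loss`, `eval_rewards/*`, ...) logged every 100 steps. A minimal sketch for extracting the evaluation curve from a local copy of `trainer_state.json` (the file path is an assumption):

```python
# Illustrative: pull the evaluation metrics logged every 100 steps out of trainer_state.json.
import json

with open("trainer_state.json") as f:  # assumed local path
    state = json.load(f)

eval_rows = [entry for entry in state["log_history"] if "eval_loss" in entry]
for row in eval_rows:
    print(
        f"step {row['step']:>4}  "
        f"eval_loss {row['eval_loss']:.4f}  "
        f"rewards/accuracies {row['eval_rewards/accuracies']:.4f}  "
        f"rewards/margins {row['eval_rewards/margins']:.4f}"
    )
```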