taicheng committed on
Commit 121e1c2
1 Parent(s): 50520bf

Model save

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-7b-align-scan-9e-07-0.86-linear-2.0
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-align-scan-9e-07-0.86-linear-2.0
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2247
+ - Rewards/chosen: 0.1937
+ - Rewards/rejected: -1.8484
+ - Rewards/accuracies: 0.3214
+ - Rewards/margins: 2.0420
+ - Logps/rejected: -83.2776
+ - Logps/chosen: -74.2660
+ - Logits/rejected: -2.5909
+ - Logits/chosen: -2.6070
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 9e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.9734 | 0.3484 | 100 | 0.9203 | 2.0652 | 1.3351 | 0.3294 | 0.7302 | -79.5760 | -72.0898 | -2.5695 | -2.5853 |
+ | 0.9883 | 0.6969 | 200 | 1.0967 | 2.4271 | 1.1885 | 0.3373 | 1.2386 | -79.7464 | -71.6691 | -2.5708 | -2.5875 |
+ | 0.4215 | 1.0453 | 300 | 1.1234 | 3.0876 | 1.7905 | 0.3313 | 1.2970 | -79.0463 | -70.9010 | -2.6403 | -2.6560 |
+ | 0.393 | 1.3937 | 400 | 1.2234 | -0.1343 | -1.8934 | 0.3234 | 1.7591 | -83.3299 | -74.6474 | -2.6093 | -2.6250 |
+ | 0.3986 | 1.7422 | 500 | 1.2247 | 0.1937 | -1.8484 | 0.3214 | 2.0420 | -83.2776 | -74.2660 | -2.5909 | -2.6070 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.0
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
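The hyperparameters above map onto TRL's `DPOConfig` roughly as in the sketch below. This is a minimal sketch rather than the actual training script: the DPO `beta` is not stated in the card (the value here is only guessed from the run name), the preference dataset is not identified, and field names can differ slightly across TRL releases.

```python
# Hedged sketch of a DPOConfig mirroring the hyperparameter list above (requires `pip install trl`).
from trl import DPOConfig

config = DPOConfig(
    output_dir="zephyr-7b-align-scan-9e-07-0.86-linear-2.0",
    learning_rate=9e-07,
    per_device_train_batch_size=8,   # 8 per device x 4 GPUs x 2 accumulation steps = 64 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    beta=0.86,        # assumption: inferred from the run name, not stated in the card
    logging_steps=10,
    eval_steps=100,
    save_steps=100,
)
```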
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.6794358088995106,
+ "train_runtime": 6413.3344,
+ "train_samples": 18340,
+ "train_samples_per_second": 5.719,
+ "train_steps_per_second": 0.09
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.44.2"
+ }
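generation_config.json only pins the BOS/EOS token ids; no sampling parameters are overridden. Below is a minimal usage sketch, assuming the weights are published under the repo id shown (an assumption based on the model name) and that the tokenizer ships the Zephyr chat template inherited from the SFT base model.

```python
# Hedged inference sketch; repo_id is an assumption, adjust to the actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "taicheng/zephyr-7b-align-scan-9e-07-0.86-linear-2.0"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs accelerate
)

messages = [{"role": "user", "content": "Explain direct preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# bos_token_id=1 / eos_token_id=2 are picked up from the generation_config.json above.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```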
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.0,
+ "total_flos": 0.0,
+ "train_loss": 0.6794358088995106,
+ "train_runtime": 6413.3344,
+ "train_samples": 18340,
+ "train_samples_per_second": 5.719,
+ "train_steps_per_second": 0.09
+ }
trainer_state.json ADDED
@@ -0,0 +1,992 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 2.0,
+ "eval_steps": 100,
+ "global_step": 574,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.003484320557491289,
+ "grad_norm": 497.3575626103625,
+ "learning_rate": 1.5517241379310344e-08,
+ "logits/chosen": -2.5345611572265625,
+ "logits/rejected": -2.581700563430786,
+ "logps/chosen": -60.002105712890625,
+ "logps/rejected": -99.98374938964844,
+ "loss": 0.6931,
+ "rewards/accuracies": 0.0,
+ "rewards/chosen": 0.0,
+ "rewards/margins": 0.0,
+ "rewards/rejected": 0.0,
+ "step": 1
+ },
+ {
+ "epoch": 0.03484320557491289,
+ "grad_norm": 462.4792154122565,
+ "learning_rate": 1.5517241379310344e-07,
+ "logits/chosen": -2.563889503479004,
+ "logits/rejected": -2.562492609024048,
+ "logps/chosen": -59.64662551879883,
+ "logps/rejected": -73.37532043457031,
+ "loss": 0.6974,
+ "rewards/accuracies": 0.2083333283662796,
+ "rewards/chosen": 0.011204251088202,
+ "rewards/margins": 0.00639230664819479,
+ "rewards/rejected": 0.004811946302652359,
+ "step": 10
+ },
+ {
+ "epoch": 0.06968641114982578,
+ "grad_norm": 537.1826669031543,
+ "learning_rate": 3.103448275862069e-07,
+ "logits/chosen": -2.6071362495422363,
+ "logits/rejected": -2.565964698791504,
+ "logps/chosen": -103.9787826538086,
+ "logps/rejected": -94.84336853027344,
+ "loss": 0.6775,
+ "rewards/accuracies": 0.3375000059604645,
+ "rewards/chosen": 0.12751302123069763,
+ "rewards/margins": 0.08386950194835663,
+ "rewards/rejected": 0.0436435230076313,
+ "step": 20
+ },
+ {
+ "epoch": 0.10452961672473868,
+ "grad_norm": 586.0340590697643,
+ "learning_rate": 4.6551724137931035e-07,
+ "logits/chosen": -2.5952959060668945,
+ "logits/rejected": -2.5755515098571777,
+ "logps/chosen": -81.9345932006836,
+ "logps/rejected": -91.35636901855469,
+ "loss": 0.6673,
+ "rewards/accuracies": 0.33125001192092896,
+ "rewards/chosen": 0.48329487442970276,
+ "rewards/margins": 0.3252241015434265,
+ "rewards/rejected": 0.15807083249092102,
+ "step": 30
+ },
+ {
+ "epoch": 0.13937282229965156,
+ "grad_norm": 508.2061569040689,
+ "learning_rate": 6.206896551724138e-07,
+ "logits/chosen": -2.495903491973877,
+ "logits/rejected": -2.4941375255584717,
+ "logps/chosen": -78.31864929199219,
+ "logps/rejected": -74.04932403564453,
+ "loss": 0.6704,
+ "rewards/accuracies": 0.3187499940395355,
+ "rewards/chosen": -0.3442053496837616,
+ "rewards/margins": 0.5570718050003052,
+ "rewards/rejected": -0.9012771844863892,
+ "step": 40
+ },
+ {
+ "epoch": 0.17421602787456447,
+ "grad_norm": 371.6317110957719,
+ "learning_rate": 7.758620689655172e-07,
+ "logits/chosen": -2.522507667541504,
+ "logits/rejected": -2.5270426273345947,
+ "logps/chosen": -63.30371856689453,
+ "logps/rejected": -75.92526245117188,
+ "loss": 0.7487,
+ "rewards/accuracies": 0.30000001192092896,
+ "rewards/chosen": 0.5464689135551453,
+ "rewards/margins": 0.3932306468486786,
+ "rewards/rejected": 0.15323826670646667,
+ "step": 50
+ },
+ {
+ "epoch": 0.20905923344947736,
+ "grad_norm": 345.88352506142644,
+ "learning_rate": 8.965116279069767e-07,
+ "logits/chosen": -2.493328094482422,
+ "logits/rejected": -2.4874939918518066,
+ "logps/chosen": -70.6145248413086,
+ "logps/rejected": -66.24369049072266,
+ "loss": 0.7394,
+ "rewards/accuracies": 0.3125,
+ "rewards/chosen": 2.2291111946105957,
+ "rewards/margins": 0.3673287332057953,
+ "rewards/rejected": 1.8617826700210571,
+ "step": 60
+ },
+ {
+ "epoch": 0.24390243902439024,
+ "grad_norm": 461.9022420517148,
+ "learning_rate": 8.790697674418603e-07,
+ "logits/chosen": -2.5233232975006104,
+ "logits/rejected": -2.518942356109619,
+ "logps/chosen": -61.38080978393555,
+ "logps/rejected": -66.21412658691406,
+ "loss": 0.8029,
+ "rewards/accuracies": 0.2874999940395355,
+ "rewards/chosen": 2.4556429386138916,
+ "rewards/margins": 0.5528993606567383,
+ "rewards/rejected": 1.9027433395385742,
+ "step": 70
+ },
+ {
+ "epoch": 0.2787456445993031,
+ "grad_norm": 445.0623855333619,
+ "learning_rate": 8.616279069767441e-07,
+ "logits/chosen": -2.464928388595581,
+ "logits/rejected": -2.4558563232421875,
+ "logps/chosen": -72.87749481201172,
+ "logps/rejected": -75.74545288085938,
+ "loss": 0.8699,
+ "rewards/accuracies": 0.3062500059604645,
+ "rewards/chosen": 2.397139310836792,
+ "rewards/margins": 0.8753674626350403,
+ "rewards/rejected": 1.521772027015686,
+ "step": 80
+ },
+ {
+ "epoch": 0.313588850174216,
+ "grad_norm": 523.0685685145581,
+ "learning_rate": 8.441860465116279e-07,
+ "logits/chosen": -2.514057159423828,
+ "logits/rejected": -2.529362201690674,
+ "logps/chosen": -63.073158264160156,
+ "logps/rejected": -67.64947509765625,
+ "loss": 0.9667,
+ "rewards/accuracies": 0.2874999940395355,
+ "rewards/chosen": 2.5018441677093506,
+ "rewards/margins": 0.38231611251831055,
+ "rewards/rejected": 2.11952805519104,
+ "step": 90
+ },
+ {
+ "epoch": 0.34843205574912894,
+ "grad_norm": 538.3596051073214,
+ "learning_rate": 8.267441860465116e-07,
+ "logits/chosen": -2.5051705837249756,
+ "logits/rejected": -2.5038981437683105,
+ "logps/chosen": -72.73633575439453,
+ "logps/rejected": -79.80201721191406,
+ "loss": 0.9734,
+ "rewards/accuracies": 0.3125,
+ "rewards/chosen": 2.179957389831543,
+ "rewards/margins": 1.1052770614624023,
+ "rewards/rejected": 1.0746803283691406,
+ "step": 100
+ },
+ {
+ "epoch": 0.34843205574912894,
+ "eval_logits/chosen": -2.5853304862976074,
+ "eval_logits/rejected": -2.569460868835449,
+ "eval_logps/chosen": -72.08978271484375,
+ "eval_logps/rejected": -79.57595825195312,
+ "eval_loss": 0.9202933311462402,
+ "eval_rewards/accuracies": 0.329365074634552,
+ "eval_rewards/chosen": 2.065248489379883,
+ "eval_rewards/margins": 0.730187714099884,
+ "eval_rewards/rejected": 1.3350608348846436,
+ "eval_runtime": 113.407,
+ "eval_samples_per_second": 17.636,
+ "eval_steps_per_second": 0.556,
+ "step": 100
+ },
+ {
+ "epoch": 0.3832752613240418,
+ "grad_norm": 565.1538822918781,
+ "learning_rate": 8.093023255813954e-07,
+ "logits/chosen": -2.517366409301758,
+ "logits/rejected": -2.486677646636963,
+ "logps/chosen": -71.63627624511719,
+ "logps/rejected": -61.792930603027344,
+ "loss": 0.9796,
+ "rewards/accuracies": 0.2562499940395355,
+ "rewards/chosen": 1.3190182447433472,
+ "rewards/margins": 0.27183371782302856,
+ "rewards/rejected": 1.0471845865249634,
+ "step": 110
+ },
+ {
+ "epoch": 0.4181184668989547,
+ "grad_norm": 415.3172615536407,
+ "learning_rate": 7.91860465116279e-07,
+ "logits/chosen": -2.5475993156433105,
+ "logits/rejected": -2.518019437789917,
+ "logps/chosen": -75.4908676147461,
+ "logps/rejected": -65.90876770019531,
+ "loss": 0.9023,
+ "rewards/accuracies": 0.30000001192092896,
+ "rewards/chosen": 1.6948095560073853,
+ "rewards/margins": 1.0387893915176392,
+ "rewards/rejected": 0.6560201644897461,
+ "step": 120
+ },
+ {
+ "epoch": 0.4529616724738676,
+ "grad_norm": 654.650382698047,
+ "learning_rate": 7.744186046511627e-07,
+ "logits/chosen": -2.57270884513855,
+ "logits/rejected": -2.5545814037323,
+ "logps/chosen": -82.57270812988281,
+ "logps/rejected": -87.82828521728516,
+ "loss": 1.0166,
+ "rewards/accuracies": 0.34375,
+ "rewards/chosen": 1.1249371767044067,
+ "rewards/margins": 1.3999314308166504,
+ "rewards/rejected": -0.2749941945075989,
+ "step": 130
+ },
+ {
+ "epoch": 0.4878048780487805,
+ "grad_norm": 392.83640221764773,
+ "learning_rate": 7.569767441860465e-07,
+ "logits/chosen": -2.4829936027526855,
+ "logits/rejected": -2.474769353866577,
+ "logps/chosen": -78.97613525390625,
+ "logps/rejected": -70.12017059326172,
+ "loss": 0.9314,
+ "rewards/accuracies": 0.32499998807907104,
+ "rewards/chosen": 1.8655914068222046,
+ "rewards/margins": 1.3116776943206787,
+ "rewards/rejected": 0.5539135932922363,
+ "step": 140
+ },
+ {
+ "epoch": 0.5226480836236934,
+ "grad_norm": 496.0616334612732,
+ "learning_rate": 7.395348837209303e-07,
+ "logits/chosen": -2.5784010887145996,
+ "logits/rejected": -2.5391345024108887,
+ "logps/chosen": -77.13214111328125,
+ "logps/rejected": -78.45569610595703,
+ "loss": 1.0681,
+ "rewards/accuracies": 0.28125,
+ "rewards/chosen": 2.022892713546753,
+ "rewards/margins": 1.0811107158660889,
+ "rewards/rejected": 0.9417816400527954,
+ "step": 150
+ },
+ {
+ "epoch": 0.5574912891986062,
+ "grad_norm": 572.3657252794802,
+ "learning_rate": 7.220930232558139e-07,
+ "logits/chosen": -2.579862117767334,
+ "logits/rejected": -2.5984551906585693,
+ "logps/chosen": -62.170875549316406,
+ "logps/rejected": -70.7127914428711,
+ "loss": 0.996,
+ "rewards/accuracies": 0.28125,
+ "rewards/chosen": 2.024418354034424,
+ "rewards/margins": 0.9407089948654175,
+ "rewards/rejected": 1.0837091207504272,
+ "step": 160
+ },
+ {
+ "epoch": 0.5923344947735192,
+ "grad_norm": 568.8903494255211,
+ "learning_rate": 7.046511627906976e-07,
+ "logits/chosen": -2.5988354682922363,
+ "logits/rejected": -2.5848004817962646,
+ "logps/chosen": -66.76710510253906,
+ "logps/rejected": -75.21239471435547,
+ "loss": 0.9987,
+ "rewards/accuracies": 0.25,
+ "rewards/chosen": 1.8922548294067383,
+ "rewards/margins": 0.944841206073761,
+ "rewards/rejected": 0.9474137425422668,
+ "step": 170
+ },
+ {
+ "epoch": 0.627177700348432,
+ "grad_norm": 521.2606597836349,
+ "learning_rate": 6.872093023255814e-07,
+ "logits/chosen": -2.6132636070251465,
+ "logits/rejected": -2.602483034133911,
+ "logps/chosen": -89.0395736694336,
+ "logps/rejected": -84.7410659790039,
+ "loss": 1.0952,
+ "rewards/accuracies": 0.3375000059604645,
+ "rewards/chosen": 2.4231231212615967,
+ "rewards/margins": 0.9014472961425781,
+ "rewards/rejected": 1.5216760635375977,
+ "step": 180
+ },
+ {
+ "epoch": 0.662020905923345,
+ "grad_norm": 416.32272877425527,
+ "learning_rate": 6.697674418604651e-07,
+ "logits/chosen": -2.6020960807800293,
+ "logits/rejected": -2.594320297241211,
+ "logps/chosen": -68.85299682617188,
+ "logps/rejected": -79.95862579345703,
+ "loss": 1.0472,
+ "rewards/accuracies": 0.2874999940395355,
+ "rewards/chosen": 1.8906943798065186,
+ "rewards/margins": 0.5408729314804077,
+ "rewards/rejected": 1.3498214483261108,
+ "step": 190
+ },
+ {
+ "epoch": 0.6968641114982579,
+ "grad_norm": 736.2428171588583,
+ "learning_rate": 6.523255813953487e-07,
+ "logits/chosen": -2.6261801719665527,
+ "logits/rejected": -2.6321043968200684,
+ "logps/chosen": -86.841064453125,
+ "logps/rejected": -90.11552429199219,
+ "loss": 0.9883,
+ "rewards/accuracies": 0.36250001192092896,
+ "rewards/chosen": 3.2811522483825684,
+ "rewards/margins": 1.6169086694717407,
+ "rewards/rejected": 1.6642436981201172,
+ "step": 200
+ },
+ {
+ "epoch": 0.6968641114982579,
+ "eval_logits/chosen": -2.5874733924865723,
+ "eval_logits/rejected": -2.5707614421844482,
+ "eval_logps/chosen": -71.6690673828125,
+ "eval_logps/rejected": -79.74640655517578,
+ "eval_loss": 1.0967082977294922,
+ "eval_rewards/accuracies": 0.3373015820980072,
+ "eval_rewards/chosen": 2.4270544052124023,
+ "eval_rewards/margins": 1.2385817766189575,
+ "eval_rewards/rejected": 1.1884726285934448,
+ "eval_runtime": 113.2711,
+ "eval_samples_per_second": 17.657,
+ "eval_steps_per_second": 0.556,
+ "step": 200
+ },
+ {
+ "epoch": 0.7317073170731707,
+ "grad_norm": 760.3151343554975,
+ "learning_rate": 6.348837209302325e-07,
+ "logits/chosen": -2.594675064086914,
+ "logits/rejected": -2.570430278778076,
+ "logps/chosen": -66.980224609375,
+ "logps/rejected": -62.78594207763672,
+ "loss": 1.0072,
+ "rewards/accuracies": 0.3375000059604645,
+ "rewards/chosen": 2.011408805847168,
+ "rewards/margins": 1.4795125722885132,
+ "rewards/rejected": 0.5318960547447205,
+ "step": 210
+ },
+ {
+ "epoch": 0.7665505226480837,
+ "grad_norm": 481.77954463638264,
+ "learning_rate": 6.174418604651163e-07,
+ "logits/chosen": -2.645392894744873,
+ "logits/rejected": -2.6270740032196045,
+ "logps/chosen": -70.76057434082031,
+ "logps/rejected": -69.80204772949219,
+ "loss": 1.1057,
+ "rewards/accuracies": 0.24375000596046448,
+ "rewards/chosen": 2.319939136505127,
+ "rewards/margins": 0.7193705439567566,
+ "rewards/rejected": 1.6005685329437256,
+ "step": 220
+ },
+ {
+ "epoch": 0.8013937282229965,
+ "grad_norm": 765.046306381631,
+ "learning_rate": 6e-07,
+ "logits/chosen": -2.6732935905456543,
+ "logits/rejected": -2.65371036529541,
+ "logps/chosen": -86.65088653564453,
+ "logps/rejected": -87.35969543457031,
+ "loss": 1.1788,
+ "rewards/accuracies": 0.3375000059604645,
+ "rewards/chosen": 2.8807787895202637,
+ "rewards/margins": 2.20210599899292,
+ "rewards/rejected": 0.6786726713180542,
+ "step": 230
+ },
+ {
+ "epoch": 0.8362369337979094,
+ "grad_norm": 548.8932956260087,
+ "learning_rate": 5.825581395348838e-07,
+ "logits/chosen": -2.6739866733551025,
+ "logits/rejected": -2.6417675018310547,
+ "logps/chosen": -83.05066680908203,
+ "logps/rejected": -77.82533264160156,
+ "loss": 1.1372,
+ "rewards/accuracies": 0.34375,
+ "rewards/chosen": 3.1742072105407715,
+ "rewards/margins": 1.1869542598724365,
+ "rewards/rejected": 1.9872528314590454,
+ "step": 240
+ },
+ {
+ "epoch": 0.8710801393728222,
+ "grad_norm": 616.7927885159357,
+ "learning_rate": 5.651162790697674e-07,
+ "logits/chosen": -2.671238899230957,
+ "logits/rejected": -2.637321949005127,
+ "logps/chosen": -92.645263671875,
+ "logps/rejected": -88.57808685302734,
+ "loss": 0.8737,
+ "rewards/accuracies": 0.3499999940395355,
+ "rewards/chosen": 2.4114432334899902,
+ "rewards/margins": 1.2348735332489014,
+ "rewards/rejected": 1.1765693426132202,
+ "step": 250
+ },
+ {
+ "epoch": 0.9059233449477352,
+ "grad_norm": 478.21518650366113,
+ "learning_rate": 5.476744186046511e-07,
+ "logits/chosen": -2.555266857147217,
+ "logits/rejected": -2.5690598487854004,
+ "logps/chosen": -57.567100524902344,
+ "logps/rejected": -65.19673919677734,
+ "loss": 1.1295,
+ "rewards/accuracies": 0.3062500059604645,
+ "rewards/chosen": 1.2747440338134766,
+ "rewards/margins": 0.9036104083061218,
+ "rewards/rejected": 0.37113362550735474,
+ "step": 260
+ },
+ {
+ "epoch": 0.9407665505226481,
+ "grad_norm": 730.4342654590606,
+ "learning_rate": 5.302325581395349e-07,
+ "logits/chosen": -2.6346664428710938,
+ "logits/rejected": -2.633521556854248,
+ "logps/chosen": -67.13279724121094,
+ "logps/rejected": -82.27578735351562,
+ "loss": 1.0712,
+ "rewards/accuracies": 0.3062500059604645,
+ "rewards/chosen": 1.5027388334274292,
+ "rewards/margins": 1.282596468925476,
+ "rewards/rejected": 0.2201424092054367,
+ "step": 270
+ },
+ {
+ "epoch": 0.975609756097561,
+ "grad_norm": 595.5490958688126,
+ "learning_rate": 5.127906976744186e-07,
+ "logits/chosen": -2.5305819511413574,
+ "logits/rejected": -2.510344982147217,
+ "logps/chosen": -65.64894104003906,
+ "logps/rejected": -70.33460998535156,
+ "loss": 0.9263,
+ "rewards/accuracies": 0.32499998807907104,
+ "rewards/chosen": 1.8794810771942139,
+ "rewards/margins": 1.4496572017669678,
+ "rewards/rejected": 0.42982417345046997,
+ "step": 280
+ },
+ {
+ "epoch": 1.0104529616724738,
+ "grad_norm": 23.979212075374758,
+ "learning_rate": 4.953488372093023e-07,
+ "logits/chosen": -2.573585033416748,
+ "logits/rejected": -2.5460705757141113,
+ "logps/chosen": -66.83503723144531,
+ "logps/rejected": -65.80335998535156,
+ "loss": 0.7697,
+ "rewards/accuracies": 0.38749998807907104,
+ "rewards/chosen": 4.500079154968262,
+ "rewards/margins": 5.2650299072265625,
+ "rewards/rejected": -0.7649505734443665,
+ "step": 290
+ },
+ {
+ "epoch": 1.0452961672473868,
+ "grad_norm": 44.13210545620442,
+ "learning_rate": 4.779069767441861e-07,
+ "logits/chosen": -2.6247591972351074,
+ "logits/rejected": -2.610288381576538,
+ "logps/chosen": -56.94911575317383,
+ "logps/rejected": -78.56959533691406,
+ "loss": 0.4215,
+ "rewards/accuracies": 0.4124999940395355,
+ "rewards/chosen": 8.255823135375977,
+ "rewards/margins": 16.339366912841797,
+ "rewards/rejected": -8.08354377746582,
+ "step": 300
+ },
+ {
+ "epoch": 1.0452961672473868,
+ "eval_logits/chosen": -2.656022548675537,
+ "eval_logits/rejected": -2.6403489112854004,
+ "eval_logps/chosen": -70.90100860595703,
+ "eval_logps/rejected": -79.04632568359375,
+ "eval_loss": 1.1233958005905151,
+ "eval_rewards/accuracies": 0.3313491940498352,
+ "eval_rewards/chosen": 3.0875895023345947,
+ "eval_rewards/margins": 1.2970443964004517,
+ "eval_rewards/rejected": 1.790545105934143,
+ "eval_runtime": 113.4658,
+ "eval_samples_per_second": 17.626,
+ "eval_steps_per_second": 0.555,
+ "step": 300
+ },
+ {
+ "epoch": 1.0801393728222997,
+ "grad_norm": 40.17101348318783,
+ "learning_rate": 4.604651162790698e-07,
+ "logits/chosen": -2.611358165740967,
+ "logits/rejected": -2.6116092205047607,
+ "logps/chosen": -58.17817306518555,
+ "logps/rejected": -85.24242401123047,
+ "loss": 0.4163,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": 8.134294509887695,
+ "rewards/margins": 16.730432510375977,
+ "rewards/rejected": -8.596138000488281,
+ "step": 310
+ },
+ {
+ "epoch": 1.1149825783972125,
+ "grad_norm": 153.73635089362784,
+ "learning_rate": 4.4302325581395346e-07,
+ "logits/chosen": -2.647315740585327,
+ "logits/rejected": -2.6344220638275146,
+ "logps/chosen": -64.29966735839844,
+ "logps/rejected": -88.07272338867188,
+ "loss": 0.3886,
+ "rewards/accuracies": 0.4749999940395355,
+ "rewards/chosen": 8.3077974319458,
+ "rewards/margins": 16.785139083862305,
+ "rewards/rejected": -8.477343559265137,
+ "step": 320
+ },
+ {
+ "epoch": 1.1498257839721253,
+ "grad_norm": 196.76703226808354,
+ "learning_rate": 4.255813953488372e-07,
+ "logits/chosen": -2.6384284496307373,
+ "logits/rejected": -2.6142385005950928,
+ "logps/chosen": -73.04707336425781,
+ "logps/rejected": -88.73326873779297,
+ "loss": 0.4236,
+ "rewards/accuracies": 0.518750011920929,
+ "rewards/chosen": 9.5239896774292,
+ "rewards/margins": 18.215595245361328,
+ "rewards/rejected": -8.691606521606445,
+ "step": 330
+ },
+ {
+ "epoch": 1.1846689895470384,
+ "grad_norm": 16.30501354867415,
+ "learning_rate": 4.081395348837209e-07,
+ "logits/chosen": -2.6152892112731934,
+ "logits/rejected": -2.617434024810791,
+ "logps/chosen": -72.81196594238281,
+ "logps/rejected": -107.9176254272461,
+ "loss": 0.4009,
+ "rewards/accuracies": 0.550000011920929,
+ "rewards/chosen": 9.404792785644531,
+ "rewards/margins": 20.270700454711914,
+ "rewards/rejected": -10.8659086227417,
+ "step": 340
+ },
+ {
+ "epoch": 1.2195121951219512,
+ "grad_norm": 253.70590340563848,
+ "learning_rate": 3.9069767441860464e-07,
+ "logits/chosen": -2.627079486846924,
+ "logits/rejected": -2.594177007675171,
+ "logps/chosen": -57.3041877746582,
+ "logps/rejected": -77.43646240234375,
+ "loss": 0.4007,
+ "rewards/accuracies": 0.48750001192092896,
+ "rewards/chosen": 9.71183967590332,
+ "rewards/margins": 19.823144912719727,
+ "rewards/rejected": -10.111306190490723,
+ "step": 350
+ },
+ {
+ "epoch": 1.254355400696864,
+ "grad_norm": 137.93033306808474,
+ "learning_rate": 3.7325581395348837e-07,
+ "logits/chosen": -2.603165626525879,
+ "logits/rejected": -2.573925495147705,
+ "logps/chosen": -62.46228790283203,
+ "logps/rejected": -73.45440673828125,
+ "loss": 0.374,
+ "rewards/accuracies": 0.4437499940395355,
+ "rewards/chosen": 8.073513984680176,
+ "rewards/margins": 14.77312183380127,
+ "rewards/rejected": -6.699607849121094,
+ "step": 360
+ },
+ {
+ "epoch": 1.289198606271777,
+ "grad_norm": 2.3357390678870833,
+ "learning_rate": 3.5581395348837204e-07,
+ "logits/chosen": -2.562800407409668,
+ "logits/rejected": -2.5790553092956543,
+ "logps/chosen": -61.26555633544922,
+ "logps/rejected": -86.61184692382812,
+ "loss": 0.4449,
+ "rewards/accuracies": 0.46875,
+ "rewards/chosen": 8.428425788879395,
+ "rewards/margins": 18.987430572509766,
+ "rewards/rejected": -10.559003829956055,
+ "step": 370
+ },
+ {
+ "epoch": 1.32404181184669,
+ "grad_norm": 240.07418332327254,
+ "learning_rate": 3.383720930232558e-07,
+ "logits/chosen": -2.55873966217041,
+ "logits/rejected": -2.556434392929077,
+ "logps/chosen": -77.422607421875,
+ "logps/rejected": -101.52474212646484,
+ "loss": 0.4144,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": 8.128644943237305,
+ "rewards/margins": 21.078929901123047,
+ "rewards/rejected": -12.950288772583008,
+ "step": 380
+ },
+ {
+ "epoch": 1.3588850174216027,
+ "grad_norm": 26.553648537118264,
+ "learning_rate": 3.2093023255813955e-07,
+ "logits/chosen": -2.6296944618225098,
+ "logits/rejected": -2.6129908561706543,
+ "logps/chosen": -56.90552520751953,
+ "logps/rejected": -84.36430358886719,
+ "loss": 0.378,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": 8.677267074584961,
+ "rewards/margins": 19.946393966674805,
+ "rewards/rejected": -11.269128799438477,
+ "step": 390
+ },
+ {
+ "epoch": 1.3937282229965158,
+ "grad_norm": 11.915044642787674,
+ "learning_rate": 3.034883720930232e-07,
+ "logits/chosen": -2.6192545890808105,
+ "logits/rejected": -2.591175079345703,
+ "logps/chosen": -77.75074768066406,
+ "logps/rejected": -112.25565338134766,
+ "loss": 0.393,
+ "rewards/accuracies": 0.518750011920929,
+ "rewards/chosen": 7.220816135406494,
+ "rewards/margins": 19.133440017700195,
+ "rewards/rejected": -11.912622451782227,
+ "step": 400
+ },
+ {
+ "epoch": 1.3937282229965158,
+ "eval_logits/chosen": -2.625042676925659,
+ "eval_logits/rejected": -2.609307050704956,
+ "eval_logps/chosen": -74.64740753173828,
+ "eval_logps/rejected": -83.32994842529297,
+ "eval_loss": 1.223362684249878,
+ "eval_rewards/accuracies": 0.32341268658638,
+ "eval_rewards/chosen": -0.1343114674091339,
+ "eval_rewards/margins": 1.759063720703125,
+ "eval_rewards/rejected": -1.893375277519226,
+ "eval_runtime": 113.3813,
+ "eval_samples_per_second": 17.64,
+ "eval_steps_per_second": 0.556,
+ "step": 400
+ },
+ {
+ "epoch": 1.4285714285714286,
+ "grad_norm": 265.4416418102539,
+ "learning_rate": 2.8604651162790695e-07,
+ "logits/chosen": -2.6114742755889893,
+ "logits/rejected": -2.601598024368286,
+ "logps/chosen": -73.08380889892578,
+ "logps/rejected": -94.53117370605469,
+ "loss": 0.6924,
+ "rewards/accuracies": 0.4937500059604645,
+ "rewards/chosen": 8.195378303527832,
+ "rewards/margins": 20.368560791015625,
+ "rewards/rejected": -12.17318344116211,
+ "step": 410
+ },
+ {
+ "epoch": 1.4634146341463414,
+ "grad_norm": 110.44055155377453,
+ "learning_rate": 2.6860465116279067e-07,
+ "logits/chosen": -2.651491641998291,
+ "logits/rejected": -2.65342378616333,
+ "logps/chosen": -69.68745422363281,
+ "logps/rejected": -97.7309341430664,
+ "loss": 0.4257,
+ "rewards/accuracies": 0.42500001192092896,
+ "rewards/chosen": 5.508909702301025,
+ "rewards/margins": 16.567991256713867,
+ "rewards/rejected": -11.059081077575684,
+ "step": 420
+ },
+ {
+ "epoch": 1.4982578397212545,
+ "grad_norm": 0.6603991710359594,
+ "learning_rate": 2.511627906976744e-07,
+ "logits/chosen": -2.606182336807251,
+ "logits/rejected": -2.5925304889678955,
+ "logps/chosen": -62.015174865722656,
+ "logps/rejected": -85.03865814208984,
+ "loss": 0.3861,
+ "rewards/accuracies": 0.4749999940395355,
+ "rewards/chosen": 5.7165632247924805,
+ "rewards/margins": 16.509204864501953,
+ "rewards/rejected": -10.792640686035156,
+ "step": 430
+ },
+ {
+ "epoch": 1.533101045296167,
+ "grad_norm": 171.49719572579374,
+ "learning_rate": 2.3372093023255815e-07,
+ "logits/chosen": -2.5812370777130127,
+ "logits/rejected": -2.5880465507507324,
+ "logps/chosen": -62.13433074951172,
+ "logps/rejected": -84.38856506347656,
+ "loss": 0.4158,
+ "rewards/accuracies": 0.42500001192092896,
+ "rewards/chosen": 6.746127128601074,
+ "rewards/margins": 16.110437393188477,
+ "rewards/rejected": -9.364312171936035,
+ "step": 440
+ },
+ {
+ "epoch": 1.5679442508710801,
+ "grad_norm": 28.499265540380463,
+ "learning_rate": 2.1627906976744185e-07,
+ "logits/chosen": -2.641430139541626,
+ "logits/rejected": -2.6030945777893066,
+ "logps/chosen": -79.95354461669922,
+ "logps/rejected": -103.62571716308594,
+ "loss": 0.4177,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": 9.364060401916504,
+ "rewards/margins": 24.554065704345703,
+ "rewards/rejected": -15.1900053024292,
+ "step": 450
+ },
+ {
+ "epoch": 1.6027874564459932,
+ "grad_norm": 208.9071332592839,
+ "learning_rate": 1.9883720930232558e-07,
+ "logits/chosen": -2.6206653118133545,
+ "logits/rejected": -2.597672939300537,
+ "logps/chosen": -66.5615463256836,
+ "logps/rejected": -93.40229797363281,
+ "loss": 0.4043,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": 8.271448135375977,
+ "rewards/margins": 22.373382568359375,
+ "rewards/rejected": -14.101933479309082,
+ "step": 460
+ },
+ {
+ "epoch": 1.6376306620209058,
+ "grad_norm": 0.3844536104459883,
+ "learning_rate": 1.8139534883720928e-07,
+ "logits/chosen": -2.5973286628723145,
+ "logits/rejected": -2.5908195972442627,
+ "logps/chosen": -54.78232955932617,
+ "logps/rejected": -81.273681640625,
+ "loss": 0.4316,
+ "rewards/accuracies": 0.4000000059604645,
+ "rewards/chosen": 5.607256889343262,
+ "rewards/margins": 13.865686416625977,
+ "rewards/rejected": -8.258430480957031,
+ "step": 470
+ },
+ {
+ "epoch": 1.6724738675958188,
+ "grad_norm": 150.7317542423224,
+ "learning_rate": 1.63953488372093e-07,
+ "logits/chosen": -2.599632501602173,
+ "logits/rejected": -2.584341049194336,
+ "logps/chosen": -49.672950744628906,
+ "logps/rejected": -57.88109588623047,
+ "loss": 0.4093,
+ "rewards/accuracies": 0.35624998807907104,
+ "rewards/chosen": 3.7741661071777344,
+ "rewards/margins": 8.84811782836914,
+ "rewards/rejected": -5.073951721191406,
+ "step": 480
+ },
+ {
+ "epoch": 1.7073170731707317,
+ "grad_norm": 51.065123641891695,
+ "learning_rate": 1.4651162790697676e-07,
+ "logits/chosen": -2.571542978286743,
+ "logits/rejected": -2.561433792114258,
+ "logps/chosen": -66.06110382080078,
+ "logps/rejected": -81.25625610351562,
+ "loss": 0.5041,
+ "rewards/accuracies": 0.4000000059604645,
+ "rewards/chosen": 6.158200740814209,
+ "rewards/margins": 15.785115242004395,
+ "rewards/rejected": -9.626913070678711,
+ "step": 490
+ },
+ {
+ "epoch": 1.7421602787456445,
+ "grad_norm": 9.459943139553493,
+ "learning_rate": 1.2906976744186046e-07,
+ "logits/chosen": -2.496741771697998,
+ "logits/rejected": -2.4906914234161377,
+ "logps/chosen": -68.98286437988281,
+ "logps/rejected": -96.44841003417969,
+ "loss": 0.3986,
+ "rewards/accuracies": 0.46875,
+ "rewards/chosen": 6.282340049743652,
+ "rewards/margins": 17.225933074951172,
+ "rewards/rejected": -10.943593978881836,
+ "step": 500
+ },
+ {
+ "epoch": 1.7421602787456445,
+ "eval_logits/chosen": -2.607004404067993,
+ "eval_logits/rejected": -2.5908732414245605,
+ "eval_logps/chosen": -74.26602935791016,
+ "eval_logps/rejected": -83.2776107788086,
+ "eval_loss": 1.2247282266616821,
+ "eval_rewards/accuracies": 0.3214285671710968,
+ "eval_rewards/chosen": 0.1936810314655304,
+ "eval_rewards/margins": 2.0420379638671875,
+ "eval_rewards/rejected": -1.848357081413269,
+ "eval_runtime": 113.3979,
+ "eval_samples_per_second": 17.637,
+ "eval_steps_per_second": 0.556,
+ "step": 500
+ },
+ {
+ "epoch": 1.7770034843205575,
+ "grad_norm": 22.381813019053016,
+ "learning_rate": 1.1162790697674417e-07,
+ "logits/chosen": -2.572063446044922,
+ "logits/rejected": -2.551802635192871,
+ "logps/chosen": -62.93878173828125,
+ "logps/rejected": -80.62296295166016,
+ "loss": 0.4109,
+ "rewards/accuracies": 0.45625001192092896,
+ "rewards/chosen": 6.723170280456543,
+ "rewards/margins": 16.216602325439453,
+ "rewards/rejected": -9.49343204498291,
+ "step": 510
+ },
+ {
+ "epoch": 1.8118466898954704,
+ "grad_norm": 34.84061860816185,
+ "learning_rate": 9.418604651162791e-08,
+ "logits/chosen": -2.5822184085845947,
+ "logits/rejected": -2.5742483139038086,
+ "logps/chosen": -66.30496978759766,
+ "logps/rejected": -88.59693908691406,
+ "loss": 0.3956,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 6.850976467132568,
+ "rewards/margins": 16.641326904296875,
+ "rewards/rejected": -9.790349006652832,
+ "step": 520
+ },
+ {
+ "epoch": 1.8466898954703832,
+ "grad_norm": 26.726116225430992,
+ "learning_rate": 7.674418604651163e-08,
+ "logits/chosen": -2.5606324672698975,
+ "logits/rejected": -2.5552823543548584,
+ "logps/chosen": -62.88715744018555,
+ "logps/rejected": -84.60001373291016,
+ "loss": 0.3962,
+ "rewards/accuracies": 0.4625000059604645,
+ "rewards/chosen": 7.2661237716674805,
+ "rewards/margins": 18.649456024169922,
+ "rewards/rejected": -11.383334159851074,
+ "step": 530
+ },
+ {
+ "epoch": 1.8815331010452963,
+ "grad_norm": 746.1912709837381,
+ "learning_rate": 5.930232558139535e-08,
+ "logits/chosen": -2.5763189792633057,
+ "logits/rejected": -2.589749336242676,
+ "logps/chosen": -54.900634765625,
+ "logps/rejected": -89.78245544433594,
+ "loss": 0.4804,
+ "rewards/accuracies": 0.45625001192092896,
+ "rewards/chosen": 7.59392786026001,
+ "rewards/margins": 21.733322143554688,
+ "rewards/rejected": -14.139394760131836,
+ "step": 540
+ },
+ {
+ "epoch": 1.916376306620209,
+ "grad_norm": 0.6251683362748027,
+ "learning_rate": 4.1860465116279067e-08,
+ "logits/chosen": -2.5647377967834473,
+ "logits/rejected": -2.5423054695129395,
+ "logps/chosen": -79.24058532714844,
+ "logps/rejected": -106.3379898071289,
+ "loss": 0.4013,
+ "rewards/accuracies": 0.5,
+ "rewards/chosen": 9.971015930175781,
+ "rewards/margins": 26.699609756469727,
+ "rewards/rejected": -16.728595733642578,
+ "step": 550
+ },
+ {
+ "epoch": 1.951219512195122,
+ "grad_norm": 252.84212758390947,
+ "learning_rate": 2.441860465116279e-08,
+ "logits/chosen": -2.560770034790039,
+ "logits/rejected": -2.567601442337036,
+ "logps/chosen": -57.37895584106445,
+ "logps/rejected": -80.76757049560547,
+ "loss": 0.4091,
+ "rewards/accuracies": 0.4437499940395355,
+ "rewards/chosen": 5.453306674957275,
+ "rewards/margins": 14.458730697631836,
+ "rewards/rejected": -9.005423545837402,
+ "step": 560
+ },
+ {
+ "epoch": 1.986062717770035,
+ "grad_norm": 110.2396617228431,
+ "learning_rate": 6.976744186046511e-09,
+ "logits/chosen": -2.637852191925049,
+ "logits/rejected": -2.608257293701172,
+ "logps/chosen": -58.60048294067383,
+ "logps/rejected": -71.56431579589844,
+ "loss": 0.3934,
+ "rewards/accuracies": 0.4375,
+ "rewards/chosen": 5.686958312988281,
+ "rewards/margins": 13.428431510925293,
+ "rewards/rejected": -7.7414727210998535,
+ "step": 570
+ },
+ {
+ "epoch": 2.0,
+ "step": 574,
+ "total_flos": 0.0,
+ "train_loss": 0.6794358088995106,
+ "train_runtime": 6413.3344,
+ "train_samples_per_second": 5.719,
+ "train_steps_per_second": 0.09
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 574,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 2,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }
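The `log_history` array in trainer_state.json interleaves training records (every 10 steps, per `logging_steps`) with evaluation records (every 100 steps, per `eval_steps`). A small sketch for pulling the evaluation curve out of a local copy of the file:

```python
# Sketch: extract the eval metrics logged every 100 steps from trainer_state.json.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_loss" in entry:  # evaluation records carry eval_* keys
        print(
            f"step {entry['step']:4d}  "
            f"eval_loss {entry['eval_loss']:.4f}  "
            f"rewards/margins {entry['eval_rewards/margins']:.4f}"
        )
```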