lole25 committed
Commit 8d6b87d
1 Parent(s): 6beb2ef

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: DUAL-GPO/phi-2-irepo-chatml-merged-i0
+ model-index:
+ - name: phi-2-irepo-chatml-v5-i1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # phi-2-irepo-chatml-v5-i1
+
+ This model is a fine-tuned version of [DUAL-GPO/phi-2-irepo-chatml-merged-i0](https://huggingface.co/DUAL-GPO/phi-2-irepo-chatml-merged-i0) on an unspecified dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
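Since the usage sections of the card are still placeholders, here is a minimal loading sketch (not part of the committed files). It assumes this repository hosts a PEFT (LoRA) adapter for the base model named in the card, and the adapter repo id `DUAL-GPO/phi-2-irepo-chatml-v5-i1` is an assumption taken from the model name; adjust it to wherever the adapter actually lives.

```python
# Hedged sketch: load the base model and attach the DPO-trained PEFT adapter,
# then generate. The adapter repo id below is an assumption, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DUAL-GPO/phi-2-irepo-chatml-merged-i0"   # base model from the card
adapter_id = "DUAL-GPO/phi-2-irepo-chatml-v5-i1"    # assumed location of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

prompt = "Summarise what this model was trained to do."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```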
27
+
28
+ ## Training and evaluation data
29
+
30
+ More information needed
31
+
32
+ ## Training procedure
33
+
34
+ ### Training hyperparameters
35
+
36
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
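For concreteness, the list above maps onto a `transformers.TrainingArguments` object roughly as sketched below. This is a reconstruction, not the training script from this repo: the preference dataset, reference model, DPO beta, and LoRA `peft_config` are omitted, and the exact `trl.DPOTrainer` signature depends on the trl version used.

```python
# Hedged reconstruction of the hyperparameters above; values mirror the card,
# everything else (output_dir, bf16) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-irepo-chatml-v5-i1",
    num_train_epochs=1,
    learning_rate=5e-6,
    per_device_train_batch_size=4,    # train_batch_size
    per_device_eval_batch_size=4,     # eval_batch_size
    gradient_accumulation_steps=4,    # 4 * 4 * 2 GPUs = total_train_batch_size 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    logging_steps=10,                 # matches trainer_state.json below
    save_steps=100,                   # matches trainer_state.json below
    bf16=True,                        # assumption; precision is not stated in the card
)
# These args would then be passed to trl.DPOTrainer together with the model,
# a frozen reference model, the tokenizer, the preference dataset, and a LoRA peft_config.
```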
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2875b5d3dd017bbd012f2065ac3a07e19ece79e1f5cb29294952abcd46d69c9a
+ oid sha256:15c0f93f048a94a3463a5fcc6c9cef1c92a5bdd0b353e0c92543f59adf77dc54
  size 335579632
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 1.0,
+ "train_loss": 0.029720235659702103,
+ "train_runtime": 1648.0199,
+ "train_samples": 30000,
+ "train_samples_per_second": 18.204,
+ "train_steps_per_second": 0.569
+ }
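The throughput figures above are internally consistent with the hyperparameters in the README; a quick arithmetic check (illustrative only):

```python
# Sanity-check the reported numbers against the card's hyperparameters.
train_samples = 30000
train_runtime = 1648.0199                       # seconds
total_batch = 4 * 4 * 2                         # per-device batch * grad accum * GPUs

print(total_batch)                              # 32  -> total_train_batch_size
print(train_samples // total_batch)             # 937 -> global_step / max_steps
print(round(train_samples / train_runtime, 3))  # 18.204 -> train_samples_per_second
```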
runs/May20_18-55-49_gpu4-119-5/events.out.tfevents.1716195533.gpu4-119-5.934052.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7c3c9ba65339ada4cd27efd7a21a86d3ab424df4479baaec17af922bd83c6f5e
- size 11249
+ oid sha256:e5dd9afcb93316ffe42099b78c4001eecbf9def04ccb6840767549f217f03633
+ size 13505
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 1.0,
+ "train_loss": 0.029720235659702103,
+ "train_runtime": 1648.0199,
+ "train_samples": 30000,
+ "train_samples_per_second": 18.204,
+ "train_steps_per_second": 0.569
+ }
trainer_state.json ADDED
@@ -0,0 +1,1346 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9994666666666666,
5
+ "eval_steps": 500,
6
+ "global_step": 937,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 5.319148936170213e-08,
14
+ "logits/chosen": 0.5103299021720886,
15
+ "logits/rejected": 0.45260655879974365,
16
+ "logps/chosen": -462.9068603515625,
17
+ "logps/rejected": -475.4417419433594,
18
+ "loss": 0.2285,
19
+ "rewards/accuracies": 0.0,
20
+ "rewards/chosen": 0.0,
21
+ "rewards/margins": 0.0,
22
+ "rewards/rejected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "learning_rate": 5.319148936170213e-07,
28
+ "logits/chosen": 0.31862032413482666,
29
+ "logits/rejected": 0.314761757850647,
30
+ "logps/chosen": -292.8707275390625,
31
+ "logps/rejected": -296.7555847167969,
32
+ "loss": 0.2107,
33
+ "rewards/accuracies": 0.2361111044883728,
34
+ "rewards/chosen": 0.00015502750466112047,
35
+ "rewards/margins": -4.963619721820578e-05,
36
+ "rewards/rejected": 0.0002046637237071991,
37
+ "step": 10
38
+ },
39
+ {
40
+ "epoch": 0.02,
41
+ "learning_rate": 1.0638297872340427e-06,
42
+ "logits/chosen": 0.31885525584220886,
43
+ "logits/rejected": 0.3816074728965759,
44
+ "logps/chosen": -255.02761840820312,
45
+ "logps/rejected": -261.906005859375,
46
+ "loss": 0.2118,
47
+ "rewards/accuracies": 0.2562499940395355,
48
+ "rewards/chosen": -0.00024467133334837854,
49
+ "rewards/margins": 3.665524491225369e-05,
50
+ "rewards/rejected": -0.0002813266182783991,
51
+ "step": 20
52
+ },
53
+ {
54
+ "epoch": 0.03,
55
+ "learning_rate": 1.595744680851064e-06,
56
+ "logits/chosen": 0.306856244802475,
57
+ "logits/rejected": 0.26547059416770935,
58
+ "logps/chosen": -271.63592529296875,
59
+ "logps/rejected": -279.66046142578125,
60
+ "loss": 0.2112,
61
+ "rewards/accuracies": 0.25,
62
+ "rewards/chosen": -0.00020208054047543555,
63
+ "rewards/margins": -1.5414796507684514e-05,
64
+ "rewards/rejected": -0.00018666575488168746,
65
+ "step": 30
66
+ },
67
+ {
68
+ "epoch": 0.04,
69
+ "learning_rate": 2.1276595744680853e-06,
70
+ "logits/chosen": 0.3179105222225189,
71
+ "logits/rejected": 0.37513914704322815,
72
+ "logps/chosen": -291.2155456542969,
73
+ "logps/rejected": -309.3365173339844,
74
+ "loss": 0.2082,
75
+ "rewards/accuracies": 0.26249998807907104,
76
+ "rewards/chosen": -0.0003900996525771916,
77
+ "rewards/margins": -8.274164429167286e-05,
78
+ "rewards/rejected": -0.00030735801556147635,
79
+ "step": 40
80
+ },
81
+ {
82
+ "epoch": 0.05,
83
+ "learning_rate": 2.6595744680851065e-06,
84
+ "logits/chosen": 0.3564581871032715,
85
+ "logits/rejected": 0.328947514295578,
86
+ "logps/chosen": -265.4711608886719,
87
+ "logps/rejected": -290.76092529296875,
88
+ "loss": 0.1932,
89
+ "rewards/accuracies": 0.25,
90
+ "rewards/chosen": -4.1023264202522114e-05,
91
+ "rewards/margins": 8.93441028892994e-06,
92
+ "rewards/rejected": -4.995765630155802e-05,
93
+ "step": 50
94
+ },
95
+ {
96
+ "epoch": 0.06,
97
+ "learning_rate": 3.191489361702128e-06,
98
+ "logits/chosen": 0.3046855330467224,
99
+ "logits/rejected": 0.3436508774757385,
100
+ "logps/chosen": -353.6334228515625,
101
+ "logps/rejected": -360.71929931640625,
102
+ "loss": 0.2105,
103
+ "rewards/accuracies": 0.3687500059604645,
104
+ "rewards/chosen": -8.662064647069201e-05,
105
+ "rewards/margins": 0.00014803092926740646,
106
+ "rewards/rejected": -0.00023465156846214086,
107
+ "step": 60
108
+ },
109
+ {
110
+ "epoch": 0.07,
111
+ "learning_rate": 3.723404255319149e-06,
112
+ "logits/chosen": 0.31014540791511536,
113
+ "logits/rejected": 0.3247402608394623,
114
+ "logps/chosen": -270.9269104003906,
115
+ "logps/rejected": -301.1846008300781,
116
+ "loss": 0.2008,
117
+ "rewards/accuracies": 0.26249998807907104,
118
+ "rewards/chosen": -0.0006521933828480542,
119
+ "rewards/margins": -0.00015351924230344594,
120
+ "rewards/rejected": -0.0004986740532331169,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.09,
125
+ "learning_rate": 4.255319148936171e-06,
126
+ "logits/chosen": 0.2668500542640686,
127
+ "logits/rejected": 0.2699311673641205,
128
+ "logps/chosen": -279.9768371582031,
129
+ "logps/rejected": -305.3199157714844,
130
+ "loss": 0.2116,
131
+ "rewards/accuracies": 0.2750000059604645,
132
+ "rewards/chosen": -0.001971206162124872,
133
+ "rewards/margins": 0.00017751348786987364,
134
+ "rewards/rejected": -0.002148719970136881,
135
+ "step": 80
136
+ },
137
+ {
138
+ "epoch": 0.1,
139
+ "learning_rate": 4.787234042553192e-06,
140
+ "logits/chosen": 0.29766160249710083,
141
+ "logits/rejected": 0.3357502818107605,
142
+ "logps/chosen": -335.7276306152344,
143
+ "logps/rejected": -333.04852294921875,
144
+ "loss": 0.2104,
145
+ "rewards/accuracies": 0.29374998807907104,
146
+ "rewards/chosen": -0.0029864246025681496,
147
+ "rewards/margins": 0.00020421878434717655,
148
+ "rewards/rejected": -0.003190643386915326,
149
+ "step": 90
150
+ },
151
+ {
152
+ "epoch": 0.11,
153
+ "learning_rate": 4.999375059004058e-06,
154
+ "logits/chosen": 0.32819682359695435,
155
+ "logits/rejected": 0.3049379885196686,
156
+ "logps/chosen": -341.62030029296875,
157
+ "logps/rejected": -362.6277160644531,
158
+ "loss": 0.2148,
159
+ "rewards/accuracies": 0.3375000059604645,
160
+ "rewards/chosen": -0.002841348061338067,
161
+ "rewards/margins": 0.0008320774650201201,
162
+ "rewards/rejected": -0.003673425642773509,
163
+ "step": 100
164
+ },
165
+ {
166
+ "epoch": 0.12,
167
+ "learning_rate": 4.9955571065548795e-06,
168
+ "logits/chosen": 0.25718414783477783,
169
+ "logits/rejected": 0.22829556465148926,
170
+ "logps/chosen": -262.38275146484375,
171
+ "logps/rejected": -263.3617248535156,
172
+ "loss": 0.212,
173
+ "rewards/accuracies": 0.21250000596046448,
174
+ "rewards/chosen": -0.002418057294562459,
175
+ "rewards/margins": -0.0002635159471537918,
176
+ "rewards/rejected": -0.002154541201889515,
177
+ "step": 110
178
+ },
179
+ {
180
+ "epoch": 0.13,
181
+ "learning_rate": 4.9882736864879e-06,
182
+ "logits/chosen": 0.2612871825695038,
183
+ "logits/rejected": 0.20154158771038055,
184
+ "logps/chosen": -331.4267883300781,
185
+ "logps/rejected": -325.03582763671875,
186
+ "loss": 0.2083,
187
+ "rewards/accuracies": 0.2750000059604645,
188
+ "rewards/chosen": -0.0029244977049529552,
189
+ "rewards/margins": 0.00017272381228394806,
190
+ "rewards/rejected": -0.003097221488133073,
191
+ "step": 120
192
+ },
193
+ {
194
+ "epoch": 0.14,
195
+ "learning_rate": 4.977534912960124e-06,
196
+ "logits/chosen": 0.22134217619895935,
197
+ "logits/rejected": 0.236228346824646,
198
+ "logps/chosen": -259.1029968261719,
199
+ "logps/rejected": -295.55267333984375,
200
+ "loss": 0.2013,
201
+ "rewards/accuracies": 0.29374998807907104,
202
+ "rewards/chosen": -0.0034433710388839245,
203
+ "rewards/margins": 0.0002571121440269053,
204
+ "rewards/rejected": -0.003700483590364456,
205
+ "step": 130
206
+ },
207
+ {
208
+ "epoch": 0.15,
209
+ "learning_rate": 4.963355698422092e-06,
210
+ "logits/chosen": 0.24982547760009766,
211
+ "logits/rejected": 0.255920946598053,
212
+ "logps/chosen": -319.05792236328125,
213
+ "logps/rejected": -311.7349853515625,
214
+ "loss": 0.2154,
215
+ "rewards/accuracies": 0.3062500059604645,
216
+ "rewards/chosen": -0.003797318320721388,
217
+ "rewards/margins": 0.001411179080605507,
218
+ "rewards/rejected": -0.005208497401326895,
219
+ "step": 140
220
+ },
221
+ {
222
+ "epoch": 0.16,
223
+ "learning_rate": 4.945755732909625e-06,
224
+ "logits/chosen": 0.19847659766674042,
225
+ "logits/rejected": 0.2311740666627884,
226
+ "logps/chosen": -297.9468994140625,
227
+ "logps/rejected": -310.4879455566406,
228
+ "loss": 0.2133,
229
+ "rewards/accuracies": 0.3125,
230
+ "rewards/chosen": -9.477639105170965e-05,
231
+ "rewards/margins": 0.0019024491775780916,
232
+ "rewards/rejected": -0.0019972254522144794,
233
+ "step": 150
234
+ },
235
+ {
236
+ "epoch": 0.17,
237
+ "learning_rate": 4.924759456701167e-06,
238
+ "logits/chosen": 0.19259515404701233,
239
+ "logits/rejected": 0.15325775742530823,
240
+ "logps/chosen": -304.2120361328125,
241
+ "logps/rejected": -318.15814208984375,
242
+ "loss": 0.2097,
243
+ "rewards/accuracies": 0.32499998807907104,
244
+ "rewards/chosen": 0.002039076527580619,
245
+ "rewards/margins": -0.0007348087383434176,
246
+ "rewards/rejected": 0.002773885615170002,
247
+ "step": 160
248
+ },
249
+ {
250
+ "epoch": 0.18,
251
+ "learning_rate": 4.900396026378671e-06,
252
+ "logits/chosen": 0.2160247564315796,
253
+ "logits/rejected": 0.1700972020626068,
254
+ "logps/chosen": -297.9866943359375,
255
+ "logps/rejected": -340.97308349609375,
256
+ "loss": 0.2019,
257
+ "rewards/accuracies": 0.34375,
258
+ "rewards/chosen": 0.005410970654338598,
259
+ "rewards/margins": 0.0020456851925700903,
260
+ "rewards/rejected": 0.003365286160260439,
261
+ "step": 170
262
+ },
263
+ {
264
+ "epoch": 0.19,
265
+ "learning_rate": 4.872699274339169e-06,
266
+ "logits/chosen": 0.13720422983169556,
267
+ "logits/rejected": 0.17013493180274963,
268
+ "logps/chosen": -263.00592041015625,
269
+ "logps/rejected": -285.49493408203125,
270
+ "loss": 0.203,
271
+ "rewards/accuracies": 0.30000001192092896,
272
+ "rewards/chosen": 0.006990239955484867,
273
+ "rewards/margins": 0.004235823173075914,
274
+ "rewards/rejected": 0.0027544170152395964,
275
+ "step": 180
276
+ },
277
+ {
278
+ "epoch": 0.2,
279
+ "learning_rate": 4.8417076618132434e-06,
280
+ "logits/chosen": 0.1691962480545044,
281
+ "logits/rejected": 0.18480929732322693,
282
+ "logps/chosen": -260.39678955078125,
283
+ "logps/rejected": -315.83148193359375,
284
+ "loss": 0.2089,
285
+ "rewards/accuracies": 0.3687500059604645,
286
+ "rewards/chosen": 0.009463444352149963,
287
+ "rewards/margins": 0.0045218681916594505,
288
+ "rewards/rejected": 0.0049415766261518,
289
+ "step": 190
290
+ },
291
+ {
292
+ "epoch": 0.21,
293
+ "learning_rate": 4.807464225455655e-06,
294
+ "logits/chosen": 0.08825172483921051,
295
+ "logits/rejected": 0.15803390741348267,
296
+ "logps/chosen": -288.59637451171875,
297
+ "logps/rejected": -313.7530822753906,
298
+ "loss": 0.2001,
299
+ "rewards/accuracies": 0.3375000059604645,
300
+ "rewards/chosen": 0.008411525748670101,
301
+ "rewards/margins": 0.00624421751126647,
302
+ "rewards/rejected": 0.0021673087030649185,
303
+ "step": 200
304
+ },
305
+ {
306
+ "epoch": 0.22,
307
+ "learning_rate": 4.770016517582283e-06,
308
+ "logits/chosen": 0.09255161136388779,
309
+ "logits/rejected": 0.08204738795757294,
310
+ "logps/chosen": -287.4985046386719,
311
+ "logps/rejected": -305.6175231933594,
312
+ "loss": 0.2157,
313
+ "rewards/accuracies": 0.35624998807907104,
314
+ "rewards/chosen": 0.0021202426869422197,
315
+ "rewards/margins": 0.00259004021063447,
316
+ "rewards/rejected": -0.00046979711623862386,
317
+ "step": 210
318
+ },
319
+ {
320
+ "epoch": 0.23,
321
+ "learning_rate": 4.7294165401363616e-06,
322
+ "logits/chosen": 0.07696634531021118,
323
+ "logits/rejected": 0.0523553304374218,
324
+ "logps/chosen": -276.10125732421875,
325
+ "logps/rejected": -297.9394836425781,
326
+ "loss": 0.213,
327
+ "rewards/accuracies": 0.29374998807907104,
328
+ "rewards/chosen": 0.0040213195607066154,
329
+ "rewards/margins": 0.005335698835551739,
330
+ "rewards/rejected": -0.001314379507675767,
331
+ "step": 220
332
+ },
333
+ {
334
+ "epoch": 0.25,
335
+ "learning_rate": 4.68572067247573e-06,
336
+ "logits/chosen": 0.11084002256393433,
337
+ "logits/rejected": 0.025312194600701332,
338
+ "logps/chosen": -257.67864990234375,
339
+ "logps/rejected": -297.58941650390625,
340
+ "loss": 0.2099,
341
+ "rewards/accuracies": 0.33125001192092896,
342
+ "rewards/chosen": 0.00831601768732071,
343
+ "rewards/margins": 0.004710269160568714,
344
+ "rewards/rejected": 0.0036057487595826387,
345
+ "step": 230
346
+ },
347
+ {
348
+ "epoch": 0.26,
349
+ "learning_rate": 4.638989593081364e-06,
350
+ "logits/chosen": 0.1027657762169838,
351
+ "logits/rejected": 0.07919518649578094,
352
+ "logps/chosen": -280.58990478515625,
353
+ "logps/rejected": -299.32000732421875,
354
+ "loss": 0.2063,
355
+ "rewards/accuracies": 0.3125,
356
+ "rewards/chosen": 0.01294595468789339,
357
+ "rewards/margins": 0.004328696057200432,
358
+ "rewards/rejected": 0.008617257699370384,
359
+ "step": 240
360
+ },
361
+ {
362
+ "epoch": 0.27,
363
+ "learning_rate": 4.5892881952959015e-06,
364
+ "logits/chosen": 0.03375561907887459,
365
+ "logits/rejected": 0.06997523456811905,
366
+ "logps/chosen": -232.6851043701172,
367
+ "logps/rejected": -268.51544189453125,
368
+ "loss": 0.2106,
369
+ "rewards/accuracies": 0.3062500059604645,
370
+ "rewards/chosen": 0.0066901808604598045,
371
+ "rewards/margins": 0.004283884074538946,
372
+ "rewards/rejected": 0.0024062972515821457,
373
+ "step": 250
374
+ },
375
+ {
376
+ "epoch": 0.28,
377
+ "learning_rate": 4.536685497209182e-06,
378
+ "logits/chosen": 0.018621403723955154,
379
+ "logits/rejected": 0.031526364386081696,
380
+ "logps/chosen": -255.9957275390625,
381
+ "logps/rejected": -266.628173828125,
382
+ "loss": 0.2014,
383
+ "rewards/accuracies": 0.33125001192092896,
384
+ "rewards/chosen": 0.0048848409205675125,
385
+ "rewards/margins": 0.006752178072929382,
386
+ "rewards/rejected": -0.0018673368031159043,
387
+ "step": 260
388
+ },
389
+ {
390
+ "epoch": 0.29,
391
+ "learning_rate": 4.481254545815943e-06,
392
+ "logits/chosen": 0.012805347330868244,
393
+ "logits/rejected": 0.07229005545377731,
394
+ "logps/chosen": -299.4040222167969,
395
+ "logps/rejected": -313.70330810546875,
396
+ "loss": 0.2113,
397
+ "rewards/accuracies": 0.3499999940395355,
398
+ "rewards/chosen": 0.009505004622042179,
399
+ "rewards/margins": 0.009923577308654785,
400
+ "rewards/rejected": -0.00041857213363982737,
401
+ "step": 270
402
+ },
403
+ {
404
+ "epoch": 0.3,
405
+ "learning_rate": 4.42307231557875e-06,
406
+ "logits/chosen": 0.015601048246026039,
407
+ "logits/rejected": 0.0003178860933985561,
408
+ "logps/chosen": -202.97805786132812,
409
+ "logps/rejected": -224.36019897460938,
410
+ "loss": 0.2096,
411
+ "rewards/accuracies": 0.24375000596046448,
412
+ "rewards/chosen": 0.00423224363476038,
413
+ "rewards/margins": 0.0043428512290120125,
414
+ "rewards/rejected": -0.00011060731776524335,
415
+ "step": 280
416
+ },
417
+ {
418
+ "epoch": 0.31,
419
+ "learning_rate": 4.3622196015370305e-06,
420
+ "logits/chosen": -0.05607699602842331,
421
+ "logits/rejected": -0.054743945598602295,
422
+ "logps/chosen": -289.1156005859375,
423
+ "logps/rejected": -311.282470703125,
424
+ "loss": 0.1986,
425
+ "rewards/accuracies": 0.32499998807907104,
426
+ "rewards/chosen": 0.009154532104730606,
427
+ "rewards/margins": 0.0036413397174328566,
428
+ "rewards/rejected": 0.0055131930857896805,
429
+ "step": 290
430
+ },
431
+ {
432
+ "epoch": 0.32,
433
+ "learning_rate": 4.298780907110648e-06,
434
+ "logits/chosen": -0.047469399869441986,
435
+ "logits/rejected": -0.019701087847352028,
436
+ "logps/chosen": -285.7122497558594,
437
+ "logps/rejected": -288.4371032714844,
438
+ "loss": 0.2106,
439
+ "rewards/accuracies": 0.30000001192092896,
440
+ "rewards/chosen": 0.014235032722353935,
441
+ "rewards/margins": 0.002786442870274186,
442
+ "rewards/rejected": 0.011448590084910393,
443
+ "step": 300
444
+ },
445
+ {
446
+ "epoch": 0.33,
447
+ "learning_rate": 4.23284432675381e-06,
448
+ "logits/chosen": -0.08851752430200577,
449
+ "logits/rejected": -0.04262327030301094,
450
+ "logps/chosen": -283.068115234375,
451
+ "logps/rejected": -291.9867248535156,
452
+ "loss": 0.1975,
453
+ "rewards/accuracies": 0.30000001192092896,
454
+ "rewards/chosen": 0.0208656694740057,
455
+ "rewards/margins": 0.0043783774599432945,
456
+ "rewards/rejected": 0.016487296670675278,
457
+ "step": 310
458
+ },
459
+ {
460
+ "epoch": 0.34,
461
+ "learning_rate": 4.164501423622277e-06,
462
+ "logits/chosen": -0.017280403524637222,
463
+ "logits/rejected": -0.0075028399005532265,
464
+ "logps/chosen": -257.34454345703125,
465
+ "logps/rejected": -289.70953369140625,
466
+ "loss": 0.2046,
467
+ "rewards/accuracies": 0.28125,
468
+ "rewards/chosen": 0.013227777555584908,
469
+ "rewards/margins": 0.0045712715946137905,
470
+ "rewards/rejected": 0.008656504563987255,
471
+ "step": 320
472
+ },
473
+ {
474
+ "epoch": 0.35,
475
+ "learning_rate": 4.0938471024237355e-06,
476
+ "logits/chosen": -0.04185862094163895,
477
+ "logits/rejected": -0.07007891684770584,
478
+ "logps/chosen": -287.055419921875,
479
+ "logps/rejected": -345.94244384765625,
480
+ "loss": 0.2081,
481
+ "rewards/accuracies": 0.39375001192092896,
482
+ "rewards/chosen": 0.01658710278570652,
483
+ "rewards/margins": 0.01041549164801836,
484
+ "rewards/rejected": 0.00617161113768816,
485
+ "step": 330
486
+ },
487
+ {
488
+ "epoch": 0.36,
489
+ "learning_rate": 4.020979477627907e-06,
490
+ "logits/chosen": -0.1421864926815033,
491
+ "logits/rejected": -0.06375002861022949,
492
+ "logps/chosen": -326.5301208496094,
493
+ "logps/rejected": -310.8834228515625,
494
+ "loss": 0.2112,
495
+ "rewards/accuracies": 0.30000001192092896,
496
+ "rewards/chosen": -0.002902784151956439,
497
+ "rewards/margins": -0.0022658538073301315,
498
+ "rewards/rejected": -0.0006369305774569511,
499
+ "step": 340
500
+ },
501
+ {
502
+ "epoch": 0.37,
503
+ "learning_rate": 3.9459997372194105e-06,
504
+ "logits/chosen": -0.10621865093708038,
505
+ "logits/rejected": -0.07717886567115784,
506
+ "logps/chosen": -333.44171142578125,
507
+ "logps/rejected": -341.5768737792969,
508
+ "loss": 0.2066,
509
+ "rewards/accuracies": 0.32499998807907104,
510
+ "rewards/chosen": 0.006154628936201334,
511
+ "rewards/margins": 0.004178780596703291,
512
+ "rewards/rejected": 0.0019758488051593304,
513
+ "step": 350
514
+ },
515
+ {
516
+ "epoch": 0.38,
517
+ "learning_rate": 3.869012002182573e-06,
518
+ "logits/chosen": -0.12794511020183563,
519
+ "logits/rejected": -0.042172081768512726,
520
+ "logps/chosen": -287.95379638671875,
521
+ "logps/rejected": -327.0418701171875,
522
+ "loss": 0.212,
523
+ "rewards/accuracies": 0.38749998807907104,
524
+ "rewards/chosen": 0.010754548013210297,
525
+ "rewards/margins": 0.009464221075177193,
526
+ "rewards/rejected": 0.0012903271708637476,
527
+ "step": 360
528
+ },
529
+ {
530
+ "epoch": 0.39,
531
+ "learning_rate": 3.7901231819133104e-06,
532
+ "logits/chosen": -0.039213500916957855,
533
+ "logits/rejected": -0.06362026929855347,
534
+ "logps/chosen": -317.4774169921875,
535
+ "logps/rejected": -290.7700500488281,
536
+ "loss": 0.2108,
537
+ "rewards/accuracies": 0.28125,
538
+ "rewards/chosen": 0.007657637353986502,
539
+ "rewards/margins": 0.001145793590694666,
540
+ "rewards/rejected": 0.00651184469461441,
541
+ "step": 370
542
+ },
543
+ {
544
+ "epoch": 0.41,
545
+ "learning_rate": 3.709442825758875e-06,
546
+ "logits/chosen": -0.08858563005924225,
547
+ "logits/rejected": -0.11657065153121948,
548
+ "logps/chosen": -254.9708251953125,
549
+ "logps/rejected": -272.658935546875,
550
+ "loss": 0.2083,
551
+ "rewards/accuracies": 0.30000001192092896,
552
+ "rewards/chosen": 0.009081033989787102,
553
+ "rewards/margins": 0.008375957608222961,
554
+ "rewards/rejected": 0.0007050756248645484,
555
+ "step": 380
556
+ },
557
+ {
558
+ "epoch": 0.42,
559
+ "learning_rate": 3.6270829708916113e-06,
560
+ "logits/chosen": -0.17941537499427795,
561
+ "logits/rejected": -0.11935138702392578,
562
+ "logps/chosen": -268.1692810058594,
563
+ "logps/rejected": -290.07513427734375,
564
+ "loss": 0.2068,
565
+ "rewards/accuracies": 0.30000001192092896,
566
+ "rewards/chosen": 0.009544593282043934,
567
+ "rewards/margins": 0.0030928333289921284,
568
+ "rewards/rejected": 0.006451760418713093,
569
+ "step": 390
570
+ },
571
+ {
572
+ "epoch": 0.43,
573
+ "learning_rate": 3.543157986727991e-06,
574
+ "logits/chosen": -0.0844186395406723,
575
+ "logits/rejected": -0.16169007122516632,
576
+ "logps/chosen": -207.09371948242188,
577
+ "logps/rejected": -256.3302917480469,
578
+ "loss": 0.2001,
579
+ "rewards/accuracies": 0.30000001192092896,
580
+ "rewards/chosen": 0.0057321893982589245,
581
+ "rewards/margins": 0.006300176493823528,
582
+ "rewards/rejected": -0.0005679865134879947,
583
+ "step": 400
584
+ },
585
+ {
586
+ "epoch": 0.44,
587
+ "learning_rate": 3.4577844161089614e-06,
588
+ "logits/chosen": -0.14396438002586365,
589
+ "logits/rejected": -0.12671178579330444,
590
+ "logps/chosen": -260.51361083984375,
591
+ "logps/rejected": -311.62823486328125,
592
+ "loss": 0.2088,
593
+ "rewards/accuracies": 0.39375001192092896,
594
+ "rewards/chosen": 0.01238117553293705,
595
+ "rewards/margins": 0.013218119740486145,
596
+ "rewards/rejected": -0.0008369443821720779,
597
+ "step": 410
598
+ },
599
+ {
600
+ "epoch": 0.45,
601
+ "learning_rate": 3.3710808134621577e-06,
602
+ "logits/chosen": -0.15392616391181946,
603
+ "logits/rejected": -0.13130106031894684,
604
+ "logps/chosen": -273.2947082519531,
605
+ "logps/rejected": -266.83770751953125,
606
+ "loss": 0.2103,
607
+ "rewards/accuracies": 0.25,
608
+ "rewards/chosen": 0.008025998249650002,
609
+ "rewards/margins": 0.003167554037645459,
610
+ "rewards/rejected": 0.004858444444835186,
611
+ "step": 420
612
+ },
613
+ {
614
+ "epoch": 0.46,
615
+ "learning_rate": 3.2831675801707126e-06,
616
+ "logits/chosen": -0.12201549857854843,
617
+ "logits/rejected": -0.13493959605693817,
618
+ "logps/chosen": -292.46600341796875,
619
+ "logps/rejected": -327.0417175292969,
620
+ "loss": 0.2043,
621
+ "rewards/accuracies": 0.3187499940395355,
622
+ "rewards/chosen": 0.007237803190946579,
623
+ "rewards/margins": 0.00763185927644372,
624
+ "rewards/rejected": -0.0003940565511584282,
625
+ "step": 430
626
+ },
627
+ {
628
+ "epoch": 0.47,
629
+ "learning_rate": 3.194166797377289e-06,
630
+ "logits/chosen": -0.18244734406471252,
631
+ "logits/rejected": -0.19711770117282867,
632
+ "logps/chosen": -238.47006225585938,
633
+ "logps/rejected": -273.8080139160156,
634
+ "loss": 0.2071,
635
+ "rewards/accuracies": 0.28125,
636
+ "rewards/chosen": 0.006393319461494684,
637
+ "rewards/margins": 0.004871468059718609,
638
+ "rewards/rejected": 0.0015218511689454317,
639
+ "step": 440
640
+ },
641
+ {
642
+ "epoch": 0.48,
643
+ "learning_rate": 3.104202056455501e-06,
644
+ "logits/chosen": -0.1901955008506775,
645
+ "logits/rejected": -0.2543238401412964,
646
+ "logps/chosen": -316.27459716796875,
647
+ "logps/rejected": -338.86798095703125,
648
+ "loss": 0.2094,
649
+ "rewards/accuracies": 0.3062500059604645,
650
+ "rewards/chosen": 0.006844863295555115,
651
+ "rewards/margins": 0.0016734026139602065,
652
+ "rewards/rejected": 0.005171460565179586,
653
+ "step": 450
654
+ },
655
+ {
656
+ "epoch": 0.49,
657
+ "learning_rate": 3.013398287384144e-06,
658
+ "logits/chosen": -0.24599988758563995,
659
+ "logits/rejected": -0.2185782939195633,
660
+ "logps/chosen": -298.28271484375,
661
+ "logps/rejected": -333.5831604003906,
662
+ "loss": 0.2115,
663
+ "rewards/accuracies": 0.3375000059604645,
664
+ "rewards/chosen": 0.011538187973201275,
665
+ "rewards/margins": 0.006993747316300869,
666
+ "rewards/rejected": 0.004544440656900406,
667
+ "step": 460
668
+ },
669
+ {
670
+ "epoch": 0.5,
671
+ "learning_rate": 2.9218815852625717e-06,
672
+ "logits/chosen": -0.24635422229766846,
673
+ "logits/rejected": -0.19935253262519836,
674
+ "logps/chosen": -315.05218505859375,
675
+ "logps/rejected": -328.00213623046875,
676
+ "loss": 0.2077,
677
+ "rewards/accuracies": 0.32499998807907104,
678
+ "rewards/chosen": 0.007171535398811102,
679
+ "rewards/margins": 0.004863684065639973,
680
+ "rewards/rejected": 0.002307850867509842,
681
+ "step": 470
682
+ },
683
+ {
684
+ "epoch": 0.51,
685
+ "learning_rate": 2.829779035208113e-06,
686
+ "logits/chosen": -0.198508158326149,
687
+ "logits/rejected": -0.16920627653598785,
688
+ "logps/chosen": -279.35040283203125,
689
+ "logps/rejected": -301.2700500488281,
690
+ "loss": 0.1993,
691
+ "rewards/accuracies": 0.29374998807907104,
692
+ "rewards/chosen": 0.007105141878128052,
693
+ "rewards/margins": 0.0047775073908269405,
694
+ "rewards/rejected": 0.0023276349529623985,
695
+ "step": 480
696
+ },
697
+ {
698
+ "epoch": 0.52,
699
+ "learning_rate": 2.737218535878705e-06,
700
+ "logits/chosen": -0.24639983475208282,
701
+ "logits/rejected": -0.24618887901306152,
702
+ "logps/chosen": -284.9212341308594,
703
+ "logps/rejected": -297.228271484375,
704
+ "loss": 0.2031,
705
+ "rewards/accuracies": 0.2874999940395355,
706
+ "rewards/chosen": 0.0051613058894872665,
707
+ "rewards/margins": 0.004140165634453297,
708
+ "rewards/rejected": 0.0010211400222033262,
709
+ "step": 490
710
+ },
711
+ {
712
+ "epoch": 0.53,
713
+ "learning_rate": 2.64432862186579e-06,
714
+ "logits/chosen": -0.26932382583618164,
715
+ "logits/rejected": -0.24082553386688232,
716
+ "logps/chosen": -272.0877990722656,
717
+ "logps/rejected": -298.43389892578125,
718
+ "loss": 0.2158,
719
+ "rewards/accuracies": 0.2874999940395355,
720
+ "rewards/chosen": 0.007115568965673447,
721
+ "rewards/margins": 0.0035896040499210358,
722
+ "rewards/rejected": 0.003525964915752411,
723
+ "step": 500
724
+ },
725
+ {
726
+ "epoch": 0.54,
727
+ "learning_rate": 2.551238285204126e-06,
728
+ "logits/chosen": -0.23894211649894714,
729
+ "logits/rejected": -0.18848392367362976,
730
+ "logps/chosen": -248.10400390625,
731
+ "logps/rejected": -285.8645935058594,
732
+ "loss": 0.2031,
733
+ "rewards/accuracies": 0.3125,
734
+ "rewards/chosen": 0.01616227999329567,
735
+ "rewards/margins": 0.007524468004703522,
736
+ "rewards/rejected": 0.008637811988592148,
737
+ "step": 510
738
+ },
739
+ {
740
+ "epoch": 0.55,
741
+ "learning_rate": 2.4580767962463688e-06,
742
+ "logits/chosen": -0.22787991166114807,
743
+ "logits/rejected": -0.20557613670825958,
744
+ "logps/chosen": -230.24072265625,
745
+ "logps/rejected": -255.6363983154297,
746
+ "loss": 0.2083,
747
+ "rewards/accuracies": 0.26875001192092896,
748
+ "rewards/chosen": 0.011957610957324505,
749
+ "rewards/margins": 0.006479016039520502,
750
+ "rewards/rejected": 0.005478595849126577,
751
+ "step": 520
752
+ },
753
+ {
754
+ "epoch": 0.57,
755
+ "learning_rate": 2.3649735241511546e-06,
756
+ "logits/chosen": -0.2212837040424347,
757
+ "logits/rejected": -0.19795656204223633,
758
+ "logps/chosen": -331.7243347167969,
759
+ "logps/rejected": -322.8146667480469,
760
+ "loss": 0.2026,
761
+ "rewards/accuracies": 0.28125,
762
+ "rewards/chosen": 0.006065345369279385,
763
+ "rewards/margins": 0.00012618684559129179,
764
+ "rewards/rejected": 0.005939158145338297,
765
+ "step": 530
766
+ },
767
+ {
768
+ "epoch": 0.58,
769
+ "learning_rate": 2.2720577572339914e-06,
770
+ "logits/chosen": -0.23584437370300293,
771
+ "logits/rejected": -0.22455132007598877,
772
+ "logps/chosen": -276.896484375,
773
+ "logps/rejected": -315.27154541015625,
774
+ "loss": 0.2034,
775
+ "rewards/accuracies": 0.3062500059604645,
776
+ "rewards/chosen": 0.009026501327753067,
777
+ "rewards/margins": 0.0023224998731166124,
778
+ "rewards/rejected": 0.006704001221805811,
779
+ "step": 540
780
+ },
781
+ {
782
+ "epoch": 0.59,
783
+ "learning_rate": 2.1794585234303995e-06,
784
+ "logits/chosen": -0.20962193608283997,
785
+ "logits/rejected": -0.20796938240528107,
786
+ "logps/chosen": -241.4213409423828,
787
+ "logps/rejected": -284.7034606933594,
788
+ "loss": 0.2051,
789
+ "rewards/accuracies": 0.33125001192092896,
790
+ "rewards/chosen": 0.012364699505269527,
791
+ "rewards/margins": 0.010054551064968109,
792
+ "rewards/rejected": 0.0023101491387933493,
793
+ "step": 550
794
+ },
795
+ {
796
+ "epoch": 0.6,
797
+ "learning_rate": 2.0873044111206407e-06,
798
+ "logits/chosen": -0.2906285524368286,
799
+ "logits/rejected": -0.24792306125164032,
800
+ "logps/chosen": -259.7020568847656,
801
+ "logps/rejected": -283.08673095703125,
802
+ "loss": 0.204,
803
+ "rewards/accuracies": 0.3062500059604645,
804
+ "rewards/chosen": 0.008635065518319607,
805
+ "rewards/margins": 0.004379413556307554,
806
+ "rewards/rejected": 0.004255651496350765,
807
+ "step": 560
808
+ },
809
+ {
810
+ "epoch": 0.61,
811
+ "learning_rate": 1.9957233905648293e-06,
812
+ "logits/chosen": -0.3118959367275238,
813
+ "logits/rejected": -0.25501811504364014,
814
+ "logps/chosen": -276.6239013671875,
815
+ "logps/rejected": -296.7880554199219,
816
+ "loss": 0.2075,
817
+ "rewards/accuracies": 0.39375001192092896,
818
+ "rewards/chosen": 0.014445966109633446,
819
+ "rewards/margins": 0.011028407141566277,
820
+ "rewards/rejected": 0.0034175582695752382,
821
+ "step": 570
822
+ },
823
+ {
824
+ "epoch": 0.62,
825
+ "learning_rate": 1.904842636196402e-06,
826
+ "logits/chosen": -0.31071215867996216,
827
+ "logits/rejected": -0.27819615602493286,
828
+ "logps/chosen": -312.6679992675781,
829
+ "logps/rejected": -342.43603515625,
830
+ "loss": 0.216,
831
+ "rewards/accuracies": 0.35624998807907104,
832
+ "rewards/chosen": 0.008847331628203392,
833
+ "rewards/margins": 0.008721059188246727,
834
+ "rewards/rejected": 0.00012627345859073102,
835
+ "step": 580
836
+ },
837
+ {
838
+ "epoch": 0.63,
839
+ "learning_rate": 1.814788350020726e-06,
840
+ "logits/chosen": -0.291133314371109,
841
+ "logits/rejected": -0.31414347887039185,
842
+ "logps/chosen": -303.83502197265625,
843
+ "logps/rejected": -353.6219482421875,
844
+ "loss": 0.2042,
845
+ "rewards/accuracies": 0.3375000059604645,
846
+ "rewards/chosen": 0.009907987900078297,
847
+ "rewards/margins": 0.008294956758618355,
848
+ "rewards/rejected": 0.0016130302101373672,
849
+ "step": 590
850
+ },
851
+ {
852
+ "epoch": 0.64,
853
+ "learning_rate": 1.725685586364051e-06,
854
+ "logits/chosen": -0.32893937826156616,
855
+ "logits/rejected": -0.34056320786476135,
856
+ "logps/chosen": -279.0770263671875,
857
+ "logps/rejected": -284.71649169921875,
858
+ "loss": 0.2059,
859
+ "rewards/accuracies": 0.28125,
860
+ "rewards/chosen": 0.0037155388854444027,
861
+ "rewards/margins": 0.0032048276625573635,
862
+ "rewards/rejected": 0.000510711339302361,
863
+ "step": 600
864
+ },
865
+ {
866
+ "epoch": 0.65,
867
+ "learning_rate": 1.6376580782162172e-06,
868
+ "logits/chosen": -0.41928038001060486,
869
+ "logits/rejected": -0.3275998830795288,
870
+ "logps/chosen": -322.4991149902344,
871
+ "logps/rejected": -333.2740478515625,
872
+ "loss": 0.2,
873
+ "rewards/accuracies": 0.30000001192092896,
874
+ "rewards/chosen": 0.005991303361952305,
875
+ "rewards/margins": 0.004061755258589983,
876
+ "rewards/rejected": 0.0019295478705316782,
877
+ "step": 610
878
+ },
879
+ {
880
+ "epoch": 0.66,
881
+ "learning_rate": 1.550828065408227e-06,
882
+ "logits/chosen": -0.382260799407959,
883
+ "logits/rejected": -0.40049606561660767,
884
+ "logps/chosen": -250.11538696289062,
885
+ "logps/rejected": -292.5018615722656,
886
+ "loss": 0.2112,
887
+ "rewards/accuracies": 0.3125,
888
+ "rewards/chosen": 0.003382362425327301,
889
+ "rewards/margins": 0.007972141727805138,
890
+ "rewards/rejected": -0.004589778836816549,
891
+ "step": 620
892
+ },
893
+ {
894
+ "epoch": 0.67,
895
+ "learning_rate": 1.4653161248633053e-06,
896
+ "logits/chosen": -0.38077178597450256,
897
+ "logits/rejected": -0.3729470372200012,
898
+ "logps/chosen": -257.94268798828125,
899
+ "logps/rejected": -269.142822265625,
900
+ "loss": 0.2012,
901
+ "rewards/accuracies": 0.21250000596046448,
902
+ "rewards/chosen": 0.0004567865398712456,
903
+ "rewards/margins": 0.0030794304329901934,
904
+ "rewards/rejected": -0.0026226439513266087,
905
+ "step": 630
906
+ },
907
+ {
908
+ "epoch": 0.68,
909
+ "learning_rate": 1.381241003157162e-06,
910
+ "logits/chosen": -0.3339672386646271,
911
+ "logits/rejected": -0.32460784912109375,
912
+ "logps/chosen": -285.16754150390625,
913
+ "logps/rejected": -296.7590026855469,
914
+ "loss": 0.2119,
915
+ "rewards/accuracies": 0.3062500059604645,
916
+ "rewards/chosen": -0.0008748549735173583,
917
+ "rewards/margins": 0.006057094782590866,
918
+ "rewards/rejected": -0.0069319503381848335,
919
+ "step": 640
920
+ },
921
+ {
922
+ "epoch": 0.69,
923
+ "learning_rate": 1.298719451619979e-06,
924
+ "logits/chosen": -0.3941500782966614,
925
+ "logits/rejected": -0.3795050084590912,
926
+ "logps/chosen": -330.65777587890625,
927
+ "logps/rejected": -311.400146484375,
928
+ "loss": 0.2109,
929
+ "rewards/accuracies": 0.28125,
930
+ "rewards/chosen": -0.001492818701080978,
931
+ "rewards/margins": 0.0023579366970807314,
932
+ "rewards/rejected": -0.003850755747407675,
933
+ "step": 650
934
+ },
935
+ {
936
+ "epoch": 0.7,
937
+ "learning_rate": 1.2178660642091036e-06,
938
+ "logits/chosen": -0.34223705530166626,
939
+ "logits/rejected": -0.3617275357246399,
940
+ "logps/chosen": -322.3763122558594,
941
+ "logps/rejected": -316.99615478515625,
942
+ "loss": 0.2077,
943
+ "rewards/accuracies": 0.3125,
944
+ "rewards/chosen": -0.0017126466846093535,
945
+ "rewards/margins": 0.003889720421284437,
946
+ "rewards/rejected": -0.005602366756647825,
947
+ "step": 660
948
+ },
949
+ {
950
+ "epoch": 0.71,
951
+ "learning_rate": 1.1387931183775821e-06,
952
+ "logits/chosen": -0.34621626138687134,
953
+ "logits/rejected": -0.35351529717445374,
954
+ "logps/chosen": -324.47442626953125,
955
+ "logps/rejected": -359.94256591796875,
956
+ "loss": 0.2096,
957
+ "rewards/accuracies": 0.3375000059604645,
958
+ "rewards/chosen": -0.002571218181401491,
959
+ "rewards/margins": 0.00809338316321373,
960
+ "rewards/rejected": -0.010664600878953934,
961
+ "step": 670
962
+ },
963
+ {
964
+ "epoch": 0.73,
965
+ "learning_rate": 1.061610419159532e-06,
966
+ "logits/chosen": -0.38271594047546387,
967
+ "logits/rejected": -0.3791079521179199,
968
+ "logps/chosen": -278.4394226074219,
969
+ "logps/rejected": -310.6518859863281,
970
+ "loss": 0.2058,
971
+ "rewards/accuracies": 0.32499998807907104,
972
+ "rewards/chosen": 0.0030440581031143665,
973
+ "rewards/margins": 0.011548629961907864,
974
+ "rewards/rejected": -0.00850457139313221,
975
+ "step": 680
976
+ },
977
+ {
978
+ "epoch": 0.74,
979
+ "learning_rate": 9.864251466888364e-07,
980
+ "logits/chosen": -0.3675210475921631,
981
+ "logits/rejected": -0.4030851423740387,
982
+ "logps/chosen": -281.1656494140625,
983
+ "logps/rejected": -319.8685607910156,
984
+ "loss": 0.2095,
985
+ "rewards/accuracies": 0.36250001192092896,
986
+ "rewards/chosen": 0.0020102262496948242,
987
+ "rewards/margins": 0.010647585615515709,
988
+ "rewards/rejected": -0.00863736029714346,
989
+ "step": 690
990
+ },
991
+ {
992
+ "epoch": 0.75,
993
+ "learning_rate": 9.133417073629288e-07,
994
+ "logits/chosen": -0.3838180601596832,
995
+ "logits/rejected": -0.36158618330955505,
996
+ "logps/chosen": -293.8908386230469,
997
+ "logps/rejected": -322.5982360839844,
998
+ "loss": 0.2055,
999
+ "rewards/accuracies": 0.3375000059604645,
1000
+ "rewards/chosen": -0.003357111709192395,
1001
+ "rewards/margins": 0.009485922753810883,
1002
+ "rewards/rejected": -0.01284303329885006,
1003
+ "step": 700
1004
+ },
1005
+ {
1006
+ "epoch": 0.76,
1007
+ "learning_rate": 8.424615888583332e-07,
1008
+ "logits/chosen": -0.3621710240840912,
1009
+ "logits/rejected": -0.3666265606880188,
1010
+ "logps/chosen": -317.04534912109375,
1011
+ "logps/rejected": -355.4456481933594,
1012
+ "loss": 0.2086,
1013
+ "rewards/accuracies": 0.36250001192092896,
1014
+ "rewards/chosen": -0.0012256987392902374,
1015
+ "rewards/margins": 0.009772216901183128,
1016
+ "rewards/rejected": -0.010997915640473366,
1017
+ "step": 710
1018
+ },
1019
+ {
1020
+ "epoch": 0.77,
1021
+ "learning_rate": 7.738832191993092e-07,
1022
+ "logits/chosen": -0.38925933837890625,
1023
+ "logits/rejected": -0.34841951727867126,
1024
+ "logps/chosen": -233.68087768554688,
1025
+ "logps/rejected": -276.94183349609375,
1026
+ "loss": 0.2102,
1027
+ "rewards/accuracies": 0.29374998807907104,
1028
+ "rewards/chosen": 0.0036917359102517366,
1029
+ "rewards/margins": 0.009330306202173233,
1030
+ "rewards/rejected": -0.005638569127768278,
1031
+ "step": 720
1032
+ },
1033
+ {
1034
+ "epoch": 0.78,
1035
+ "learning_rate": 7.077018300752917e-07,
1036
+ "logits/chosen": -0.42922061681747437,
1037
+ "logits/rejected": -0.37054792046546936,
1038
+ "logps/chosen": -305.6702880859375,
1039
+ "logps/rejected": -336.8018798828125,
1040
+ "loss": 0.2114,
1041
+ "rewards/accuracies": 0.29374998807907104,
1042
+ "rewards/chosen": 0.0010867089731618762,
1043
+ "rewards/margins": 0.007067729718983173,
1044
+ "rewards/rejected": -0.005981021095067263,
1045
+ "step": 730
1046
+ },
1047
+ {
1048
+ "epoch": 0.79,
1049
+ "learning_rate": 6.440093245969342e-07,
1050
+ "logits/chosen": -0.36909401416778564,
1051
+ "logits/rejected": -0.3725087344646454,
1052
+ "logps/chosen": -314.70892333984375,
1053
+ "logps/rejected": -341.8924255371094,
1054
+ "loss": 0.2081,
1055
+ "rewards/accuracies": 0.35624998807907104,
1056
+ "rewards/chosen": -0.0023520714603364468,
1057
+ "rewards/margins": 0.00565011752769351,
1058
+ "rewards/rejected": -0.008002189919352531,
1059
+ "step": 740
1060
+ },
1061
+ {
1062
+ "epoch": 0.8,
1063
+ "learning_rate": 5.828941496744075e-07,
1064
+ "logits/chosen": -0.3493804633617401,
1065
+ "logits/rejected": -0.36864691972732544,
1066
+ "logps/chosen": -318.31158447265625,
1067
+ "logps/rejected": -329.7109680175781,
1068
+ "loss": 0.2081,
1069
+ "rewards/accuracies": 0.30000001192092896,
1070
+ "rewards/chosen": -0.004697232972830534,
1071
+ "rewards/margins": 0.0050050243735313416,
1072
+ "rewards/rejected": -0.009702257812023163,
1073
+ "step": 750
1074
+ },
1075
+ {
1076
+ "epoch": 0.81,
1077
+ "learning_rate": 5.244411731951671e-07,
1078
+ "logits/chosen": -0.3421717882156372,
1079
+ "logits/rejected": -0.3551585376262665,
1080
+ "logps/chosen": -284.5762939453125,
1081
+ "logps/rejected": -303.1432189941406,
1082
+ "loss": 0.2021,
1083
+ "rewards/accuracies": 0.30000001192092896,
1084
+ "rewards/chosen": -0.006993536837399006,
1085
+ "rewards/margins": 0.010174809023737907,
1086
+ "rewards/rejected": -0.017168346792459488,
1087
+ "step": 760
1088
+ },
1089
+ {
1090
+ "epoch": 0.82,
1091
+ "learning_rate": 4.6873156617173594e-07,
1092
+ "logits/chosen": -0.3510192930698395,
1093
+ "logits/rejected": -0.351273775100708,
1094
+ "logps/chosen": -283.2718505859375,
1095
+ "logps/rejected": -298.7680358886719,
1096
+ "loss": 0.2054,
1097
+ "rewards/accuracies": 0.32499998807907104,
1098
+ "rewards/chosen": -0.001104214577935636,
1099
+ "rewards/margins": 0.008227458223700523,
1100
+ "rewards/rejected": -0.00933167152106762,
1101
+ "step": 770
1102
+ },
1103
+ {
1104
+ "epoch": 0.83,
1105
+ "learning_rate": 4.1584269002318653e-07,
1106
+ "logits/chosen": -0.3711737394332886,
1107
+ "logits/rejected": -0.35552793741226196,
1108
+ "logps/chosen": -317.1517028808594,
1109
+ "logps/rejected": -300.7435607910156,
1110
+ "loss": 0.2134,
1111
+ "rewards/accuracies": 0.25,
1112
+ "rewards/chosen": -0.0016167514258995652,
1113
+ "rewards/margins": -0.0006161117926239967,
1114
+ "rewards/rejected": -0.0010006397496908903,
1115
+ "step": 780
1116
+ },
1117
+ {
1118
+ "epoch": 0.84,
1119
+ "learning_rate": 3.658479891468258e-07,
1120
+ "logits/chosen": -0.382574200630188,
1121
+ "logits/rejected": -0.37562495470046997,
1122
+ "logps/chosen": -311.616943359375,
1123
+ "logps/rejected": -327.84442138671875,
1124
+ "loss": 0.203,
1125
+ "rewards/accuracies": 0.2874999940395355,
1126
+ "rewards/chosen": -0.003906398545950651,
1127
+ "rewards/margins": 0.0055776028893888,
1128
+ "rewards/rejected": -0.009484002366662025,
1129
+ "step": 790
1130
+ },
1131
+ {
1132
+ "epoch": 0.85,
1133
+ "learning_rate": 3.18816888929272e-07,
1134
+ "logits/chosen": -0.3587433695793152,
1135
+ "logits/rejected": -0.3470574915409088,
1136
+ "logps/chosen": -262.9179382324219,
1137
+ "logps/rejected": -290.76214599609375,
1138
+ "loss": 0.2072,
1139
+ "rewards/accuracies": 0.2874999940395355,
1140
+ "rewards/chosen": -0.0015628261025995016,
1141
+ "rewards/margins": 0.004087894223630428,
1142
+ "rewards/rejected": -0.005650720559060574,
1143
+ "step": 800
1144
+ },
1145
+ {
1146
+ "epoch": 0.86,
1147
+ "learning_rate": 2.748146993385484e-07,
1148
+ "logits/chosen": -0.3712787926197052,
1149
+ "logits/rejected": -0.3702451288700104,
1150
+ "logps/chosen": -261.8824768066406,
1151
+ "logps/rejected": -319.24224853515625,
1152
+ "loss": 0.2093,
1153
+ "rewards/accuracies": 0.3812499940395355,
1154
+ "rewards/chosen": 0.0052637443877756596,
1155
+ "rewards/margins": 0.014261695556342602,
1156
+ "rewards/rejected": -0.008997952565550804,
1157
+ "step": 810
1158
+ },
1159
+ {
1160
+ "epoch": 0.87,
1161
+ "learning_rate": 2.3390252423108077e-07,
1162
+ "logits/chosen": -0.32240134477615356,
1163
+ "logits/rejected": -0.37527790665626526,
1164
+ "logps/chosen": -248.02835083007812,
1165
+ "logps/rejected": -285.5946350097656,
1166
+ "loss": 0.1976,
1167
+ "rewards/accuracies": 0.2874999940395355,
1168
+ "rewards/chosen": 0.0007311357767321169,
1169
+ "rewards/margins": 0.01329697947949171,
1170
+ "rewards/rejected": -0.012565843760967255,
1171
+ "step": 820
1172
+ },
1173
+ {
1174
+ "epoch": 0.89,
1175
+ "learning_rate": 1.961371764995243e-07,
1176
+ "logits/chosen": -0.37857678532600403,
1177
+ "logits/rejected": -0.35946762561798096,
1178
+ "logps/chosen": -203.0977020263672,
1179
+ "logps/rejected": -246.66049194335938,
1180
+ "loss": 0.1988,
1181
+ "rewards/accuracies": 0.2874999940395355,
1182
+ "rewards/chosen": 0.0019016917794942856,
1183
+ "rewards/margins": 0.00954529084265232,
1184
+ "rewards/rejected": -0.007643598131835461,
1185
+ "step": 830
1186
+ },
1187
+ {
1188
+ "epoch": 0.9,
1189
+ "learning_rate": 1.61571099179261e-07,
1190
+ "logits/chosen": -0.3962993621826172,
1191
+ "logits/rejected": -0.4342166781425476,
1192
+ "logps/chosen": -274.3848571777344,
1193
+ "logps/rejected": -307.4154357910156,
1194
+ "loss": 0.2018,
1195
+ "rewards/accuracies": 0.3187499940395355,
1196
+ "rewards/chosen": 0.004304953850805759,
1197
+ "rewards/margins": 0.008778423070907593,
1198
+ "rewards/rejected": -0.004473468754440546,
1199
+ "step": 840
1200
+ },
1201
+ {
1202
+ "epoch": 0.91,
1203
+ "learning_rate": 1.3025229262312367e-07,
1204
+ "logits/chosen": -0.36348485946655273,
1205
+ "logits/rejected": -0.3567599058151245,
1206
+ "logps/chosen": -286.14520263671875,
1207
+ "logps/rejected": -310.6136474609375,
1208
+ "loss": 0.1949,
1209
+ "rewards/accuracies": 0.3125,
1210
+ "rewards/chosen": -0.0005280302138999104,
1211
+ "rewards/margins": 0.006353036966174841,
1212
+ "rewards/rejected": -0.006881065666675568,
1213
+ "step": 850
1214
+ },
1215
+ {
1216
+ "epoch": 0.92,
1217
+ "learning_rate": 1.0222424784546853e-07,
1218
+ "logits/chosen": -0.3500184118747711,
1219
+ "logits/rejected": -0.342162549495697,
1220
+ "logps/chosen": -286.3838806152344,
1221
+ "logps/rejected": -341.6246337890625,
1222
+ "loss": 0.2074,
1223
+ "rewards/accuracies": 0.3125,
1224
+ "rewards/chosen": -0.00207870383746922,
1225
+ "rewards/margins": 0.011935941874980927,
1226
+ "rewards/rejected": -0.014014643616974354,
1227
+ "step": 860
1228
+ },
1229
+ {
1230
+ "epoch": 0.93,
1231
+ "learning_rate": 7.752588612816553e-08,
1232
+ "logits/chosen": -0.39309799671173096,
1233
+ "logits/rejected": -0.33321046829223633,
1234
+ "logps/chosen": -252.35006713867188,
1235
+ "logps/rejected": -278.26580810546875,
1236
+ "loss": 0.2062,
1237
+ "rewards/accuracies": 0.32499998807907104,
1238
+ "rewards/chosen": 0.001301859738305211,
1239
+ "rewards/margins": 0.009171300567686558,
1240
+ "rewards/rejected": -0.00786944106221199,
1241
+ "step": 870
1242
+ },
1243
+ {
1244
+ "epoch": 0.94,
1245
+ "learning_rate": 5.619150497236991e-08,
1246
+ "logits/chosen": -0.38764262199401855,
1247
+ "logits/rejected": -0.3758700489997864,
1248
+ "logps/chosen": -274.4250183105469,
1249
+ "logps/rejected": -305.24371337890625,
1250
+ "loss": 0.2028,
1251
+ "rewards/accuracies": 0.30000001192092896,
1252
+ "rewards/chosen": -0.004850239958614111,
1253
+ "rewards/margins": 0.006510905921459198,
1254
+ "rewards/rejected": -0.011361144483089447,
1255
+ "step": 880
1256
+ },
1257
+ {
1258
+ "epoch": 0.95,
1259
+ "learning_rate": 3.825073047112743e-08,
1260
+ "logits/chosen": -0.35581859946250916,
1261
+ "logits/rejected": -0.36488813161849976,
1262
+ "logps/chosen": -298.48529052734375,
1263
+ "logps/rejected": -312.73004150390625,
1264
+ "loss": 0.2046,
1265
+ "rewards/accuracies": 0.3187499940395355,
1266
+ "rewards/chosen": -0.0006998656317591667,
1267
+ "rewards/margins": 0.0070778196677565575,
1268
+ "rewards/rejected": -0.007777686230838299,
1269
+ "step": 890
1270
+ },
1271
+ {
1272
+ "epoch": 0.96,
1273
+ "learning_rate": 2.372847616895685e-08,
1274
+ "logits/chosen": -0.3638850152492523,
1275
+ "logits/rejected": -0.3694193363189697,
1276
+ "logps/chosen": -304.8964538574219,
1277
+ "logps/rejected": -322.5799560546875,
1278
+ "loss": 0.1952,
1279
+ "rewards/accuracies": 0.38749998807907104,
1280
+ "rewards/chosen": 4.08724881708622e-05,
1281
+ "rewards/margins": 0.00868744682520628,
1282
+ "rewards/rejected": -0.008646574802696705,
1283
+ "step": 900
1284
+ },
1285
+ {
1286
+ "epoch": 0.97,
1287
+ "learning_rate": 1.264490846553279e-08,
1288
+ "logits/chosen": -0.40093159675598145,
1289
+ "logits/rejected": -0.35080868005752563,
1290
+ "logps/chosen": -306.1406555175781,
1291
+ "logps/rejected": -354.2425842285156,
1292
+ "loss": 0.2078,
1293
+ "rewards/accuracies": 0.36250001192092896,
1294
+ "rewards/chosen": -0.0017454406479373574,
1295
+ "rewards/margins": 0.01031532697379589,
1296
+ "rewards/rejected": -0.012060767039656639,
1297
+ "step": 910
1298
+ },
1299
+ {
1300
+ "epoch": 0.98,
1301
+ "learning_rate": 5.015418611516165e-09,
1302
+ "logits/chosen": -0.4077305793762207,
1303
+ "logits/rejected": -0.3492675721645355,
1304
+ "logps/chosen": -302.4758605957031,
1305
+ "logps/rejected": -299.92266845703125,
1306
+ "loss": 0.2094,
1307
+ "rewards/accuracies": 0.28125,
1308
+ "rewards/chosen": -0.0018849109765142202,
1309
+ "rewards/margins": 0.002199392067268491,
1310
+ "rewards/rejected": -0.004084303043782711,
1311
+ "step": 920
1312
+ },
1313
+ {
1314
+ "epoch": 0.99,
1315
+ "learning_rate": 8.506013354186993e-10,
1316
+ "logits/chosen": -0.35477060079574585,
1317
+ "logits/rejected": -0.3815353512763977,
1318
+ "logps/chosen": -273.3544006347656,
1319
+ "logps/rejected": -309.14599609375,
1320
+ "loss": 0.2057,
1321
+ "rewards/accuracies": 0.34375,
1322
+ "rewards/chosen": 0.0007420766050927341,
1323
+ "rewards/margins": 0.01061691902577877,
1324
+ "rewards/rejected": -0.009874843060970306,
1325
+ "step": 930
1326
+ },
1327
+ {
1328
+ "epoch": 1.0,
1329
+ "step": 937,
1330
+ "total_flos": 0.0,
1331
+ "train_loss": 0.029720235659702103,
1332
+ "train_runtime": 1648.0199,
1333
+ "train_samples_per_second": 18.204,
1334
+ "train_steps_per_second": 0.569
1335
+ }
1336
+ ],
1337
+ "logging_steps": 10,
1338
+ "max_steps": 937,
1339
+ "num_input_tokens_seen": 0,
1340
+ "num_train_epochs": 1,
1341
+ "save_steps": 100,
1342
+ "total_flos": 0.0,
1343
+ "train_batch_size": 4,
1344
+ "trial_name": null,
1345
+ "trial_params": null
1346
+ }
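The file above is the standard `trainer_state.json` written by the Trainer. If you want to inspect how the DPO reward margin evolves over the 937 steps, a small sketch (assuming a local copy of the file):

```python
# Extract step-wise DPO reward margins from trainer_state.json (local copy assumed).
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# The final entry is a run summary without reward fields, so filter on the key.
entries = [e for e in state["log_history"] if "rewards/margins" in e]

for e in entries[:5]:
    print(f"step {e['step']:4d}  rewards/margins {e['rewards/margins']:+.6f}")
```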