yakazimir committed
Commit 4e00741 · verified · 1 Parent(s): 40b48f0

Model save
Files changed (5)
  1. README.md +75 -0
  2. all_results.json +9 -0
  3. generation_config.json +12 -0
  4. train_results.json +9 -0
  5. trainer_state.json +1515 -0
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ library_name: transformers
+ license: llama3
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ tags:
+ - trl
+ - simpo
+ - generated_from_trainer
+ model-index:
+ - name: llama3_qfUNL_best_entropy
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama3_qfUNL_best_entropy
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.0936
+ - Rewards/chosen: -5.3144
+ - Rewards/rejected: -7.1419
+ - Rewards/accuracies: 0.7771
+ - Rewards/margins: 1.8275
+ - Logps/rejected: -0.7142
+ - Logps/chosen: -0.5314
+ - Logits/rejected: -1.3326
+ - Logits/chosen: -1.3731
+ - Semantic Entropy: 0.9976
+
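The reward metrics above are internally consistent: the margin is the chosen reward minus the rejected reward, and each reward appears to be the corresponding length-normalized log-probability scaled by a factor of 10 (presumably the SimPO β; this factor is inferred from the numbers, not stated in the card). A quick check:

```python
# Eval metrics copied from the card above.
logps_chosen, logps_rejected = -0.5314, -0.7142
rewards_chosen, rewards_rejected = -5.3144, -7.1419

margin = rewards_chosen - rewards_rejected   # matches Rewards/margins: 1.8275
scale = rewards_chosen / logps_chosen        # ~10, the inferred beta

print(round(margin, 4), round(scale, 2))
```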
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-06
+ - train_batch_size: 2
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1.0
+
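The totals in the list follow from the per-device values: the effective train batch size is the per-device batch size times the number of devices times the gradient-accumulation steps. A minimal arithmetic check using only the listed values:

```python
train_batch_size = 2            # per device
num_devices = 4
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)   # 128, as listed

eval_batch_size = 4             # per device; eval does not accumulate gradients
total_eval_batch_size = eval_batch_size * num_devices
print(total_eval_batch_size)    # 16, as listed
```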
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Semantic Entropy |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:----------------:|
+ | 2.0424 | 0.8743 | 400 | 2.0936 | -5.3144 | -7.1419 | 0.7771 | 1.8275 | -0.7142 | -0.5314 | -1.3326 | -1.3731 | 0.9976 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9989071038251366,
+ "total_flos": 0.0,
+ "train_loss": 2.4655840506438875,
+ "train_runtime": 5955.1851,
+ "train_samples": 58558,
+ "train_samples_per_second": 9.833,
+ "train_steps_per_second": 0.077
+ }
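The throughput fields are consistent with the sample count and runtime; taking global_step = 457 from trainer_state.json, the steps-per-second figure checks out as well:

```python
train_samples = 58558
train_runtime = 5955.1851       # seconds
global_step = 457               # from trainer_state.json in this commit

print(round(train_samples / train_runtime, 3))  # 9.833  (train_samples_per_second)
print(round(global_step / train_runtime, 3))    # 0.077  (train_steps_per_second)
```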
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "bos_token_id": 128000,
+ "do_sample": true,
+ "eos_token_id": [
+ 128001,
+ 128009
+ ],
+ "max_length": 4096,
+ "temperature": 0.6,
+ "top_p": 0.9,
+ "transformers_version": "4.44.2"
+ }
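With `do_sample: true`, temperature 0.6 sharpens the next-token distribution and top_p 0.9 restricts sampling to the smallest set of tokens whose cumulative probability reaches 0.9. A minimal pure-Python sketch of that filtering step (illustrative only, not the transformers implementation):

```python
import math

def top_p_candidates(logits, temperature=0.6, top_p=0.9):
    """Return the token indices nucleus sampling would keep, by probability rank."""
    scaled = [l / temperature for l in logits]      # temperature sharpening
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]        # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cumulative = [], 0.0
    for i in order:                                  # smallest prefix with mass >= top_p
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

print(top_p_candidates([5.0, 4.0, 1.0, 0.0]))  # [0, 1]: the two likely tokens already cover >= 90%
```

Raising `top_p` toward 1.0 admits more low-probability tokens into the candidate set.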
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9989071038251366,
+ "total_flos": 0.0,
+ "train_loss": 2.4655840506438875,
+ "train_runtime": 5955.1851,
+ "train_samples": 58558,
+ "train_samples_per_second": 9.833,
+ "train_steps_per_second": 0.077
+ }
trainer_state.json ADDED
@@ -0,0 +1,1515 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9989071038251366,
+ "eval_steps": 400,
+ "global_step": 457,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.01092896174863388,
+ "grad_norm": 315.2451683922351,
+ "learning_rate": 1.0869565217391303e-07,
+ "logits/chosen": -1.0065257549285889,
+ "logits/rejected": -1.0008176565170288,
+ "logps/chosen": -0.28065255284309387,
+ "logps/rejected": -0.28539329767227173,
+ "loss": 3.4114,
+ "rewards/accuracies": 0.53125,
+ "rewards/chosen": -2.806525468826294,
+ "rewards/margins": 0.04740738496184349,
+ "rewards/rejected": -2.8539328575134277,
+ "semantic_entropy": 0.7513969540596008,
+ "step": 5
+ },
+ {
+ "epoch": 0.02185792349726776,
+ "grad_norm": 181.1484663719842,
+ "learning_rate": 2.1739130434782607e-07,
+ "logits/chosen": -1.0534369945526123,
+ "logits/rejected": -1.0029994249343872,
+ "logps/chosen": -0.2570807933807373,
+ "logps/rejected": -0.27113229036331177,
+ "loss": 3.3911,
+ "rewards/accuracies": 0.543749988079071,
+ "rewards/chosen": -2.570807933807373,
+ "rewards/margins": 0.14051488041877747,
+ "rewards/rejected": -2.711322784423828,
+ "semantic_entropy": 0.710273802280426,
+ "step": 10
+ },
+ {
+ "epoch": 0.03278688524590164,
+ "grad_norm": 178.65067568018998,
+ "learning_rate": 3.260869565217391e-07,
+ "logits/chosen": -1.0082308053970337,
+ "logits/rejected": -0.9609392285346985,
+ "logps/chosen": -0.26744094491004944,
+ "logps/rejected": -0.27332359552383423,
+ "loss": 3.3533,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -2.6744089126586914,
+ "rewards/margins": 0.05882669612765312,
+ "rewards/rejected": -2.7332358360290527,
+ "semantic_entropy": 0.7273439168930054,
+ "step": 15
+ },
+ {
+ "epoch": 0.04371584699453552,
+ "grad_norm": 258.46792679447157,
+ "learning_rate": 4.3478260869565214e-07,
+ "logits/chosen": -0.9462105631828308,
+ "logits/rejected": -0.8957524299621582,
+ "logps/chosen": -0.27257752418518066,
+ "logps/rejected": -0.2848864197731018,
+ "loss": 3.3976,
+ "rewards/accuracies": 0.5687500238418579,
+ "rewards/chosen": -2.7257750034332275,
+ "rewards/margins": 0.12308906018733978,
+ "rewards/rejected": -2.8488640785217285,
+ "semantic_entropy": 0.7455072999000549,
+ "step": 20
+ },
+ {
+ "epoch": 0.0546448087431694,
+ "grad_norm": 273.0706988791117,
+ "learning_rate": 5.434782608695652e-07,
+ "logits/chosen": -0.9422909617424011,
+ "logits/rejected": -0.8697713613510132,
+ "logps/chosen": -0.2761459946632385,
+ "logps/rejected": -0.2941877543926239,
+ "loss": 3.346,
+ "rewards/accuracies": 0.581250011920929,
+ "rewards/chosen": -2.761460065841675,
+ "rewards/margins": 0.18041765689849854,
+ "rewards/rejected": -2.9418773651123047,
+ "semantic_entropy": 0.7553174495697021,
+ "step": 25
+ },
+ {
+ "epoch": 0.06557377049180328,
+ "grad_norm": 260.6158887162735,
+ "learning_rate": 6.521739130434782e-07,
+ "logits/chosen": -1.0548616647720337,
+ "logits/rejected": -0.9892600774765015,
+ "logps/chosen": -0.2682558596134186,
+ "logps/rejected": -0.284037709236145,
+ "loss": 3.4058,
+ "rewards/accuracies": 0.5062500238418579,
+ "rewards/chosen": -2.682558536529541,
+ "rewards/margins": 0.1578185111284256,
+ "rewards/rejected": -2.8403773307800293,
+ "semantic_entropy": 0.7250551581382751,
+ "step": 30
+ },
+ {
+ "epoch": 0.07650273224043716,
+ "grad_norm": 124.4121289092538,
+ "learning_rate": 7.608695652173913e-07,
+ "logits/chosen": -1.0096337795257568,
+ "logits/rejected": -0.9423675537109375,
+ "logps/chosen": -0.2600244879722595,
+ "logps/rejected": -0.27900081872940063,
+ "loss": 3.313,
+ "rewards/accuracies": 0.4937500059604645,
+ "rewards/chosen": -2.6002449989318848,
+ "rewards/margins": 0.18976299464702606,
+ "rewards/rejected": -2.790008068084717,
+ "semantic_entropy": 0.7209498882293701,
+ "step": 35
+ },
+ {
+ "epoch": 0.08743169398907104,
+ "grad_norm": 147.2873491379567,
+ "learning_rate": 8.695652173913043e-07,
+ "logits/chosen": -0.9600120782852173,
+ "logits/rejected": -0.8983286619186401,
+ "logps/chosen": -0.2835314869880676,
+ "logps/rejected": -0.2980334460735321,
+ "loss": 3.454,
+ "rewards/accuracies": 0.512499988079071,
+ "rewards/chosen": -2.835314989089966,
+ "rewards/margins": 0.1450195610523224,
+ "rewards/rejected": -2.980334520339966,
+ "semantic_entropy": 0.7609063386917114,
+ "step": 40
+ },
+ {
+ "epoch": 0.09836065573770492,
+ "grad_norm": 93.94573860721647,
+ "learning_rate": 9.782608695652173e-07,
+ "logits/chosen": -1.0171349048614502,
+ "logits/rejected": -0.9333709478378296,
+ "logps/chosen": -0.28658169507980347,
+ "logps/rejected": -0.3058907389640808,
+ "loss": 3.1741,
+ "rewards/accuracies": 0.5062500238418579,
+ "rewards/chosen": -2.865816831588745,
+ "rewards/margins": 0.1930905282497406,
+ "rewards/rejected": -3.0589072704315186,
+ "semantic_entropy": 0.7610034346580505,
+ "step": 45
+ },
+ {
+ "epoch": 0.1092896174863388,
+ "grad_norm": 286.4609615603791,
+ "learning_rate": 9.997663088532014e-07,
+ "logits/chosen": -0.9543835520744324,
+ "logits/rejected": -0.8730956315994263,
+ "logps/chosen": -0.2823755145072937,
+ "logps/rejected": -0.2902544140815735,
+ "loss": 3.3278,
+ "rewards/accuracies": 0.48124998807907104,
+ "rewards/chosen": -2.8237555027008057,
+ "rewards/margins": 0.07878823578357697,
+ "rewards/rejected": -2.902543544769287,
+ "semantic_entropy": 0.7529318332672119,
+ "step": 50
+ },
+ {
+ "epoch": 0.12021857923497267,
+ "grad_norm": 122.74091082377944,
+ "learning_rate": 9.98817312944725e-07,
+ "logits/chosen": -0.9809161424636841,
+ "logits/rejected": -0.8649328947067261,
+ "logps/chosen": -0.2803560495376587,
+ "logps/rejected": -0.3149644732475281,
+ "loss": 3.1758,
+ "rewards/accuracies": 0.543749988079071,
+ "rewards/chosen": -2.803560256958008,
+ "rewards/margins": 0.3460845947265625,
+ "rewards/rejected": -3.1496450901031494,
+ "semantic_entropy": 0.7631497979164124,
+ "step": 55
+ },
+ {
+ "epoch": 0.13114754098360656,
+ "grad_norm": 87.3444735452583,
+ "learning_rate": 9.971397915250336e-07,
+ "logits/chosen": -1.033070683479309,
+ "logits/rejected": -0.9856836199760437,
+ "logps/chosen": -0.2708079218864441,
+ "logps/rejected": -0.31113672256469727,
+ "loss": 3.0283,
+ "rewards/accuracies": 0.59375,
+ "rewards/chosen": -2.7080790996551514,
+ "rewards/margins": 0.40328770875930786,
+ "rewards/rejected": -3.1113669872283936,
+ "semantic_entropy": 0.7593907713890076,
+ "step": 60
+ },
+ {
+ "epoch": 0.14207650273224043,
+ "grad_norm": 162.53486819318002,
+ "learning_rate": 9.94736194623663e-07,
+ "logits/chosen": -1.0021493434906006,
+ "logits/rejected": -0.9318512082099915,
+ "logps/chosen": -0.31085288524627686,
+ "logps/rejected": -0.3405633866786957,
+ "loss": 3.2559,
+ "rewards/accuracies": 0.5375000238418579,
+ "rewards/chosen": -3.1085288524627686,
+ "rewards/margins": 0.29710477590560913,
+ "rewards/rejected": -3.4056334495544434,
+ "semantic_entropy": 0.8082467317581177,
+ "step": 65
+ },
+ {
+ "epoch": 0.15300546448087432,
+ "grad_norm": 348.65729079716573,
+ "learning_rate": 9.916100327075037e-07,
+ "logits/chosen": -0.9440506100654602,
+ "logits/rejected": -0.9250672459602356,
+ "logps/chosen": -0.29765281081199646,
+ "logps/rejected": -0.3226909935474396,
+ "loss": 3.0703,
+ "rewards/accuracies": 0.5687500238418579,
+ "rewards/chosen": -2.9765281677246094,
+ "rewards/margins": 0.2503815293312073,
+ "rewards/rejected": -3.22691011428833,
+ "semantic_entropy": 0.7739163041114807,
+ "step": 70
+ },
+ {
+ "epoch": 0.16393442622950818,
+ "grad_norm": 78.90080101900976,
+ "learning_rate": 9.877658715537428e-07,
+ "logits/chosen": -0.9282974004745483,
+ "logits/rejected": -0.912223219871521,
+ "logps/chosen": -0.3213742971420288,
+ "logps/rejected": -0.351571649312973,
+ "loss": 3.1657,
+ "rewards/accuracies": 0.6187499761581421,
+ "rewards/chosen": -3.213742733001709,
+ "rewards/margins": 0.30197376012802124,
+ "rewards/rejected": -3.515717029571533,
+ "semantic_entropy": 0.8054312467575073,
+ "step": 75
+ },
+ {
+ "epoch": 0.17486338797814208,
+ "grad_norm": 86.95229439852206,
+ "learning_rate": 9.832093255815216e-07,
+ "logits/chosen": -0.9330040812492371,
+ "logits/rejected": -0.8699474334716797,
+ "logps/chosen": -0.3141597807407379,
+ "logps/rejected": -0.3350343406200409,
+ "loss": 3.1277,
+ "rewards/accuracies": 0.612500011920929,
+ "rewards/chosen": -3.1415977478027344,
+ "rewards/margins": 0.20874571800231934,
+ "rewards/rejected": -3.350343704223633,
+ "semantic_entropy": 0.7814024686813354,
+ "step": 80
+ },
+ {
+ "epoch": 0.18579234972677597,
+ "grad_norm": 129.31831333568232,
+ "learning_rate": 9.779470496520441e-07,
+ "logits/chosen": -0.9329907298088074,
+ "logits/rejected": -0.8826324343681335,
+ "logps/chosen": -0.31158381700515747,
+ "logps/rejected": -0.3667066693305969,
+ "loss": 2.9946,
+ "rewards/accuracies": 0.643750011920929,
+ "rewards/chosen": -3.1158382892608643,
+ "rewards/margins": 0.5512284636497498,
+ "rewards/rejected": -3.667067050933838,
+ "semantic_entropy": 0.802832305431366,
+ "step": 85
+ },
+ {
+ "epoch": 0.19672131147540983,
+ "grad_norm": 101.69393552819945,
+ "learning_rate": 9.719867293491144e-07,
+ "logits/chosen": -1.0058772563934326,
+ "logits/rejected": -0.9232236742973328,
+ "logps/chosen": -0.33966144919395447,
+ "logps/rejected": -0.37590381503105164,
+ "loss": 3.0893,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": -3.3966145515441895,
+ "rewards/margins": 0.3624236285686493,
+ "rewards/rejected": -3.759038209915161,
+ "semantic_entropy": 0.8507563471794128,
+ "step": 90
+ },
+ {
+ "epoch": 0.20765027322404372,
+ "grad_norm": 91.30230852825771,
+ "learning_rate": 9.653370697542987e-07,
+ "logits/chosen": -0.9487398862838745,
+ "logits/rejected": -0.9483828544616699,
+ "logps/chosen": -0.3381520211696625,
+ "logps/rejected": -0.362968385219574,
+ "loss": 2.9005,
+ "rewards/accuracies": 0.574999988079071,
+ "rewards/chosen": -3.3815205097198486,
+ "rewards/margins": 0.24816343188285828,
+ "rewards/rejected": -3.62968373298645,
+ "semantic_entropy": 0.8359481692314148,
+ "step": 95
+ },
+ {
+ "epoch": 0.2185792349726776,
+ "grad_norm": 108.89056387914384,
+ "learning_rate": 9.580077827331037e-07,
+ "logits/chosen": -0.9571771621704102,
+ "logits/rejected": -0.9095252752304077,
+ "logps/chosen": -0.3725859522819519,
+ "logps/rejected": -0.4279399514198303,
+ "loss": 2.8907,
+ "rewards/accuracies": 0.6187499761581421,
+ "rewards/chosen": -3.7258598804473877,
+ "rewards/margins": 0.553540050983429,
+ "rewards/rejected": -4.2793989181518555,
+ "semantic_entropy": 0.8787097930908203,
+ "step": 100
+ },
+ {
+ "epoch": 0.22950819672131148,
+ "grad_norm": 127.65723295420393,
+ "learning_rate": 9.500095727507419e-07,
+ "logits/chosen": -1.008998155593872,
+ "logits/rejected": -0.9757212400436401,
+ "logps/chosen": -0.3544849753379822,
+ "logps/rejected": -0.3977915942668915,
+ "loss": 2.9862,
+ "rewards/accuracies": 0.625,
+ "rewards/chosen": -3.5448498725891113,
+ "rewards/margins": 0.4330664277076721,
+ "rewards/rejected": -3.9779160022735596,
+ "semantic_entropy": 0.8548823595046997,
+ "step": 105
+ },
+ {
+ "epoch": 0.24043715846994534,
+ "grad_norm": 114.56272891566377,
+ "learning_rate": 9.413541212382004e-07,
+ "logits/chosen": -0.9939772486686707,
+ "logits/rejected": -0.9762369990348816,
+ "logps/chosen": -0.3617566227912903,
+ "logps/rejected": -0.4416491985321045,
+ "loss": 2.8457,
+ "rewards/accuracies": 0.637499988079071,
+ "rewards/chosen": -3.617565870285034,
+ "rewards/margins": 0.7989261746406555,
+ "rewards/rejected": -4.416492462158203,
+ "semantic_entropy": 0.8938226699829102,
+ "step": 110
+ },
+ {
+ "epoch": 0.25136612021857924,
+ "grad_norm": 93.3647181422965,
+ "learning_rate": 9.320540695314438e-07,
+ "logits/chosen": -1.007943868637085,
+ "logits/rejected": -0.9657400846481323,
+ "logps/chosen": -0.3646220564842224,
+ "logps/rejected": -0.46302324533462524,
+ "loss": 2.7542,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -3.6462206840515137,
+ "rewards/margins": 0.9840116500854492,
+ "rewards/rejected": -4.630232334136963,
+ "semantic_entropy": 0.878681480884552,
+ "step": 115
+ },
+ {
+ "epoch": 0.26229508196721313,
+ "grad_norm": 83.92874813803249,
+ "learning_rate": 9.221230004086721e-07,
+ "logits/chosen": -1.0344518423080444,
+ "logits/rejected": -0.9601195454597473,
+ "logps/chosen": -0.37165606021881104,
+ "logps/rejected": -0.42726248502731323,
+ "loss": 2.8688,
+ "rewards/accuracies": 0.668749988079071,
+ "rewards/chosen": -3.7165608406066895,
+ "rewards/margins": 0.5560643076896667,
+ "rewards/rejected": -4.272624969482422,
+ "semantic_entropy": 0.8952873945236206,
+ "step": 120
+ },
+ {
+ "epoch": 0.273224043715847,
+ "grad_norm": 190.904470995411,
+ "learning_rate": 9.11575418252596e-07,
+ "logits/chosen": -0.9347244501113892,
+ "logits/rejected": -0.8975458145141602,
+ "logps/chosen": -0.38535335659980774,
+ "logps/rejected": -0.4541945457458496,
+ "loss": 2.7207,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -3.853533983230591,
+ "rewards/margins": 0.68841153383255,
+ "rewards/rejected": -4.541945457458496,
+ "semantic_entropy": 0.9004859924316406,
+ "step": 125
+ },
+ {
+ "epoch": 0.28415300546448086,
+ "grad_norm": 92.76045922249655,
+ "learning_rate": 9.004267278667031e-07,
+ "logits/chosen": -0.9624107480049133,
+ "logits/rejected": -0.9534618258476257,
+ "logps/chosen": -0.4056780934333801,
+ "logps/rejected": -0.5245551466941833,
+ "loss": 2.7139,
+ "rewards/accuracies": 0.706250011920929,
+ "rewards/chosen": -4.056780815124512,
+ "rewards/margins": 1.1887714862823486,
+ "rewards/rejected": -5.245552062988281,
+ "semantic_entropy": 0.8837997317314148,
+ "step": 130
+ },
+ {
+ "epoch": 0.29508196721311475,
+ "grad_norm": 73.9902335755549,
+ "learning_rate": 8.886932119764565e-07,
+ "logits/chosen": -1.0003821849822998,
+ "logits/rejected": -0.9125338792800903,
+ "logps/chosen": -0.3864729106426239,
+ "logps/rejected": -0.4857531189918518,
+ "loss": 2.6137,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -3.8647284507751465,
+ "rewards/margins": 0.9928020238876343,
+ "rewards/rejected": -4.8575310707092285,
+ "semantic_entropy": 0.9038194417953491,
+ "step": 135
+ },
+ {
+ "epoch": 0.30601092896174864,
+ "grad_norm": 86.17327659708472,
+ "learning_rate": 8.763920074482809e-07,
+ "logits/chosen": -0.9963301420211792,
+ "logits/rejected": -0.9396141767501831,
+ "logps/chosen": -0.4139133095741272,
+ "logps/rejected": -0.5436104536056519,
+ "loss": 2.4158,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -4.139132976531982,
+ "rewards/margins": 1.296971082687378,
+ "rewards/rejected": -5.4361042976379395,
+ "semantic_entropy": 0.9314233064651489,
+ "step": 140
+ },
+ {
+ "epoch": 0.31693989071038253,
+ "grad_norm": 95.91548705026597,
+ "learning_rate": 8.635410802610723e-07,
+ "logits/chosen": -0.9837471842765808,
+ "logits/rejected": -0.96197909116745,
+ "logps/chosen": -0.3892672657966614,
+ "logps/rejected": -0.4463191032409668,
+ "loss": 2.5469,
+ "rewards/accuracies": 0.71875,
+ "rewards/chosen": -3.8926727771759033,
+ "rewards/margins": 0.5705188512802124,
+ "rewards/rejected": -4.463191986083984,
+ "semantic_entropy": 0.9131715893745422,
+ "step": 145
+ },
+ {
+ "epoch": 0.32786885245901637,
+ "grad_norm": 62.831854072245996,
+ "learning_rate": 8.501591992667849e-07,
+ "logits/chosen": -1.0432965755462646,
+ "logits/rejected": -1.0063092708587646,
+ "logps/chosen": -0.42201298475265503,
+ "logps/rejected": -0.5824503302574158,
+ "loss": 2.4081,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -4.22012996673584,
+ "rewards/margins": 1.6043736934661865,
+ "rewards/rejected": -5.8245038986206055,
+ "semantic_entropy": 0.9204033613204956,
+ "step": 150
+ },
+ {
+ "epoch": 0.33879781420765026,
+ "grad_norm": 71.34642946039108,
+ "learning_rate": 8.362659087784152e-07,
+ "logits/chosen": -1.0033342838287354,
+ "logits/rejected": -0.943057656288147,
+ "logps/chosen": -0.4163185656070709,
+ "logps/rejected": -0.5151209831237793,
+ "loss": 2.5079,
+ "rewards/accuracies": 0.6812499761581421,
+ "rewards/chosen": -4.163184642791748,
+ "rewards/margins": 0.9880247116088867,
+ "rewards/rejected": -5.151209831237793,
+ "semantic_entropy": 0.9188691973686218,
+ "step": 155
+ },
+ {
+ "epoch": 0.34972677595628415,
+ "grad_norm": 95.49736374278223,
+ "learning_rate": 8.218815000254231e-07,
+ "logits/chosen": -1.036727786064148,
+ "logits/rejected": -0.9749704599380493,
+ "logps/chosen": -0.46870869398117065,
+ "logps/rejected": -0.5485578775405884,
+ "loss": 2.5679,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -4.68708610534668,
+ "rewards/margins": 0.7984916567802429,
+ "rewards/rejected": -5.4855780601501465,
+ "semantic_entropy": 0.9369996786117554,
+ "step": 160
+ },
+ {
+ "epoch": 0.36065573770491804,
+ "grad_norm": 90.06304805222751,
+ "learning_rate": 8.07026981518276e-07,
+ "logits/chosen": -1.0219743251800537,
+ "logits/rejected": -0.9637954831123352,
+ "logps/chosen": -0.4483868181705475,
+ "logps/rejected": -0.5347827076911926,
+ "loss": 2.5189,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -4.48386812210083,
+ "rewards/margins": 0.8639583587646484,
+ "rewards/rejected": -5.3478264808654785,
+ "semantic_entropy": 0.9511811137199402,
+ "step": 165
+ },
+ {
+ "epoch": 0.37158469945355194,
+ "grad_norm": 99.59126068255084,
+ "learning_rate": 7.917240483654e-07,
+ "logits/chosen": -1.01731276512146,
+ "logits/rejected": -0.9495924115180969,
+ "logps/chosen": -0.44470348954200745,
+ "logps/rejected": -0.5198506712913513,
+ "loss": 2.6168,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": -4.447035312652588,
+ "rewards/margins": 0.7514716386795044,
+ "rewards/rejected": -5.198506832122803,
+ "semantic_entropy": 0.9501636624336243,
+ "step": 170
+ },
+ {
+ "epoch": 0.3825136612021858,
+ "grad_norm": 91.44599232569668,
+ "learning_rate": 7.759950505873521e-07,
+ "logits/chosen": -1.067455768585205,
+ "logits/rejected": -1.031198263168335,
+ "logps/chosen": -0.4639251232147217,
+ "logps/rejected": -0.5280762910842896,
+ "loss": 2.4665,
+ "rewards/accuracies": 0.6499999761581421,
+ "rewards/chosen": -4.639250755310059,
+ "rewards/margins": 0.6415112614631653,
+ "rewards/rejected": -5.280762672424316,
+ "semantic_entropy": 0.9261935949325562,
+ "step": 175
+ },
+ {
+ "epoch": 0.39344262295081966,
+ "grad_norm": 90.82885753738844,
+ "learning_rate": 7.598629604744872e-07,
+ "logits/chosen": -1.0707954168319702,
+ "logits/rejected": -1.0595567226409912,
+ "logps/chosen": -0.43981847167015076,
+ "logps/rejected": -0.5758017897605896,
+ "loss": 2.3437,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": -4.398184776306152,
+ "rewards/margins": 1.3598332405090332,
+ "rewards/rejected": -5.7580180168151855,
+ "semantic_entropy": 0.966151237487793,
+ "step": 180
+ },
+ {
+ "epoch": 0.40437158469945356,
+ "grad_norm": 67.92403731887116,
+ "learning_rate": 7.433513390357989e-07,
+ "logits/chosen": -1.108884572982788,
+ "logits/rejected": -1.1143901348114014,
+ "logps/chosen": -0.46474918723106384,
+ "logps/rejected": -0.5912537574768066,
+ "loss": 2.3628,
+ "rewards/accuracies": 0.793749988079071,
+ "rewards/chosen": -4.647492408752441,
+ "rewards/margins": 1.265044927597046,
+ "rewards/rejected": -5.912537097930908,
+ "semantic_entropy": 0.9750612378120422,
+ "step": 185
+ },
+ {
+ "epoch": 0.41530054644808745,
+ "grad_norm": 108.44092763175198,
+ "learning_rate": 7.264843015879321e-07,
+ "logits/chosen": -1.1020928621292114,
+ "logits/rejected": -1.0545780658721924,
+ "logps/chosen": -0.4519892632961273,
+ "logps/rejected": -0.6003154516220093,
+ "loss": 2.4166,
+ "rewards/accuracies": 0.7562500238418579,
+ "rewards/chosen": -4.51989221572876,
+ "rewards/margins": 1.483262062072754,
+ "rewards/rejected": -6.003154754638672,
+ "semantic_entropy": 0.9666361808776855,
+ "step": 190
+ },
+ {
+ "epoch": 0.4262295081967213,
+ "grad_norm": 83.4733675057699,
+ "learning_rate": 7.092864825346266e-07,
+ "logits/chosen": -1.129482626914978,
+ "logits/rejected": -1.0993843078613281,
+ "logps/chosen": -0.5358282327651978,
+ "logps/rejected": -0.7053772211074829,
+ "loss": 2.467,
+ "rewards/accuracies": 0.737500011920929,
+ "rewards/chosen": -5.358282566070557,
+ "rewards/margins": 1.6954904794692993,
+ "rewards/rejected": -7.053772926330566,
+ "semantic_entropy": 0.9725033044815063,
+ "step": 195
+ },
+ {
+ "epoch": 0.4371584699453552,
+ "grad_norm": 84.36629950168863,
+ "learning_rate": 6.917829993880302e-07,
+ "logits/chosen": -1.1204617023468018,
+ "logits/rejected": -1.0279228687286377,
+ "logps/chosen": -0.5025330185890198,
+ "logps/rejected": -0.6288330554962158,
+ "loss": 2.3371,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -5.025330543518066,
+ "rewards/margins": 1.263000249862671,
+ "rewards/rejected": -6.288330078125,
+ "semantic_entropy": 0.9786258935928345,
+ "step": 200
+ },
+ {
+ "epoch": 0.44808743169398907,
+ "grad_norm": 69.08292746057649,
+ "learning_rate": 6.739994160844309e-07,
+ "logits/chosen": -1.0733792781829834,
+ "logits/rejected": -1.0833173990249634,
+ "logps/chosen": -0.4674602448940277,
+ "logps/rejected": -0.6061697006225586,
+ "loss": 2.2298,
+ "rewards/accuracies": 0.800000011920929,
+ "rewards/chosen": -4.67460298538208,
+ "rewards/margins": 1.3870941400527954,
+ "rewards/rejected": -6.061697006225586,
+ "semantic_entropy": 0.9808717966079712,
+ "step": 205
+ },
+ {
+ "epoch": 0.45901639344262296,
+ "grad_norm": 173.88772431903314,
+ "learning_rate": 6.559617056479827e-07,
+ "logits/chosen": -1.1001962423324585,
+ "logits/rejected": -1.0926573276519775,
+ "logps/chosen": -0.5027323961257935,
+ "logps/rejected": -0.6716328263282776,
+ "loss": 2.2974,
+ "rewards/accuracies": 0.7875000238418579,
+ "rewards/chosen": -5.027324676513672,
+ "rewards/margins": 1.6890045404434204,
+ "rewards/rejected": -6.7163286209106445,
+ "semantic_entropy": 0.9590319395065308,
+ "step": 210
+ },
+ {
+ "epoch": 0.46994535519125685,
+ "grad_norm": 94.73792447265116,
+ "learning_rate": 6.376962122569567e-07,
+ "logits/chosen": -1.1140978336334229,
+ "logits/rejected": -1.0545861721038818,
+ "logps/chosen": -0.5114679336547852,
+ "logps/rejected": -0.6812509298324585,
+ "loss": 2.4425,
+ "rewards/accuracies": 0.793749988079071,
+ "rewards/chosen": -5.114679336547852,
+ "rewards/margins": 1.6978304386138916,
+ "rewards/rejected": -6.812510013580322,
+ "semantic_entropy": 1.0018761157989502,
+ "step": 215
+ },
+ {
+ "epoch": 0.4808743169398907,
+ "grad_norm": 92.5570818396132,
+ "learning_rate": 6.192296127679192e-07,
+ "logits/chosen": -1.1659886837005615,
+ "logits/rejected": -1.1348073482513428,
+ "logps/chosen": -0.5338795781135559,
+ "logps/rejected": -0.6664601564407349,
+ "loss": 2.2908,
+ "rewards/accuracies": 0.71875,
+ "rewards/chosen": -5.338796138763428,
+ "rewards/margins": 1.325805425643921,
+ "rewards/rejected": -6.664601802825928,
+ "semantic_entropy": 0.9806681871414185,
+ "step": 220
+ },
+ {
+ "epoch": 0.4918032786885246,
+ "grad_norm": 69.72669762595787,
+ "learning_rate": 6.005888777540319e-07,
+ "logits/chosen": -1.1951611042022705,
+ "logits/rejected": -1.1498881578445435,
+ "logps/chosen": -0.5304981470108032,
+ "logps/rejected": -0.6832343935966492,
+ "loss": 2.335,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -5.304981708526611,
+ "rewards/margins": 1.5273630619049072,
+ "rewards/rejected": -6.832344055175781,
+ "semantic_entropy": 1.0007470846176147,
+ "step": 225
+ },
+ {
+ "epoch": 0.5027322404371585,
+ "grad_norm": 75.21829123168051,
+ "learning_rate": 5.818012321143773e-07,
+ "logits/chosen": -1.0969598293304443,
+ "logits/rejected": -1.0912028551101685,
+ "logps/chosen": -0.5201154351234436,
+ "logps/rejected": -0.7016697525978088,
+ "loss": 2.2528,
+ "rewards/accuracies": 0.8062499761581421,
+ "rewards/chosen": -5.2011542320251465,
+ "rewards/margins": 1.8155431747436523,
+ "rewards/rejected": -7.016697883605957,
+ "semantic_entropy": 0.9929972887039185,
+ "step": 230
+ },
+ {
+ "epoch": 0.5136612021857924,
+ "grad_norm": 83.71823549926938,
+ "learning_rate": 5.628941153118388e-07,
751
+ "logits/chosen": -1.1018104553222656,
752
+ "logits/rejected": -1.0575555562973022,
753
+ "logps/chosen": -0.5212110280990601,
754
+ "logps/rejected": -0.664139986038208,
755
+ "loss": 2.2375,
756
+ "rewards/accuracies": 0.78125,
757
+ "rewards/chosen": -5.21211051940918,
758
+ "rewards/margins": 1.4292891025543213,
759
+ "rewards/rejected": -6.641399383544922,
760
+ "semantic_entropy": 0.9852234125137329,
761
+ "step": 235
762
+ },
763
+ {
764
+ "epoch": 0.5245901639344263,
765
+ "grad_norm": 69.35272919599275,
766
+ "learning_rate": 5.438951412976098e-07,
767
+ "logits/chosen": -1.1364176273345947,
768
+ "logits/rejected": -1.141788125038147,
769
+ "logps/chosen": -0.49681615829467773,
770
+ "logps/rejected": -0.6832265853881836,
771
+ "loss": 2.0805,
772
+ "rewards/accuracies": 0.800000011920929,
773
+ "rewards/chosen": -4.9681620597839355,
774
+ "rewards/margins": 1.8641045093536377,
775
+ "rewards/rejected": -6.832266330718994,
776
+ "semantic_entropy": 1.0116462707519531,
777
+ "step": 240
778
+ },
779
+ {
780
+ "epoch": 0.5355191256830601,
781
+ "grad_norm": 71.19254935234711,
782
+ "learning_rate": 5.248320581808619e-07,
783
+ "logits/chosen": -1.0613957643508911,
784
+ "logits/rejected": -1.0091025829315186,
785
+ "logps/chosen": -0.5141640901565552,
786
+ "logps/rejected": -0.6968377828598022,
787
+ "loss": 2.1843,
788
+ "rewards/accuracies": 0.75,
789
+ "rewards/chosen": -5.141640663146973,
790
+ "rewards/margins": 1.8267381191253662,
791
+ "rewards/rejected": -6.96837854385376,
792
+ "semantic_entropy": 0.9718330502510071,
793
+ "step": 245
794
+ },
795
+ {
796
+ "epoch": 0.546448087431694,
797
+ "grad_norm": 70.83927823916859,
798
+ "learning_rate": 5.057327077024744e-07,
799
+ "logits/chosen": -1.146533727645874,
800
+ "logits/rejected": -1.1047497987747192,
801
+ "logps/chosen": -0.5028788447380066,
802
+ "logps/rejected": -0.6331702470779419,
803
+ "loss": 2.2718,
804
+ "rewards/accuracies": 0.7250000238418579,
805
+ "rewards/chosen": -5.028789043426514,
806
+ "rewards/margins": 1.3029136657714844,
807
+ "rewards/rejected": -6.331702709197998,
808
+ "semantic_entropy": 0.9849420785903931,
809
+ "step": 250
810
+ },
811
+ {
812
+ "epoch": 0.5573770491803278,
813
+ "grad_norm": 78.29453714448445,
814
+ "learning_rate": 4.866249845720132e-07,
815
+ "logits/chosen": -1.1301579475402832,
816
+ "logits/rejected": -1.091973900794983,
817
+ "logps/chosen": -0.555388331413269,
818
+ "logps/rejected": -0.7187477946281433,
819
+ "loss": 2.1692,
820
+ "rewards/accuracies": 0.762499988079071,
821
+ "rewards/chosen": -5.553883075714111,
822
+ "rewards/margins": 1.633594274520874,
823
+ "rewards/rejected": -7.187478065490723,
824
+ "semantic_entropy": 0.9999436140060425,
825
+ "step": 255
826
+ },
827
+ {
828
+ "epoch": 0.5683060109289617,
829
+ "grad_norm": 74.66530559540921,
830
+ "learning_rate": 4.675367957273505e-07,
831
+ "logits/chosen": -1.096861720085144,
832
+ "logits/rejected": -1.0846450328826904,
833
+ "logps/chosen": -0.5131552815437317,
834
+ "logps/rejected": -0.671288251876831,
835
+ "loss": 2.1911,
836
+ "rewards/accuracies": 0.800000011920929,
837
+ "rewards/chosen": -5.131552219390869,
838
+ "rewards/margins": 1.5813300609588623,
839
+ "rewards/rejected": -6.712882041931152,
840
+ "semantic_entropy": 0.9927156567573547,
841
+ "step": 260
842
+ },
843
+ {
844
+ "epoch": 0.5792349726775956,
845
+ "grad_norm": 122.60471760919329,
846
+ "learning_rate": 4.4849601957642285e-07,
847
+ "logits/chosen": -1.124089002609253,
848
+ "logits/rejected": -1.0828189849853516,
849
+ "logps/chosen": -0.5186060070991516,
850
+ "logps/rejected": -0.6825847625732422,
851
+ "loss": 2.1964,
852
+ "rewards/accuracies": 0.762499988079071,
853
+ "rewards/chosen": -5.186059474945068,
854
+ "rewards/margins": 1.6397874355316162,
855
+ "rewards/rejected": -6.8258466720581055,
856
+ "semantic_entropy": 0.9880490303039551,
857
+ "step": 265
858
+ },
859
+ {
860
+ "epoch": 0.5901639344262295,
861
+ "grad_norm": 86.34568213442027,
862
+ "learning_rate": 4.295304652806592e-07,
863
+ "logits/chosen": -1.1392979621887207,
864
+ "logits/rejected": -1.1078673601150513,
865
+ "logps/chosen": -0.5172940492630005,
866
+ "logps/rejected": -0.699386477470398,
867
+ "loss": 2.0791,
868
+ "rewards/accuracies": 0.8062499761581421,
869
+ "rewards/chosen": -5.172940254211426,
870
+ "rewards/margins": 1.8209247589111328,
871
+ "rewards/rejected": -6.993865013122559,
872
+ "semantic_entropy": 0.9866276979446411,
873
+ "step": 270
874
+ },
875
+ {
876
+ "epoch": 0.6010928961748634,
877
+ "grad_norm": 58.20062582439532,
878
+ "learning_rate": 4.106678321395433e-07,
879
+ "logits/chosen": -1.1032135486602783,
880
+ "logits/rejected": -1.0302824974060059,
881
+ "logps/chosen": -0.5297619104385376,
882
+ "logps/rejected": -0.627161979675293,
883
+ "loss": 2.1916,
884
+ "rewards/accuracies": 0.643750011920929,
885
+ "rewards/chosen": -5.2976179122924805,
886
+ "rewards/margins": 0.9740018844604492,
887
+ "rewards/rejected": -6.271620750427246,
888
+ "semantic_entropy": 0.9879854917526245,
889
+ "step": 275
890
+ },
891
+ {
892
+ "epoch": 0.6120218579234973,
893
+ "grad_norm": 84.89751844734043,
894
+ "learning_rate": 3.9193566913562915e-07,
895
+ "logits/chosen": -1.0617036819458008,
896
+ "logits/rejected": -1.0624239444732666,
897
+ "logps/chosen": -0.5223734378814697,
898
+ "logps/rejected": -0.7275804281234741,
899
+ "loss": 2.1763,
900
+ "rewards/accuracies": 0.7437499761581421,
901
+ "rewards/chosen": -5.223733901977539,
902
+ "rewards/margins": 2.0520694255828857,
903
+ "rewards/rejected": -7.275804042816162,
904
+ "semantic_entropy": 0.986528754234314,
905
+ "step": 280
906
+ },
907
+ {
908
+ "epoch": 0.6229508196721312,
909
+ "grad_norm": 72.19485427611927,
910
+ "learning_rate": 3.7336133469909623e-07,
911
+ "logits/chosen": -1.1958709955215454,
912
+ "logits/rejected": -1.1594369411468506,
913
+ "logps/chosen": -0.5087668895721436,
914
+ "logps/rejected": -0.702479898929596,
915
+ "loss": 2.1358,
916
+ "rewards/accuracies": 0.8125,
917
+ "rewards/chosen": -5.0876688957214355,
918
+ "rewards/margins": 1.9371296167373657,
919
+ "rewards/rejected": -7.024799346923828,
920
+ "semantic_entropy": 0.9978361129760742,
921
+ "step": 285
922
+ },
923
+ {
924
+ "epoch": 0.6338797814207651,
925
+ "grad_norm": 81.98995443073827,
926
+ "learning_rate": 3.549719567506076e-07,
927
+ "logits/chosen": -1.1317315101623535,
928
+ "logits/rejected": -1.0870287418365479,
929
+ "logps/chosen": -0.5346897840499878,
930
+ "logps/rejected": -0.6969183087348938,
931
+ "loss": 2.1523,
932
+ "rewards/accuracies": 0.800000011920929,
933
+ "rewards/chosen": -5.346898078918457,
934
+ "rewards/margins": 1.62228524684906,
935
+ "rewards/rejected": -6.969183444976807,
936
+ "semantic_entropy": 1.0123668909072876,
937
+ "step": 290
938
+ },
939
+ {
940
+ "epoch": 0.644808743169399,
941
+ "grad_norm": 67.90034025619751,
942
+ "learning_rate": 3.3679439308082774e-07,
943
+ "logits/chosen": -1.115994930267334,
944
+ "logits/rejected": -1.1152690649032593,
945
+ "logps/chosen": -0.5274439454078674,
946
+ "logps/rejected": -0.7239035367965698,
947
+ "loss": 1.9278,
948
+ "rewards/accuracies": 0.8125,
949
+ "rewards/chosen": -5.274438381195068,
950
+ "rewards/margins": 1.9645967483520508,
951
+ "rewards/rejected": -7.239035606384277,
952
+ "semantic_entropy": 1.0061827898025513,
953
+ "step": 295
954
+ },
955
+ {
956
+ "epoch": 0.6557377049180327,
957
+ "grad_norm": 70.85439265034472,
958
+ "learning_rate": 3.1885519212446716e-07,
959
+ "logits/chosen": -1.144639253616333,
960
+ "logits/rejected": -1.1228580474853516,
961
+ "logps/chosen": -0.542576253414154,
962
+ "logps/rejected": -0.7291213274002075,
963
+ "loss": 2.0159,
964
+ "rewards/accuracies": 0.8125,
965
+ "rewards/chosen": -5.42576265335083,
966
+ "rewards/margins": 1.865450143814087,
967
+ "rewards/rejected": -7.291213035583496,
968
+ "semantic_entropy": 0.9855409860610962,
969
+ "step": 300
970
+ },
971
+ {
972
+ "epoch": 0.6666666666666666,
973
+ "grad_norm": 76.04305212768787,
974
+ "learning_rate": 3.0118055418614295e-07,
975
+ "logits/chosen": -1.1450592279434204,
976
+ "logits/rejected": -1.0869606733322144,
977
+ "logps/chosen": -0.5319762229919434,
978
+ "logps/rejected": -0.7148723006248474,
979
+ "loss": 2.1436,
980
+ "rewards/accuracies": 0.8062499761581421,
981
+ "rewards/chosen": -5.319762706756592,
982
+ "rewards/margins": 1.8289600610733032,
983
+ "rewards/rejected": -7.1487226486206055,
984
+ "semantic_entropy": 1.007246732711792,
985
+ "step": 305
986
+ },
987
+ {
988
+ "epoch": 0.6775956284153005,
989
+ "grad_norm": 108.42660481521786,
990
+ "learning_rate": 2.83796293174686e-07,
991
+ "logits/chosen": -1.0885827541351318,
992
+ "logits/rejected": -1.092543363571167,
993
+ "logps/chosen": -0.5401273369789124,
994
+ "logps/rejected": -0.748576283454895,
995
+ "loss": 2.2592,
996
+ "rewards/accuracies": 0.75,
997
+ "rewards/chosen": -5.401273250579834,
998
+ "rewards/margins": 2.084489107131958,
999
+ "rewards/rejected": -7.485762119293213,
1000
+ "semantic_entropy": 0.9948571920394897,
1001
+ "step": 310
1002
+ },
1003
+ {
1004
+ "epoch": 0.6885245901639344,
1005
+ "grad_norm": 70.86094179855972,
1006
+ "learning_rate": 2.6672779890178046e-07,
1007
+ "logits/chosen": -1.1491663455963135,
1008
+ "logits/rejected": -1.1490873098373413,
1009
+ "logps/chosen": -0.5631974935531616,
1010
+ "logps/rejected": -0.6903260946273804,
1011
+ "loss": 2.1308,
1012
+ "rewards/accuracies": 0.7250000238418579,
1013
+ "rewards/chosen": -5.631974697113037,
1014
+ "rewards/margins": 1.2712849378585815,
1015
+ "rewards/rejected": -6.903260231018066,
1016
+ "semantic_entropy": 0.9926439523696899,
1017
+ "step": 315
1018
+ },
1019
+ {
1020
+ "epoch": 0.6994535519125683,
1021
+ "grad_norm": 57.13767602235213,
1022
+ "learning_rate": 2.500000000000001e-07,
1023
+ "logits/chosen": -1.2022249698638916,
1024
+ "logits/rejected": -1.1517468690872192,
1025
+ "logps/chosen": -0.5420448184013367,
1026
+ "logps/rejected": -0.747357964515686,
1027
+ "loss": 2.0999,
1028
+ "rewards/accuracies": 0.824999988079071,
1029
+ "rewards/chosen": -5.42044734954834,
1030
+ "rewards/margins": 2.053131580352783,
1031
+ "rewards/rejected": -7.473579406738281,
1032
+ "semantic_entropy": 1.00899338722229,
1033
+ "step": 320
1034
+ },
1035
+ {
1036
+ "epoch": 0.7103825136612022,
1037
+ "grad_norm": 82.09122654285451,
1038
+ "learning_rate": 2.3363732751439923e-07,
1039
+ "logits/chosen": -1.1618945598602295,
1040
+ "logits/rejected": -1.143754243850708,
1041
+ "logps/chosen": -0.5291402339935303,
1042
+ "logps/rejected": -0.7228410243988037,
1043
+ "loss": 2.0999,
1044
+ "rewards/accuracies": 0.78125,
1045
+ "rewards/chosen": -5.291402339935303,
1046
+ "rewards/margins": 1.9370079040527344,
1047
+ "rewards/rejected": -7.228410243988037,
1048
+ "semantic_entropy": 1.0088245868682861,
1049
+ "step": 325
1050
+ },
1051
+ {
1052
+ "epoch": 0.7213114754098361,
1053
+ "grad_norm": 62.18032792736575,
1054
+ "learning_rate": 2.1766367922083283e-07,
1055
+ "logits/chosen": -1.112157940864563,
1056
+ "logits/rejected": -1.0798307657241821,
1057
+ "logps/chosen": -0.4986172318458557,
1058
+ "logps/rejected": -0.7466678023338318,
1059
+ "loss": 2.0623,
1060
+ "rewards/accuracies": 0.8062499761581421,
1061
+ "rewards/chosen": -4.986172676086426,
1062
+ "rewards/margins": 2.4805047512054443,
1063
+ "rewards/rejected": -7.466677188873291,
1064
+ "semantic_entropy": 0.98463374376297,
1065
+ "step": 330
1066
+ },
1067
+ {
1068
+ "epoch": 0.73224043715847,
1069
+ "grad_norm": 77.93323322539872,
1070
+ "learning_rate": 2.021023847231202e-07,
1071
+ "logits/chosen": -1.1002051830291748,
1072
+ "logits/rejected": -1.0612647533416748,
1073
+ "logps/chosen": -0.5647180080413818,
1074
+ "logps/rejected": -0.758574366569519,
1075
+ "loss": 2.0578,
1076
+ "rewards/accuracies": 0.8374999761581421,
1077
+ "rewards/chosen": -5.647180557250977,
1078
+ "rewards/margins": 1.938563346862793,
1079
+ "rewards/rejected": -7.5857439041137695,
1080
+ "semantic_entropy": 0.9851423501968384,
1081
+ "step": 335
1082
+ },
1083
+ {
1084
+ "epoch": 0.7431693989071039,
1085
+ "grad_norm": 86.17026425537334,
1086
+ "learning_rate": 1.869761713800254e-07,
1087
+ "logits/chosen": -1.1088799238204956,
1088
+ "logits/rejected": -1.064263939857483,
1089
+ "logps/chosen": -0.5516521334648132,
1090
+ "logps/rejected": -0.7183089256286621,
1091
+ "loss": 2.1191,
1092
+ "rewards/accuracies": 0.71875,
1093
+ "rewards/chosen": -5.516521453857422,
1094
+ "rewards/margins": 1.6665668487548828,
1095
+ "rewards/rejected": -7.183088779449463,
1096
+ "semantic_entropy": 0.9953676462173462,
1097
+ "step": 340
1098
+ },
1099
+ {
1100
+ "epoch": 0.7540983606557377,
1101
+ "grad_norm": 80.58633018790611,
1102
+ "learning_rate": 1.7230713111182164e-07,
1103
+ "logits/chosen": -1.156589150428772,
1104
+ "logits/rejected": -1.1543285846710205,
1105
+ "logps/chosen": -0.5463498830795288,
1106
+ "logps/rejected": -0.7534428238868713,
1107
+ "loss": 2.1553,
1108
+ "rewards/accuracies": 0.8125,
1109
+ "rewards/chosen": -5.463499546051025,
1110
+ "rewards/margins": 2.0709292888641357,
1111
+ "rewards/rejected": -7.534428596496582,
1112
+ "semantic_entropy": 0.9916456341743469,
1113
+ "step": 345
1114
+ },
1115
+ {
1116
+ "epoch": 0.7650273224043715,
1117
+ "grad_norm": 89.6142113445473,
1118
+ "learning_rate": 1.5811668813491696e-07,
1119
+ "logits/chosen": -1.1436890363693237,
1120
+ "logits/rejected": -1.124874234199524,
1121
+ "logps/chosen": -0.5143482685089111,
1122
+ "logps/rejected": -0.6786841154098511,
1123
+ "loss": 2.0898,
1124
+ "rewards/accuracies": 0.8062499761581421,
1125
+ "rewards/chosen": -5.143482685089111,
1126
+ "rewards/margins": 1.643358588218689,
1127
+ "rewards/rejected": -6.786840915679932,
1128
+ "semantic_entropy": 0.9892457127571106,
1129
+ "step": 350
1130
+ },
1131
+ {
1132
+ "epoch": 0.7759562841530054,
1133
+ "grad_norm": 66.31246023221092,
1134
+ "learning_rate": 1.4442556767166369e-07,
1135
+ "logits/chosen": -1.1231715679168701,
1136
+ "logits/rejected": -1.0935585498809814,
1137
+ "logps/chosen": -0.5266932845115662,
1138
+ "logps/rejected": -0.6977185010910034,
1139
+ "loss": 2.0776,
1140
+ "rewards/accuracies": 0.800000011920929,
1141
+ "rewards/chosen": -5.266932487487793,
1142
+ "rewards/margins": 1.7102525234222412,
1143
+ "rewards/rejected": -6.9771857261657715,
1144
+ "semantic_entropy": 1.0020959377288818,
1145
+ "step": 355
1146
+ },
1147
+ {
1148
+ "epoch": 0.7868852459016393,
1149
+ "grad_norm": 83.110534751007,
1150
+ "learning_rate": 1.312537656810549e-07,
1151
+ "logits/chosen": -1.0739078521728516,
1152
+ "logits/rejected": -1.0743262767791748,
1153
+ "logps/chosen": -0.5362976789474487,
1154
+ "logps/rejected": -0.714411735534668,
1155
+ "loss": 2.1365,
1156
+ "rewards/accuracies": 0.737500011920929,
1157
+ "rewards/chosen": -5.362977504730225,
1158
+ "rewards/margins": 1.7811400890350342,
1159
+ "rewards/rejected": -7.144117832183838,
1160
+ "semantic_entropy": 0.9827717542648315,
1161
+ "step": 360
1162
+ },
1163
+ {
1164
+ "epoch": 0.7978142076502732,
1165
+ "grad_norm": 141.91877356203636,
1166
+ "learning_rate": 1.1862051965451214e-07,
1167
+ "logits/chosen": -1.1579176187515259,
1168
+ "logits/rejected": -1.1566094160079956,
1169
+ "logps/chosen": -0.5423863530158997,
1170
+ "logps/rejected": -0.7343495488166809,
1171
+ "loss": 2.054,
1172
+ "rewards/accuracies": 0.824999988079071,
1173
+ "rewards/chosen": -5.423863410949707,
1174
+ "rewards/margins": 1.9196319580078125,
1175
+ "rewards/rejected": -7.3434953689575195,
1176
+ "semantic_entropy": 1.011482834815979,
1177
+ "step": 365
1178
+ },
1179
+ {
1180
+ "epoch": 0.8087431693989071,
1181
+ "grad_norm": 78.59703524378126,
1182
+ "learning_rate": 1.0654428051942138e-07,
1183
+ "logits/chosen": -1.165038824081421,
1184
+ "logits/rejected": -1.1290335655212402,
1185
+ "logps/chosen": -0.5577388405799866,
1186
+ "logps/rejected": -0.7825115919113159,
1187
+ "loss": 2.1829,
1188
+ "rewards/accuracies": 0.7749999761581421,
1189
+ "rewards/chosen": -5.577388286590576,
1190
+ "rewards/margins": 2.247727632522583,
1191
+ "rewards/rejected": -7.825116157531738,
1192
+ "semantic_entropy": 1.0018393993377686,
1193
+ "step": 370
1194
+ },
1195
+ {
1196
+ "epoch": 0.819672131147541,
1197
+ "grad_norm": 63.6353735816688,
1198
+ "learning_rate": 9.504268569144763e-08,
1199
+ "logits/chosen": -1.2013657093048096,
1200
+ "logits/rejected": -1.1338837146759033,
1201
+ "logps/chosen": -0.5316141843795776,
1202
+ "logps/rejected": -0.7204681038856506,
1203
+ "loss": 2.0911,
1204
+ "rewards/accuracies": 0.762499988079071,
1205
+ "rewards/chosen": -5.316141605377197,
1206
+ "rewards/margins": 1.8885393142700195,
1207
+ "rewards/rejected": -7.204681396484375,
1208
+ "semantic_entropy": 1.0054622888565063,
1209
+ "step": 375
1210
+ },
1211
+ {
1212
+ "epoch": 0.8306010928961749,
1213
+ "grad_norm": 64.71607446988344,
1214
+ "learning_rate": 8.413253331499049e-08,
1215
+ "logits/chosen": -1.0807088613510132,
1216
+ "logits/rejected": -1.102399230003357,
1217
+ "logps/chosen": -0.549530029296875,
1218
+ "logps/rejected": -0.7361005544662476,
1219
+ "loss": 2.0374,
1220
+ "rewards/accuracies": 0.862500011920929,
1221
+ "rewards/chosen": -5.49530029296875,
1222
+ "rewards/margins": 1.8657052516937256,
1223
+ "rewards/rejected": -7.361004829406738,
1224
+ "semantic_entropy": 1.0044524669647217,
1225
+ "step": 380
1226
+ },
1227
+ {
1228
+ "epoch": 0.8415300546448088,
1229
+ "grad_norm": 67.03443291040514,
1230
+ "learning_rate": 7.382975772939865e-08,
1231
+ "logits/chosen": -1.1790930032730103,
1232
+ "logits/rejected": -1.1615774631500244,
1233
+ "logps/chosen": -0.590388834476471,
1234
+ "logps/rejected": -0.7754439115524292,
1235
+ "loss": 2.1706,
1236
+ "rewards/accuracies": 0.84375,
1237
+ "rewards/chosen": -5.903887748718262,
1238
+ "rewards/margins": 1.850551962852478,
1239
+ "rewards/rejected": -7.754439353942871,
1240
+ "semantic_entropy": 1.0115418434143066,
1241
+ "step": 385
1242
+ },
1243
+ {
1244
+ "epoch": 0.8524590163934426,
1245
+ "grad_norm": 103.88519596336293,
1246
+ "learning_rate": 6.414940619677734e-08,
1247
+ "logits/chosen": -1.166526436805725,
1248
+ "logits/rejected": -1.1425046920776367,
1249
+ "logps/chosen": -0.5350316762924194,
1250
+ "logps/rejected": -0.7514439821243286,
1251
+ "loss": 2.1188,
1252
+ "rewards/accuracies": 0.831250011920929,
1253
+ "rewards/chosen": -5.350315570831299,
1254
+ "rewards/margins": 2.164124011993408,
1255
+ "rewards/rejected": -7.514439582824707,
1256
+ "semantic_entropy": 1.0120559930801392,
1257
+ "step": 390
1258
+ },
1259
+ {
1260
+ "epoch": 0.8633879781420765,
1261
+ "grad_norm": 71.31872002268815,
1262
+ "learning_rate": 5.5105616925376296e-08,
1263
+ "logits/chosen": -1.1460245847702026,
1264
+ "logits/rejected": -1.1267726421356201,
1265
+ "logps/chosen": -0.5442631244659424,
1266
+ "logps/rejected": -0.6925864815711975,
1267
+ "loss": 2.0494,
1268
+ "rewards/accuracies": 0.768750011920929,
1269
+ "rewards/chosen": -5.442631721496582,
1270
+ "rewards/margins": 1.4832336902618408,
1271
+ "rewards/rejected": -6.925864219665527,
1272
+ "semantic_entropy": 1.0200846195220947,
1273
+ "step": 395
1274
+ },
1275
+ {
1276
+ "epoch": 0.8743169398907104,
1277
+ "grad_norm": 75.02341041681119,
1278
+ "learning_rate": 4.6711598420656976e-08,
1279
+ "logits/chosen": -1.0774163007736206,
1280
+ "logits/rejected": -1.0491969585418701,
1281
+ "logps/chosen": -0.5711244344711304,
1282
+ "logps/rejected": -0.7721945643424988,
1283
+ "loss": 2.0424,
1284
+ "rewards/accuracies": 0.824999988079071,
1285
+ "rewards/chosen": -5.711243629455566,
1286
+ "rewards/margins": 2.010702133178711,
1287
+ "rewards/rejected": -7.721946716308594,
1288
+ "semantic_entropy": 0.9843025207519531,
1289
+ "step": 400
1290
+ },
1291
+ {
1292
+ "epoch": 0.8743169398907104,
1293
+ "eval_logits/chosen": -1.373081088066101,
1294
+ "eval_logits/rejected": -1.332649827003479,
1295
+ "eval_logps/chosen": -0.5314387083053589,
1296
+ "eval_logps/rejected": -0.7141891121864319,
1297
+ "eval_loss": 2.0935795307159424,
1298
+ "eval_rewards/accuracies": 0.7771084308624268,
1299
+ "eval_rewards/chosen": -5.314386367797852,
1300
+ "eval_rewards/margins": 1.8275047540664673,
1301
+ "eval_rewards/rejected": -7.1418914794921875,
1302
+ "eval_runtime": 37.9946,
1303
+ "eval_samples_per_second": 34.689,
1304
+ "eval_semantic_entropy": 0.9976296424865723,
1305
+ "eval_steps_per_second": 2.185,
1306
+ "step": 400
1307
+ },
1308
+ {
1309
+ "epoch": 0.8852459016393442,
1310
+ "grad_norm": 68.55432141696805,
1311
+ "learning_rate": 3.897961019419516e-08,
1312
+ "logits/chosen": -1.1141546964645386,
1313
+ "logits/rejected": -1.046690583229065,
1314
+ "logps/chosen": -0.5127943754196167,
1315
+ "logps/rejected": -0.656581699848175,
1316
+ "loss": 1.9895,
1317
+ "rewards/accuracies": 0.7749999761581421,
1318
+ "rewards/chosen": -5.127943992614746,
1319
+ "rewards/margins": 1.4378730058670044,
1320
+ "rewards/rejected": -6.565816402435303,
1321
+ "semantic_entropy": 1.010331630706787,
1322
+ "step": 405
1323
+ },
1324
+ {
1325
+ "epoch": 0.8961748633879781,
1326
+ "grad_norm": 78.80129315841073,
1327
+ "learning_rate": 3.192094485859526e-08,
1328
+ "logits/chosen": -1.1211316585540771,
1329
+ "logits/rejected": -1.1407296657562256,
1330
+ "logps/chosen": -0.5510164499282837,
1331
+ "logps/rejected": -0.7787143588066101,
1332
+ "loss": 2.0771,
1333
+ "rewards/accuracies": 0.762499988079071,
1334
+ "rewards/chosen": -5.5101637840271,
1335
+ "rewards/margins": 2.2769789695739746,
1336
+ "rewards/rejected": -7.787143707275391,
1337
+ "semantic_entropy": 0.9897591471672058,
1338
+ "step": 410
1339
+ },
1340
+ {
1341
+ "epoch": 0.907103825136612,
1342
+ "grad_norm": 63.07603955603986,
1343
+ "learning_rate": 2.5545911634565265e-08,
1344
+ "logits/chosen": -1.1598929166793823,
1345
+ "logits/rejected": -1.1571664810180664,
1346
+ "logps/chosen": -0.5616727471351624,
1347
+ "logps/rejected": -0.7835390567779541,
1348
+ "loss": 2.1028,
1349
+ "rewards/accuracies": 0.824999988079071,
1350
+ "rewards/chosen": -5.616726875305176,
1351
+ "rewards/margins": 2.2186641693115234,
1352
+ "rewards/rejected": -7.835390567779541,
1353
+ "semantic_entropy": 0.9976503252983093,
1354
+ "step": 415
1355
+ },
1356
+ {
1357
+ "epoch": 0.9180327868852459,
1358
+ "grad_norm": 89.74277499235377,
1359
+ "learning_rate": 1.9863821294241522e-08,
1360
+ "logits/chosen": -1.1581684350967407,
1361
+ "logits/rejected": -1.1270772218704224,
1362
+ "logps/chosen": -0.5167144536972046,
1363
+ "logps/rejected": -0.7354345321655273,
1364
+ "loss": 1.9853,
1365
+ "rewards/accuracies": 0.800000011920929,
1366
+ "rewards/chosen": -5.167144298553467,
1367
+ "rewards/margins": 2.1872007846832275,
1368
+ "rewards/rejected": -7.354344844818115,
1369
+ "semantic_entropy": 1.002636194229126,
1370
+ "step": 420
1371
+ },
1372
+ {
1373
+ "epoch": 0.9289617486338798,
1374
+ "grad_norm": 71.82508706026904,
1375
+ "learning_rate": 1.4882972562753615e-08,
1376
+ "logits/chosen": -1.1459519863128662,
1377
+ "logits/rejected": -1.1357475519180298,
1378
+ "logps/chosen": -0.5768141746520996,
1379
+ "logps/rejected": -0.7887662053108215,
1380
+ "loss": 2.1228,
1381
+ "rewards/accuracies": 0.78125,
1382
+ "rewards/chosen": -5.768141746520996,
1383
+ "rewards/margins": 2.1195199489593506,
1384
+ "rewards/rejected": -7.887660980224609,
1385
+ "semantic_entropy": 0.9999387860298157,
1386
+ "step": 425
1387
+ },
1388
+ {
1389
+ "epoch": 0.9398907103825137,
1390
+ "grad_norm": 91.24129023810345,
1391
+ "learning_rate": 1.0610639997888915e-08,
1392
+ "logits/chosen": -1.0857610702514648,
1393
+ "logits/rejected": -1.0863049030303955,
1394
+ "logps/chosen": -0.520858883857727,
1395
+ "logps/rejected": -0.7221606969833374,
1396
+ "loss": 1.9229,
1397
+ "rewards/accuracies": 0.8687499761581421,
1398
+ "rewards/chosen": -5.208588600158691,
1399
+ "rewards/margins": 2.0130181312561035,
1400
+ "rewards/rejected": -7.221606254577637,
1401
+ "semantic_entropy": 1.0139762163162231,
1402
+ "step": 430
1403
+ },
1404
+ {
1405
+ "epoch": 0.9508196721311475,
1406
+ "grad_norm": 88.53510645254667,
1407
+ "learning_rate": 7.053063365559997e-09,
1408
+ "logits/chosen": -1.147918939590454,
1409
+ "logits/rejected": -1.1737738847732544,
1410
+ "logps/chosen": -0.5408393740653992,
1411
+ "logps/rejected": -0.7648274898529053,
1412
+ "loss": 2.0597,
1413
+ "rewards/accuracies": 0.84375,
1414
+ "rewards/chosen": -5.408394813537598,
1415
+ "rewards/margins": 2.2398805618286133,
1416
+ "rewards/rejected": -7.6482744216918945,
1417
+ "semantic_entropy": 1.0074278116226196,
1418
+ "step": 435
1419
+ },
1420
+ {
1421
+ "epoch": 0.9617486338797814,
1422
+ "grad_norm": 77.47275941246323,
1423
+ "learning_rate": 4.215438526591064e-09,
1424
+ "logits/chosen": -1.08914053440094,
1425
+ "logits/rejected": -1.0488555431365967,
1426
+ "logps/chosen": -0.5592411160469055,
1427
+ "logps/rejected": -0.7156537175178528,
1428
+ "loss": 2.0346,
1429
+ "rewards/accuracies": 0.7562500238418579,
1430
+ "rewards/chosen": -5.592411518096924,
1431
+ "rewards/margins": 1.5641257762908936,
1432
+ "rewards/rejected": -7.156537055969238,
1433
+ "semantic_entropy": 0.9892482757568359,
1434
+ "step": 440
1435
+ },
1436
+ {
1437
+ "epoch": 0.9726775956284153,
1438
+ "grad_norm": 85.37542604306078,
1439
+ "learning_rate": 2.1019098481337426e-09,
1440
+ "logits/chosen": -1.1617168188095093,
1441
+ "logits/rejected": -1.1342122554779053,
1442
+ "logps/chosen": -0.5354763865470886,
1443
+ "logps/rejected": -0.7130267024040222,
1444
+ "loss": 2.0073,
1445
+ "rewards/accuracies": 0.8125,
1446
+ "rewards/chosen": -5.354763984680176,
1447
+ "rewards/margins": 1.7755035161972046,
1448
+ "rewards/rejected": -7.130267143249512,
1449
+ "semantic_entropy": 1.0182139873504639,
1450
+ "step": 445
1451
+ },
1452
+ {
1453
+ "epoch": 0.9836065573770492,
1454
+ "grad_norm": 92.55545564226749,
1455
+ "learning_rate": 7.155641507955445e-10,
1456
+ "logits/chosen": -1.0736119747161865,
1457
+ "logits/rejected": -1.0667097568511963,
1458
+ "logps/chosen": -0.5805756449699402,
1459
+ "logps/rejected": -0.7572126388549805,
1460
+ "loss": 2.146,
1461
+ "rewards/accuracies": 0.78125,
1462
+ "rewards/chosen": -5.805756568908691,
1463
+ "rewards/margins": 1.7663694620132446,
1464
+ "rewards/rejected": -7.5721259117126465,
1465
+ "semantic_entropy": 0.9835384488105774,
1466
+ "step": 450
1467
+ },
1468
+ {
1469
+ "epoch": 0.994535519125683,
1470
+ "grad_norm": 79.92510322372067,
1471
+ "learning_rate": 5.842620032053824e-11,
1472
+ "logits/chosen": -1.0938892364501953,
1473
+ "logits/rejected": -1.0882636308670044,
1474
+ "logps/chosen": -0.5764094591140747,
1475
+ "logps/rejected": -0.7290435433387756,
1476
+ "loss": 2.187,
1477
+ "rewards/accuracies": 0.7562500238418579,
1478
+ "rewards/chosen": -5.764094829559326,
1479
+ "rewards/margins": 1.5263407230377197,
1480
+ "rewards/rejected": -7.290434837341309,
1481
+ "semantic_entropy": 1.0177193880081177,
1482
+ "step": 455
1483
+ },
1484
+ {
1485
+ "epoch": 0.9989071038251366,
1486
+ "step": 457,
1487
+ "total_flos": 0.0,
1488
+ "train_loss": 2.4655840506438875,
1489
+ "train_runtime": 5955.1851,
1490
+ "train_samples_per_second": 9.833,
1491
+ "train_steps_per_second": 0.077
1492
+ }
1493
+ ],
1494
+ "logging_steps": 5,
1495
+ "max_steps": 457,
1496
+ "num_input_tokens_seen": 0,
1497
+ "num_train_epochs": 1,
1498
+ "save_steps": 1000000,
1499
+ "stateful_callbacks": {
1500
+ "TrainerControl": {
1501
+ "args": {
1502
+ "should_epoch_stop": false,
1503
+ "should_evaluate": false,
1504
+ "should_log": false,
1505
+ "should_save": true,
1506
+ "should_training_stop": true
1507
+ },
1508
+ "attributes": {}
1509
+ }
1510
+ },
1511
+ "total_flos": 0.0,
1512
+ "train_batch_size": 2,
1513
+ "trial_name": null,
1514
+ "trial_params": null
1515
+ }