========================
START TIME: Wed Jul  3 01:01:47 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M	examples/config_tiny_llama.py
M	examples/config_tiny_llama.yaml
M	examples/train_tiny_llama.sh
M	src/nanotron/models/llama.py
M	src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0703 01:01:50.304000 139848241071936 torch/distributed/run.py:757] 
W0703 01:01:50.304000 139848241071936 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.304000 139848241071936 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.304000 139848241071936 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.306000 140174290601792 torch/distributed/run.py:757] 
W0703 01:01:50.306000 140174290601792 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.306000 140174290601792 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.306000 140174290601792 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.311000 140430853101376 torch/distributed/run.py:757] 
W0703 01:01:50.311000 140430853101376 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.311000 140430853101376 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.311000 140430853101376 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.312000 140553062291264 torch/distributed/run.py:757] 
W0703 01:01:50.312000 140553062291264 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.312000 140553062291264 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.312000 140553062291264 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.317000 140071857948480 torch/distributed/run.py:757] 
W0703 01:01:50.317000 140071857948480 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.317000 140071857948480 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.317000 140071857948480 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.317000 140653719533376 torch/distributed/run.py:757] 
W0703 01:01:50.317000 140653719533376 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.317000 140653719533376 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.317000 140653719533376 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.336000 139880512571200 torch/distributed/run.py:757] 
W0703 01:01:50.336000 139880512571200 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.336000 139880512571200 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.336000 139880512571200 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.346000 139821835138880 torch/distributed/run.py:757] 
W0703 01:01:50.346000 139821835138880 torch/distributed/run.py:757] *****************************************
W0703 01:01:50.346000 139821835138880 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0703 01:01:50.346000 139821835138880 torch/distributed/run.py:757] *****************************************
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config:
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            run='%date_%jobid',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            seed=42,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            step=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            consumed_train_samples=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            benchmark_csv_path=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            ignore_sanity_checks=True),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        parallelism=ParallelismArgs(dp=64,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    pp=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    tp=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f4e5e7bc670>,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    tp_linear_async_communication=False,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    expert_parallel_size=1),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 eos_token_id=2,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 hidden_act='silu',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 hidden_size=2048,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 initializer_range=0.02,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 intermediate_size=4096,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 is_llama_config=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 max_position_embeddings=4096,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 num_attention_heads=32,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 num_hidden_layers=24,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 num_key_value_heads=32,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 pad_token_id=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 pretraining_tp=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 rms_norm_eps=1e-05,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 rope_scaling=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 rope_theta=10000.0,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 tie_word_embeddings=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 use_cache=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                 vocab_size=50257),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                        init_method=RandomInit(std=0.025),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                        dtype=torch.bfloat16,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                        make_vocab_size_divisible_by=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                        ddp_bucket_cap_mb=25),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                tokenizer_revision=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                tokenizer_max_length=None),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    checkpoint_interval=100000,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    save_initial_state=False,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    resume_checkpoint_path=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                    checkpoints_path_is_shared_file_system=False),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        logging=LoggingArgs(log_level='info',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            log_level_replica='info',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                            iteration_step_info_interval=1),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        tokens=TokensArgs(sequence_length=4096,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          train_steps=20,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          micro_batch_size=2,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          batch_accumulation_per_replica=8,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          val_check_interval=-1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          limit_val_batches=0,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                          limit_test_batches=0),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                     adam_beta1=0.9,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                     adam_beta2=0.95,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                     torch_adam_is_fused=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                     name='adamW'),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                zero_stage=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                weight_decay=0.01,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                clip_grad=1.0,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                accumulate_grad_in_fp32=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        lr_warmup_steps=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        lr_warmup_style='linear',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        lr_decay_style='linear',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        lr_decay_steps=19,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        lr_decay_starting_step=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                        min_decay_lr=1e-05)),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                      start_training_step=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                      data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                                 hf_dataset_splits='train',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                                 hf_dataset_config_name=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                                 dataset_processing_num_proc_per_process=64,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                                 dataset_overwrite_cache=False,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                                                 text_column_name='text'),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                    seed=42,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:                                                    num_loading_workers=0))],
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-64_tp-1_pp-1_mbz-2')),
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:        lighteval=None)
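A quick sanity check of the batch geometry implied by the config dump above (a standalone Python sketch; every number is copied from the dump, none are new):

# Effective batch size implied by the config above:
# micro_batch_size=2, batch_accumulation_per_replica=8, dp=64, sequence_length=4096.
micro_batch_size = 2
grad_accum = 8          # batch_accumulation_per_replica
dp = 64                 # data-parallel replicas
sequence_length = 4096

global_batch_size = micro_batch_size * grad_accum * dp
tokens_per_step = global_batch_size * sequence_length

print(global_batch_size)  # 1024 -- matches the [Start training] line further down
print(tokens_per_step)    # 4194304 (~4.2M tokens per optimizer step)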
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Model Config:
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: LlamaConfig(bos_token_id=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             eos_token_id=2,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             hidden_act='silu',
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             hidden_size=2048,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             initializer_range=0.02,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             intermediate_size=4096,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             is_llama_config=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             max_position_embeddings=4096,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             num_attention_heads=32,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             num_hidden_layers=24,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             num_key_value_heads=32,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             pad_token_id=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             pretraining_tp=1,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             rms_norm_eps=1e-05,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             rope_scaling=None,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             rope_theta=10000.0,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             tie_word_embeddings=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             use_cache=True,
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:             vocab_size=50257)
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Building model..
[default0]:07/03/2024 01:02:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Setting PP block ranks...
[default3]:07/03/2024 01:02:20 [INFO|DP=59|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=57|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=58|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=61|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=60|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=2|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=28|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=30|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=25|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=10|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=1|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=12|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=5|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Total number of parameters: 1.11G (2116.51MiB)
[default0]:07/03/2024 01:02:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Local number of parameters: 1.11G (2116.51MiB)
[default0]:07/03/2024 01:02:20 [INFO|DP=8|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [After model building] Memory usage: 2140.53MiB. Peak allocated: 2338.88MiB Peak reserved: 2392.00MiB
[default0]:07/03/2024 01:02:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Parametrizing model parameters using StandardParametrizator
[default3]:07/03/2024 01:02:20 [INFO|DP=11|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=9|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=15|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=6|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=63|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=34|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=37|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=33|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=4|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=3|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=24|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=31|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=40|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=45|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=26|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=44|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=47|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=27|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=43|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=62|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=29|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=13|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=14|PP=0|TP=0|ip-26-0-161-103]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=32|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=19|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=16|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=7|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=17|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=21|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=42|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=18|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=20|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=39|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=22|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=38|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=46|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=52|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=55|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=49|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=48|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/03/2024 01:02:20 [INFO|DP=23|PP=0|TP=0|ip-26-0-161-153]: No checkpoint path provided.
[default5]:07/03/2024 01:02:20 [INFO|DP=53|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default6]:07/03/2024 01:02:20 [INFO|DP=54|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default1]:07/03/2024 01:02:20 [INFO|DP=41|PP=0|TP=0|ip-26-0-171-102]: No checkpoint path provided.
[default0]:07/03/2024 01:02:20 [INFO|DP=56|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=35|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default4]:07/03/2024 01:02:20 [INFO|DP=36|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default2]:07/03/2024 01:02:20 [INFO|DP=50|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default3]:07/03/2024 01:02:20 [INFO|DP=51|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 0 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 1 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 2 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 3 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 4 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 5 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 6 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 7 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 8 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 9 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 10 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 11 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 12 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 13 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 14 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 15 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 16 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 17 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 18 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 19 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 20 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 21 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 22 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 23 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 24 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 25 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 26 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 27 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 28 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 29 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 30 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 31 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 32 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 33 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 34 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 35 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 36 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 37 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 38 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 39 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 40 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 41 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 42 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 43 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 44 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 45 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 46 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 47 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 48 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 49 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 50 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 51 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 52 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 53 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 54 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 55 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 56 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 57 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 58 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 59 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 60 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 61 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 62 has 17.3M out of 1.11G (1.56%) params' optimizer states
[default0]:07/03/2024 01:02:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 63 has 17.3M out of 1.11G (1.56%) params' optimizer states
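The per-rank numbers above follow directly from ZeRO stage 1 over dp=64: each rank holds the optimizer states for 1/64 of the parameters. A minimal check (Python; the 2116.51MiB bf16 footprint is taken from the "Local number of parameters" line earlier):

# ZeRO-1 sharding arithmetic for the log lines above.
# bf16 stores 2 bytes/param, so 2116.51 MiB -> ~1.11e9 parameters.
total_params = 2116.51 * 2**20 / 2   # ~1.11G
dp = 64

params_per_rank = total_params / dp      # optimizer-state shard per DP rank
share = params_per_rank / total_params   # 1/64

print(f"{params_per_rank / 1e6:.1f}M params per rank")  # ~17.3M, as logged
print(f"{share:.2%} of the model")                      # 1.56%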
[default0]:07/03/2024 01:02:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/03/2024 01:02:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Using `datasets` library
[default0]:07/03/2024 01:02:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/03/2024 01:02:30 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] There are 1 training stages 
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Stage Training Stage] start from step 1 
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: 
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Start training] datetime: 2024-07-03 01:02:36.757905 | mbs: 2 | grad_accum: 8 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/03/2024 01:02:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6440.67MiB. Peak allocated 6440.67MiB. Peak reserved: 6626.00MiB
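For reference, the learning-rate schedule the config describes (linear warmup to 1e-4 over 1 step, then linear decay over 19 steps to 1e-5) can be sketched as below. This is an illustrative reconstruction, not nanotron's LearningRateForSP implementation, and the exact step indexing is an assumption:

# Illustrative LR schedule from the config dump: linear warmup for 1 step
# to learning_rate=1e-4, then linear decay over lr_decay_steps=19 down to
# min_decay_lr=1e-5. Steps are 1-indexed here (an assumption).
peak_lr, min_lr = 1e-4, 1e-5
warmup_steps, decay_steps = 1, 19

def lr_at(step: int) -> float:
    if step <= warmup_steps:                    # linear warmup
        return peak_lr * step / warmup_steps
    t = min(step - warmup_steps, decay_steps)   # clamp past end of schedule
    return peak_lr - (peak_lr - min_lr) * t / decay_steps

for step in (1, 5, 10, 20):
    print(step, f"{lr_at(step):.2e}")  # 1.00e-04 at step 1 ... 1.00e-05 at step 20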
[default0]:07/03/2024 01:02:36 [WARNING|DP=56|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=59|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=58|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=60|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=30|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=25|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=15|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=44|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:36 [WARNING|DP=45|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=40|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=43|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=47|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=11|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=8|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=3|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=63|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=62|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=33|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=34|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:36 [WARNING|DP=13|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=32|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=26|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=31|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:36 [WARNING|DP=29|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=7|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=36|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=18|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=39|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:36 [WARNING|DP=21|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=17|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=42|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=20|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=52|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=22|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=35|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=46|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=41|PP=0|TP=0|ip-26-0-171-102]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:36 [WARNING|DP=55|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=48|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:36 [WARNING|DP=51|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=54|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=57|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:36 [WARNING|DP=61|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=2|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:37 [WARNING|DP=1|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:37 [WARNING|DP=9|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:36 [WARNING|DP=12|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:36 [WARNING|DP=10|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=24|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=6|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:37 [WARNING|DP=5|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:37 [WARNING|DP=37|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:37 [WARNING|DP=14|PP=0|TP=0|ip-26-0-161-103]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 01:02:36 [WARNING|DP=16|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:37 [WARNING|DP=19|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 01:02:36 [WARNING|DP=38|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 01:02:37 [WARNING|DP=50|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 01:02:36 [WARNING|DP=49|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 01:02:37 [WARNING|DP=23|PP=0|TP=0|ip-26-0-161-153]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 01:02:37 [WARNING|DP=53|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:37 [WARNING|DP=28|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/03/2024 01:02:37 [WARNING|DP=4|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 01:02:37 [WARNING|DP=27|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
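This UserWarning fires once per rank on the first backward pass through the c10d::allreduce_ op; the remedy the message itself names (registering torch::CppFunction::makeFallthrough() to DispatchKey::Autograd) is a C++-side change inside PyTorch. From the training script, the repeated message can instead be filtered; a minimal sketch using only the standard warnings module, where the regex is assumed to match the start of the message text shown above:

    import warnings

    # Suppress the repeated c10d::allreduce_ autograd UserWarning on every rank.
    # filterwarnings matches `message` as a regex against the warning's text.
    warnings.filterwarnings(
        "ignore",
        message=r"c10d::allreduce_: an autograd kernel was not registered",
        category=UserWarning,
    )

Note this only silences the log noise; it does not change the deprecated backprop behavior the warning describes.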
[default0]:07/03/2024 01:02:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6516.81MiB. Peak allocated 24339.79MiB. Peak reserved: 25206.00MiB
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
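The warning above flags that no Autograd kernel is registered for c10d::allreduce_; the remedy it names, if the op really is non-differentiable, is a fallthrough registration on the Autograd key. A minimal sketch of that registration via the Python torch.library API (an illustration added here, not code from this benchmark; torch.library.fallthrough_kernel needs a recent PyTorch 2.x):

import torch

# Register a fallthrough on the Autograd dispatch key for c10d::allreduce_,
# telling autograd to treat the op as non-differentiable and pass through.
# This is the Python counterpart of the torch::CppFunction::makeFallthrough()
# registration suggested in the warning text; sketch only.
lib = torch.library.Library("c10d", "IMPL")
lib.impl("allreduce_", torch.library.fallthrough_kernel, "Autograd")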
[default0]:07/03/2024 01:02:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 12.6K | tokens_per_sec: 333K | tokens_per_sec_per_gpu: 5.21K | global_batch_size: 1.02K | lm_loss: 11.3 | lr: 0.0001 | model_tflops_per_gpu: 47.2 | hardware_tflops_per_gpu: 47.2 | grad_norm: 33.1 | cuda_memory_allocated: 6.97G | cuda_max_memory_reserved: 28.7G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.7G | hd_free_memory_tb: 245G
[default0]:07/03/2024 01:02:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 10915.31MiB. Peak reserved: 27418.00MiB
[default0]:07/03/2024 01:02:51 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.21MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:02:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 8.38K | tokens_per_sec: 500K | tokens_per_sec_per_gpu: 7.82K | global_batch_size: 1.02K | lm_loss: 11.3 | lr: 9.53e-05 | model_tflops_per_gpu: 70.9 | hardware_tflops_per_gpu: 70.9 | grad_norm: 33.3 | cuda_memory_allocated: 6.97G | cuda_max_memory_reserved: 28.8G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.7G | hd_free_memory_tb: 245G
[default0]:07/03/2024 01:02:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 10915.32MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:02:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.21MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:03:06 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 8.25K | tokens_per_sec: 508K | tokens_per_sec_per_gpu: 7.94K | global_batch_size: 1.02K | lm_loss: 16 | lr: 9.05e-05 | model_tflops_per_gpu: 72.1 | hardware_tflops_per_gpu: 72.1 | grad_norm: 249 | cuda_memory_allocated: 6.97G | cuda_max_memory_reserved: 28.8G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.7G | hd_free_memory_tb: 245G
[default0]:07/03/2024 01:03:06 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 10915.32MiB. Peak reserved: 27444.00MiB
[default0]:STAGE:2024-07-03 01:03:06 1777729:1777729 ActivityProfilerController.cpp:314] Completed Stage: Warm Up
[default0]:07/03/2024 01:03:07 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.21MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:03:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 8.39K | tokens_per_sec: 500K | tokens_per_sec_per_gpu: 7.81K | global_batch_size: 1.02K | lm_loss: 15.1 | lr: 8.58e-05 | model_tflops_per_gpu: 70.9 | hardware_tflops_per_gpu: 70.9 | grad_norm: 41.8 | cuda_memory_allocated: 6.97G | cuda_max_memory_reserved: 28.8G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.7G | hd_free_memory_tb: 245G
[default0]:07/03/2024 01:03:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 10915.32MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:03:22 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 8.39K | tokens_per_sec: 500K | tokens_per_sec_per_gpu: 7.81K | global_batch_size: 1.02K | lm_loss: 10.8 | lr: 8.11e-05 | model_tflops_per_gpu: 70.9 | hardware_tflops_per_gpu: 70.9 | grad_norm: 25.9
[default0]:07/03/2024 01:03:22 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:03:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 8.33K | tokens_per_sec: 503K | tokens_per_sec_per_gpu: 7.86K | global_batch_size: 1.02K | lm_loss: 10.8 | lr: 7.63e-05 | model_tflops_per_gpu: 71.4 | hardware_tflops_per_gpu: 71.4 | grad_norm: 18.8
[default0]:STAGE:2024-07-03 01:03:35 1777729:1777729 ActivityProfilerController.cpp:320] Completed Stage: Collection
[default0]:STAGE:2024-07-03 01:03:36 1777729:1777729 ActivityProfilerController.cpp:324] Completed Stage: Post Processing
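The STAGE messages interleaved above are Kineto profiler stages. A hypothetical reconstruction of a torch.profiler schedule consistent with warm-up finishing near iteration 3 and collection near iteration 6 (the wait/warmup/active values and trace directory are illustrative assumptions, not read from the benchmark's code):

import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

# wait=2, warmup=1, active=3: steps 1-2 idle, step 3 warms up, steps 4-6 record,
# which would line up with the "Warm Up" and "Collection" STAGE messages above.
prof = profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=2, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./profiler"),  # output dir is an assumption
)
prof.start()
for step in range(20):
    # ... one training iteration ...
    prof.step()
prof.stop()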
[default0]:07/03/2024 01:04:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 7 / 20 | consumed_tokens: 29.4M | elapsed_time_per_iteration_ms: 2.13K | tokens_per_sec: 1.97M | tokens_per_sec_per_gpu: 30.8K | global_batch_size: 1.02K | lm_loss: 10.2 | lr: 7.16e-05 | model_tflops_per_gpu: 279 | hardware_tflops_per_gpu: 279 | grad_norm: 7.96
[default0]:07/03/2024 01:04:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:26 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 8 / 20 | consumed_tokens: 33.6M | elapsed_time_per_iteration_ms: 8.34K | tokens_per_sec: 503K | tokens_per_sec_per_gpu: 7.86K | global_batch_size: 1.02K | lm_loss: 9.15 | lr: 6.68e-05 | model_tflops_per_gpu: 71.3 | hardware_tflops_per_gpu: 71.3 | grad_norm: 6.46
[default0]:07/03/2024 01:04:26 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:34 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 9 / 20 | consumed_tokens: 37.7M | elapsed_time_per_iteration_ms: 8.25K | tokens_per_sec: 509K | tokens_per_sec_per_gpu: 7.95K | global_batch_size: 1.02K | lm_loss: 11.1 | lr: 6.21e-05 | model_tflops_per_gpu: 72.1 | hardware_tflops_per_gpu: 72.1 | grad_norm: 59
[default0]:07/03/2024 01:04:34 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:42 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 10 / 20 | consumed_tokens: 41.9M | elapsed_time_per_iteration_ms: 8.26K | tokens_per_sec: 508K | tokens_per_sec_per_gpu: 7.93K | global_batch_size: 1.02K | lm_loss: 9.52 | lr: 5.74e-05 | model_tflops_per_gpu: 72 | hardware_tflops_per_gpu: 72 | grad_norm: 43.1
[default0]:07/03/2024 01:04:42 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:51 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 11 / 20 | consumed_tokens: 46.1M | elapsed_time_per_iteration_ms: 8.36K | tokens_per_sec: 502K | tokens_per_sec_per_gpu: 7.84K | global_batch_size: 1.02K | lm_loss: 8.08 | lr: 5.26e-05 | model_tflops_per_gpu: 71.2 | hardware_tflops_per_gpu: 71.2 | grad_norm: 8.48
[default0]:07/03/2024 01:04:51 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:04:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 12 / 20 | consumed_tokens: 50.3M | elapsed_time_per_iteration_ms: 8.25K | tokens_per_sec: 509K | tokens_per_sec_per_gpu: 7.95K | global_batch_size: 1.02K | lm_loss: 7.85 | lr: 4.79e-05 | model_tflops_per_gpu: 72.1 | hardware_tflops_per_gpu: 72.1 | grad_norm: 5.1
[default0]:07/03/2024 01:04:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:07 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 13 / 20 | consumed_tokens: 54.5M | elapsed_time_per_iteration_ms: 8.3K | tokens_per_sec: 506K | tokens_per_sec_per_gpu: 7.9K | global_batch_size: 1.02K | lm_loss: 7.7 | lr: 4.32e-05 | model_tflops_per_gpu: 71.7 | hardware_tflops_per_gpu: 71.7 | grad_norm: 4.76
[default0]:07/03/2024 01:05:07 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 14 / 20 | consumed_tokens: 58.7M | elapsed_time_per_iteration_ms: 8.39K | tokens_per_sec: 500K | tokens_per_sec_per_gpu: 7.81K | global_batch_size: 1.02K | lm_loss: 7.55 | lr: 3.84e-05 | model_tflops_per_gpu: 70.9 | hardware_tflops_per_gpu: 70.9 | grad_norm: 5.08
[default0]:07/03/2024 01:05:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:24 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 15 / 20 | consumed_tokens: 62.9M | elapsed_time_per_iteration_ms: 8.34K | tokens_per_sec: 503K | tokens_per_sec_per_gpu: 7.86K | global_batch_size: 1.02K | lm_loss: 7.4 | lr: 3.37e-05 | model_tflops_per_gpu: 71.3 | hardware_tflops_per_gpu: 71.3 | grad_norm: 5.14
[default0]:07/03/2024 01:05:24 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:32 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 16 / 20 | consumed_tokens: 67.1M | elapsed_time_per_iteration_ms: 8.35K | tokens_per_sec: 502K | tokens_per_sec_per_gpu: 7.85K | global_batch_size: 1.02K | lm_loss: 7.3 | lr: 2.89e-05 | model_tflops_per_gpu: 71.2 | hardware_tflops_per_gpu: 71.2 | grad_norm: 5.23
[default0]:07/03/2024 01:05:32 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 17 / 20 | consumed_tokens: 71.3M | elapsed_time_per_iteration_ms: 8.31K | tokens_per_sec: 505K | tokens_per_sec_per_gpu: 7.88K | global_batch_size: 1.02K | lm_loss: 7.23 | lr: 2.42e-05 | model_tflops_per_gpu: 71.5 | hardware_tflops_per_gpu: 71.5 | grad_norm: 5.28
[default0]:07/03/2024 01:05:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 18 / 20 | consumed_tokens: 75.5M | elapsed_time_per_iteration_ms: 8.23K | tokens_per_sec: 510K | tokens_per_sec_per_gpu: 7.97K | global_batch_size: 1.02K | lm_loss: 7.15 | lr: 1.95e-05 | model_tflops_per_gpu: 72.3 | hardware_tflops_per_gpu: 72.3 | grad_norm: 5.06
[default0]:07/03/2024 01:05:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:05:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 19 / 20 | consumed_tokens: 79.7M | elapsed_time_per_iteration_ms: 8.27K | tokens_per_sec: 507K | tokens_per_sec_per_gpu: 7.92K | global_batch_size: 1.02K | lm_loss: 7.08 | lr: 1.47e-05 | model_tflops_per_gpu: 71.9 | hardware_tflops_per_gpu: 71.9 | grad_norm: 3.86
[default0]:07/03/2024 01:05:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:  Memory usage: 6649.20MiB. Peak allocated 24472.19MiB. Peak reserved: 27444.00MiB
[default0]:07/03/2024 01:06:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 20 / 20 | consumed_tokens: 83.9M | elapsed_time_per_iteration_ms: 8.33K | tokens_per_sec: 503K | tokens_per_sec_per_gpu: 7.86K | global_batch_size: 1.02K | lm_loss: 7.03 | lr: 1e-05 | model_tflops_per_gpu: 71.3 | hardware_tflops_per_gpu: 71.3 | grad_norm: 2.87
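As a quick sanity check, the steady-state throughput figures above are internally consistent; a worked example against iteration 20, assuming a sequence length of 4096 (inferred from ~4.19M consumed tokens per iteration over a global batch of 1.02K samples) and the 64 GPUs in the run name:

# consumed tokens per iteration = global_batch_size x sequence_length
tokens_per_iter = 1024 * 4096             # ~4.19M, matching consumed_tokens
elapsed_s = 8.33                          # elapsed_time_per_iteration_ms: 8.33K
print(tokens_per_iter / elapsed_s)        # ~503K, matching tokens_per_sec
print(tokens_per_iter / elapsed_s / 64)   # ~7.86K, matching tokens_per_sec_per_gpu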
W0703 01:06:27.076000 140547401557760 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-88.ec2.internal_881444_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousTimeoutError.
W0703 01:06:27.146000 140653719533376 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-161-153.ec2.internal_1419394_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0703 01:06:27.146000 139880512571200 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-161-103.ec2.internal_868134_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Saved 1 csv file over 1 completed log
Processing file: /fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-64_tp-1_pp-1_mbz-2/profiler/ip-26-0-160-225_1777729.1719968648431741219.pt.trace.json
Results written to /fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-64_tp-1_pp-1_mbz-2/profiler.csv
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
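Enabling the hf_transfer backend mentioned in that hint is an environment switch; a sketch (assumes hf_transfer is pip-installed, and the variable must be set before huggingface_hub is imported):

import os

# Opt in to the Rust-based hf_transfer transport for uploads/downloads.
# Requires `pip install hf_transfer`; set before importing huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi  # subsequent uploads go through hf_transfer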

ip-26-0-160-225_1777729.1719968648431741219.pt.trace.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.18G/1.18G [00:22<00:00, 53.3MB/s]