========================
START TIME: Tue Jul  2 18:40:11 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M	examples/config_tiny_llama.py
M	examples/config_tiny_llama.yaml
M	examples/train_tiny_llama.sh
M	src/nanotron/models/llama.py
M	src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 18:40:13.689000 140539943585600 torch/distributed/run.py:757] 
W0702 18:40:13.689000 140539943585600 torch/distributed/run.py:757] *****************************************
W0702 18:40:13.689000 140539943585600 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0702 18:40:13.689000 140539943585600 torch/distributed/run.py:757] *****************************************
W0702 18:40:13.690000 140293142153024 torch/distributed/run.py:757] 
W0702 18:40:13.690000 140293142153024 torch/distributed/run.py:757] *****************************************
W0702 18:40:13.690000 140293142153024 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
W0702 18:40:13.690000 140293142153024 torch/distributed/run.py:757] *****************************************
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config:
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            run='%date_%jobid',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            seed=42,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            step=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            consumed_train_samples=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            benchmark_csv_path=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            ignore_sanity_checks=True),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        parallelism=ParallelismArgs(dp=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    pp=16,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    tp=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f2067bb0910>,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    tp_linear_async_communication=False,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    expert_parallel_size=1),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 eos_token_id=2,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 hidden_act='silu',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 hidden_size=2048,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 initializer_range=0.02,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 intermediate_size=4096,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 is_llama_config=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 max_position_embeddings=4096,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 num_attention_heads=32,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 num_hidden_layers=24,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 num_key_value_heads=32,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 pad_token_id=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 pretraining_tp=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 rms_norm_eps=1e-05,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 rope_scaling=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 rope_theta=10000.0,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 tie_word_embeddings=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 use_cache=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                 vocab_size=50257),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                        init_method=RandomInit(std=0.025),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                        dtype=torch.bfloat16,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                        make_vocab_size_divisible_by=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                        ddp_bucket_cap_mb=25),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                tokenizer_revision=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                tokenizer_max_length=None),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    checkpoint_interval=100000,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    save_initial_state=False,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    resume_checkpoint_path=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                    checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            log_level_replica='info',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                            iteration_step_info_interval=1),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          train_steps=20,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          micro_batch_size=8,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          batch_accumulation_per_replica=128,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          val_check_interval=-1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          limit_val_batches=0,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                          limit_test_batches=0),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                     adam_beta1=0.9,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                     adam_beta2=0.95,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                     torch_adam_is_fused=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                     name='adamW'),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                zero_stage=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                weight_decay=0.01,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                clip_grad=1.0,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                accumulate_grad_in_fp32=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        lr_warmup_steps=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        lr_warmup_style='linear',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        lr_decay_style='linear',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        lr_decay_steps=19,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        lr_decay_starting_step=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                        min_decay_lr=1e-05)),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                      start_training_step=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                      data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                                 hf_dataset_splits='train',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                                 hf_dataset_config_name=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                                 dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                                 dataset_overwrite_cache=False,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                                                 text_column_name='text'),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                    seed=42,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:                                                    num_loading_workers=32))],
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-1_pp-16_mbz-8')),
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:        lighteval=None)
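A quick sanity check of the effective batch implied by the TokensArgs/ParallelismArgs dump above (plain Python arithmetic with values copied from the config; nothing here is nanotron API):

    # sequences per optimizer step = dp * micro_batch_size * batch_accumulation_per_replica
    dp = 1
    micro_batch_size = 8
    batch_accumulation_per_replica = 128
    sequence_length = 4096

    global_batch_size = dp * micro_batch_size * batch_accumulation_per_replica
    tokens_per_step = global_batch_size * sequence_length
    print(global_batch_size)  # 1024, matching the [Start training] line below
    print(tokens_per_step)    # 4,194,304 (~4.2M tokens per optimizer step)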
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Model Config:
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             eos_token_id=2,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             hidden_act='silu',
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             hidden_size=2048,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             initializer_range=0.02,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             intermediate_size=4096,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             is_llama_config=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             max_position_embeddings=4096,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             num_attention_heads=32,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             num_hidden_layers=24,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             num_key_value_heads=32,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             pad_token_id=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             pretraining_tp=1,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             rms_norm_eps=1e-05,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             rope_scaling=None,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             rope_theta=10000.0,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             tie_word_embeddings=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             use_cache=True,
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:             vocab_size=50257)
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Building model..
[default0]:07/02/2024 18:40:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Setting PP block ranks...
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Total number of parameters: 1.21G (2312.82MiB)
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Local number of parameters: 187M (356.33MiB)
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 358.34MiB. Peak allocated: 360.37MiB Peak reserved: 368.00MiB
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Parametrizing model parameters using StandardParametrizator
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-88]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/02/2024 18:40:48 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: Local number of parameters: 41.9M (80.01MiB)
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-62]: Local number of parameters: 83.9M (160.02MiB)
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-62]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-62]: Local number of parameters: 83.9M (160.02MiB)
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: Local number of parameters: 83.9M (160.02MiB)
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: Local number of parameters: 83.9M (160.02MiB)
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-62]: Local number of parameters: 83.9M (160.02MiB)
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-88]: Local number of parameters: 83.9M (160.02MiB)
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-88]: Local number of parameters: 83.9M (160.02MiB)
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default2]:07/02/2024 18:40:48 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default1]:07/02/2024 18:40:48 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-88]: Local number of parameters: 103M (196.32MiB)
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.34MiB Peak reserved: 200.00MiB
[default6]:07/02/2024 18:40:48 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-88]: Local number of parameters: 83.9M (160.02MiB)
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default4]:07/02/2024 18:40:48 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-88]: Local number of parameters: 83.9M (160.02MiB)
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default5]:07/02/2024 18:40:48 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-88]: Local number of parameters: 0 (0.00MiB)
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.02MiB Peak reserved: 2.00MiB
[default7]:07/02/2024 18:40:48 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-88]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/02/2024 18:40:48 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-88]: No checkpoint path provided.
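The per-stage parameter counts above follow directly from the LlamaConfig: one decoder layer is ~41.9M parameters, two layers are ~83.9M, and the 103M stage is the vocab projection. A rough recount (plain Python; assumes bias-free projections as in standard Llama blocks, with the tied embedding/lm_head weights counted once per pipeline stage that hosts them):

    h, ffn, layers, vocab = 2048, 4096, 24, 50257
    qkv       = h * 3 * h                 # fused qkv; num_key_value_heads == num_attention_heads
    o_proj    = h * h
    mlp       = 2 * h * ffn + ffn * h     # gate, up, down projections
    norms     = 2 * h
    per_layer = qkv + o_proj + mlp + norms      # 41,947,136 -> the "41.9M" stages
    embed     = vocab * h                       # 102,926,336 -> the "103M" stage
    total     = layers * per_layer + 2 * embed  # ~1.21e9 -> "1.21G" (up to small norm terms)
    rank0     = embed + 2 * per_layer           # 186,820,608 -> "187M" on PP rank 0
    print(rank0 * 2 / 2**20)                    # ~356.33 MiB in bf16, as logged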
[default0]:07/02/2024 18:40:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 18:40:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/02/2024 18:40:49 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] DP Rank 0 has 187M out of 187M (100.00%) params' optimizer states
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Using `datasets` library
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/02/2024 18:40:50 [WARNING|DP=0|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] There are 1 training stages 
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Stage Training Stage] start from step 1 
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: 
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Start training] datetime: 2024-07-02 18:40:50.903488 | mbs: 8 | grad_accum: 128 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 18:40:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:  Memory usage: 1783.67MiB. Peak allocated 1783.67MiB. Peak reserved: 1796.00MiB
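One plausible breakdown of that 1783.67 MiB on rank 0 (an assumption from the config, not a nanotron-documented figure): bf16 weights plus an fp32 master copy and an fp32 gradient-accumulation buffer, with Adam's moment tensors presumably allocated lazily at the first step:

    params = 186_820_608                 # local params on PP rank 0 ("187M" above)
    bf16_weights = params * 2 / 2**20    # ~356.33 MiB
    fp32_master  = params * 4 / 2**20    # ~712.66 MiB (zero_stage=1 shards nothing at dp=1)
    fp32_grads   = params * 4 / 2**20    # ~712.66 MiB (accumulate_grad_in_fp32=True)
    print(bf16_weights + fp32_master + fp32_grads)   # ~1781.7 MiB vs. 1783.67 logged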
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 18:40:51 [WARNING|DP=0|PP=5|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 18:40:51 [WARNING|DP=0|PP=13|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 18:40:51 [WARNING|DP=0|PP=9|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 18:40:51 [WARNING|DP=0|PP=8|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 18:40:51 [WARNING|DP=0|PP=4|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 18:40:51 [WARNING|DP=0|PP=2|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 18:40:51 [WARNING|DP=0|PP=7|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 18:40:51 [WARNING|DP=0|PP=1|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 18:40:51 [WARNING|DP=0|PP=3|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 18:40:51 [WARNING|DP=0|PP=6|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 18:40:51 [WARNING|DP=0|PP=10|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 18:40:51 [WARNING|DP=0|PP=12|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 18:40:51 [WARNING|DP=0|PP=14|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 18:40:51 [WARNING|DP=0|PP=15|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 18:40:51 [WARNING|DP=0|PP=11|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:[rank0]: Traceback (most recent call last):
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank0]:     trainer.train(dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank0]:     outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank0]:     outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 252, in train_batch_iter
[default0]:[rank0]:     output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank0]:     output = model(**micro_batch)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank0]:     sharded_logits = self.model(
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank0]:     return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank0]:     hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank0]:     output = self.pp_block(**new_kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward
[default0]:[rank0]:     output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 360, in forward
[default0]:[rank0]:     qkv_states = self.qkv_proj(
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward
[default0]:[rank0]:     return column_linear(
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear
[default0]:[rank0]:     return F.linear(input, weight, bias)
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 384.00 MiB. GPU 
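The 384.00 MiB that failed to allocate is exactly the bf16 output of the fused qkv projection for one micro-batch, which points at activation memory rather than weights (plain arithmetic, not nanotron code):

    mbs, seq, hidden = 8, 4096, 2048
    qkv_out = 3 * hidden                      # q, k, v fused; num_key_value_heads == num_attention_heads
    print(mbs * seq * qkv_out * 2 / 2**20)    # 384.0 MiB in bf16, matching the OOM message

Under the 1F1B engine with pp=16, the first stage keeps activations for roughly pp in-flight micro-batches during warm-up, so at sequence_length=4096 the micro_batch_size of 8 is the natural knob to turn down first.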
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
W0702 18:41:24.865000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707498 closing signal SIGTERM
W0702 18:41:24.865000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707499 closing signal SIGTERM
W0702 18:41:24.866000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707500 closing signal SIGTERM
W0702 18:41:24.867000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707501 closing signal SIGTERM
W0702 18:41:24.867000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707502 closing signal SIGTERM
W0702 18:41:24.868000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707503 closing signal SIGTERM
W0702 18:41:24.870000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3707504 closing signal SIGTERM
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
E0702 18:41:27.800000 140539943585600 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 3707497) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-02_18:41:24
  host      : ip-26-0-171-62.ec2.internal
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3707497)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: ip-26-0-171-62: task 0: Exited with exit code 1
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]:  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
W0702 18:41:29.830000 140287475332864 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-88.ec2.internal_696127_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 18:41:29.866000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696196 closing signal SIGTERM
W0702 18:41:29.866000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696197 closing signal SIGTERM
W0702 18:41:29.867000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696198 closing signal SIGTERM
W0702 18:41:29.869000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696199 closing signal SIGTERM
W0702 18:41:29.870000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696200 closing signal SIGTERM
W0702 18:41:29.871000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696201 closing signal SIGTERM
W0702 18:41:29.872000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696202 closing signal SIGTERM
W0702 18:41:29.873000 140293142153024 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 696203 closing signal SIGTERM
W0702 18:41:32.911000 140293142153024 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_696127_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 18:41:32.923000 140293142153024 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_696127_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-171-88: task 1: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.