3outeille
Upload llama-1B/64_GPUS/dp-1_tp-1_pp-64_mbz-8
c560845 verified
========================
START TIME: Sat Jul 6 09:40:58 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
[2024-07-06 09:41:01,073] torch.distributed.run: [WARNING]
[2024-07-06 09:41:01,073] torch.distributed.run: [WARNING] *****************************************
[2024-07-06 09:41:01,073] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-07-06 09:41:01,073] torch.distributed.run: [WARNING] *****************************************
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Config:
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: run='%date_%jobid',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: seed=42,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: step=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: consumed_train_samples=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: benchmark_csv_path=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: ignore_sanity_checks=True),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: parallelism=ParallelismArgs(dp=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pp=64,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tp=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.AllForwardAllBackwardPipelineEngine object at 0x7fd6726f0730>,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tp_linear_async_communication=False,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: expert_parallel_size=1),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: eos_token_id=2,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hidden_act='silu',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hidden_size=2048,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: initializer_range=0.02,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: intermediate_size=4096,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: is_llama_config=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: max_position_embeddings=4096,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_attention_heads=32,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_hidden_layers=24,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_key_value_heads=32,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pad_token_id=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pretraining_tp=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rms_norm_eps=1e-05,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rope_scaling=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rope_theta=10000.0,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tie_word_embeddings=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: use_cache=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: vocab_size=50257),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: init_method=RandomInit(std=0.025),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: dtype=torch.bfloat16,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: make_vocab_size_divisible_by=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: ddp_bucket_cap_mb=25),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tokenizer_revision=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tokenizer_max_length=None),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: checkpoints=CheckpointsArgs(checkpoints_path=PosixPath('/dev/null'),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: checkpoint_interval=100000,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: save_initial_state=False,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: resume_checkpoint_path=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: checkpoints_path_is_shared_file_system=False),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: logging=LoggingArgs(log_level='info',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: log_level_replica='info',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: iteration_step_info_interval=1),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: train_steps=20,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: micro_batch_size=8,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: batch_accumulation_per_replica=128,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: val_check_interval=-1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: limit_val_batches=0,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: limit_test_batches=0),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: adam_beta1=0.9,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: adam_beta2=0.95,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: torch_adam_is_fused=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: name='adamW'),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: zero_stage=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: weight_decay=0.01,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: clip_grad=1.0,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: accumulate_grad_in_fp32=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lr_warmup_steps=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lr_warmup_style='linear',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lr_decay_style='linear',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lr_decay_steps=19,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lr_decay_starting_step=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: min_decay_lr=1e-05)),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: start_training_step=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hf_dataset_splits='train',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hf_dataset_config_name=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: dataset_processing_num_proc_per_process=64,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: dataset_overwrite_cache=False,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: text_column_name='text'),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: seed=42,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_loading_workers=0))],
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: profiler=ProfilerArgs(profiler_export_path=PosixPath('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-1_tp-1_pp-64_mbz-8')),
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: lighteval=None)
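As a sanity check on the config above, the per-step batch implied by `micro_batch_size=8`, `batch_accumulation_per_replica=128`, `dp=1`, and `sequence_length=4096` works out as follows (a minimal sketch; the variable names are ours, not nanotron's):

```python
# Effective batch size implied by the config dump above (names are illustrative).
dp = 1                  # data-parallel replicas (ParallelismArgs.dp)
micro_batch_size = 8    # sequences per micro-batch (the "mbz-8" in the run name)
grad_accum = 128        # batch_accumulation_per_replica
sequence_length = 4096  # TokensArgs.sequence_length

samples_per_step = dp * micro_batch_size * grad_accum
tokens_per_step = samples_per_step * sequence_length

print(samples_per_step)  # 1024 sequences per optimizer step
print(tokens_per_step)   # 4194304 (~4.2M tokens per optimizer step)
```

So each of the 20 train steps consumes roughly 4.2M tokens, all fed through a single data-parallel replica.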
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Model Config:
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: LlamaConfig(bos_token_id=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: eos_token_id=2,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hidden_act='silu',
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: hidden_size=2048,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: initializer_range=0.02,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: intermediate_size=4096,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: is_llama_config=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: max_position_embeddings=4096,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_attention_heads=32,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_hidden_layers=24,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: num_key_value_heads=32,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pad_token_id=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: pretraining_tp=1,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rms_norm_eps=1e-05,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rope_scaling=None,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: rope_theta=10000.0,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: tie_word_embeddings=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: use_cache=True,
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: vocab_size=50257)
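The per-rank and total sizes reported below (41.9M / 80.01MiB per transformer block, 1.21G / 2312.82MiB total) can be reproduced from this LlamaConfig. A minimal estimate, assuming a standard LLaMA block (q/k/v/o projections, gate/up/down MLP, two RMSNorms) and noting that with `pp=64` the tied embedding and LM head land on different pipeline stages and are therefore both counted:

```python
# Parameter-count estimate for the LlamaConfig above (standard LLaMA block assumed).
hidden, intermediate = 2048, 4096
layers, vocab = 24, 50257

embed = vocab * hidden           # input embedding (LM head is a same-size copy on another stage)
attn = 4 * hidden * hidden       # q, k, v, o projections (num_key_value_heads == num_attention_heads)
mlp = 3 * hidden * intermediate  # gate, up, down projections
norms = 2 * hidden               # two RMSNorms per block
per_layer = attn + mlp + norms   # matches the "41.9M" per-rank figure in the log

total = layers * per_layer + 2 * embed + hidden  # + final RMSNorm
print(per_layer, round(per_layer * 2 / 2**20, 2))  # bf16 bytes -> 80.01 MiB
print(total, round(total * 2 / 2**20, 2))          # ~1.21G params, 2312.82 MiB
```

The bf16 sizes (2 bytes per parameter) line up with the log: 41,947,136 params per block is 80.01MiB, and 1,212,585,984 total is 2312.82MiB.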
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Building model..
[default0]:07/06/2024 09:41:19 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Setting PP block ranks...
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=6|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=7|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=7|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=6|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=6|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=56|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=56|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=56|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=7|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=59|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=59|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=59|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=3|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=3|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=3|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=5|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=5|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=5|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=41|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=41|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=41|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=46|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=46|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=46|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=40|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=40|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=40|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=2|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=2|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=2|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=43|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=43|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=61|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=61|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=61|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=48|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=43|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=48|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=48|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=1|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=1|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=47|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=47|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=47|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=58|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=52|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=32|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=32|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=32|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=1|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=58|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=58|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=25|TP=0|ip-26-0-175-132]: Local number of parameters: 103M (196.32MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=25|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.35MiB Peak reserved: 200.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=25|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=27|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=27|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=54|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=37|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Total number of parameters: 1.21G (2312.82MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=57|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=57|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=57|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=27|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=54|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=37|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=37|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Local number of parameters: 145M (276.32MiB)
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=42|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=42|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=42|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=62|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=55|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=55|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=35|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=4|TP=0|ip-26-0-161-78]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=62|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=52|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=54|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=35|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=35|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=4|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [After model building] Memory usage: 277.33MiB. Peak allocated: 279.36MiB Peak reserved: 294.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=60|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=55|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=39|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=4|TP=0|ip-26-0-161-78]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=62|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=52|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=39|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=39|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Parametrizing model parameters using StandardParametrizator
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=45|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=60|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=49|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=49|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=49|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=38|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=45|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=44|TP=0|ip-26-0-175-170]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=60|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=51|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=38|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=38|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=44|TP=0|ip-26-0-175-170]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=45|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=63|TP=0|ip-26-0-175-34]: Local number of parameters: 0 (0.00MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=51|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=51|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=12|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=34|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=44|TP=0|ip-26-0-175-170]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=63|TP=0|ip-26-0-175-34]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=63|TP=0|ip-26-0-175-34]: No checkpoint path provided.
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=29|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=29|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=29|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=50|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=8|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=34|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=34|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=50|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=33|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=50|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=10|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=33|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=33|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=24|TP=0|ip-26-0-175-132]: Local number of parameters: 2.05K (0.00MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=24|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=24|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=53|TP=0|ip-26-0-175-241]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=53|TP=0|ip-26-0-175-241]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=14|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=36|TP=0|ip-26-0-175-165]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=36|TP=0|ip-26-0-175-165]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=26|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=53|TP=0|ip-26-0-175-241]: No checkpoint path provided.
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=36|TP=0|ip-26-0-175-165]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=26|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=26|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=28|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=28|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=28|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=31|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=31|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=13|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=9|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=31|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=15|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-230]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=30|TP=0|ip-26-0-175-132]: Local number of parameters: 0 (0.00MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=30|TP=0|ip-26-0-175-132]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.03MiB Peak reserved: 2.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-230]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=11|TP=0|ip-26-0-171-230]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=30|TP=0|ip-26-0-175-132]: No checkpoint path provided.
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=23|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=23|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default7]:07/06/2024 09:41:38 [INFO|DP=0|PP=23|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=16|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=16|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=16|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=21|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=21|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default5]:07/06/2024 09:41:38 [INFO|DP=0|PP=21|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=17|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=17|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default1]:07/06/2024 09:41:38 [INFO|DP=0|PP=17|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=20|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=20|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default4]:07/06/2024 09:41:38 [INFO|DP=0|PP=20|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=18|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=18|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/06/2024 09:41:38 [INFO|DP=0|PP=18|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=22|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=22|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default6]:07/06/2024 09:41:38 [INFO|DP=0|PP=22|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=19|TP=0|ip-26-0-171-249]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=19|TP=0|ip-26-0-171-249]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/06/2024 09:41:38 [INFO|DP=0|PP=19|TP=0|ip-26-0-171-249]: No checkpoint path provided.
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/06/2024 09:41:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [ZeRO sharding] DP Rank 0 has 145M out of 145M (100.00%) params' optimizer states
[default0]:07/06/2024 09:41:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/06/2024 09:41:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Using `datasets` library
[default0]:07/06/2024 09:41:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default1]:Traceback (most recent call last):
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 233, in <module>
[default1]: trainer = DistributedTrainer(config_file)
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 185, in __init__
[default1]: self.optimizer, self.grad_accumulator = init_optimizer_and_grad_accumulator(
[default1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/helpers.py", line 401, in init_optimizer_and_grad_accumulator
[default1]: param = model.get_parameter(optim_model_param_name)
[default1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 714, in get_parameter
[default1]: mod: torch.nn.Module = self.get_submodule(module_path)
[default1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 681, in get_submodule
[default1]: raise AttributeError(mod._get_name() + " has no "
[default1]:AttributeError: PipelineBlock has no attribute `pp_block`
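The traceback above is the root cause of the run's failure: `torch.nn.Module.get_parameter("a.b.weight")` first resolves the submodule path `"a.b"` via `get_submodule`, which raises `AttributeError` as soon as one path component is missing. A minimal reproduction of that failure mode, using hypothetical stand-in modules rather than nanotron's actual `PipelineBlock`:

```python
# Minimal sketch of the AttributeError in the traceback above.
# get_parameter() splits the name into a module path plus a leaf parameter
# name, then walks the module path with get_submodule(); the walk fails on
# the first missing component with "<Module> has no attribute `<name>`".
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 2))
try:
    # "0" resolves to the Linear; "pp_block" does not exist on it.
    model.get_parameter("0.pp_block.weight")
except AttributeError as e:
    print(e)  # -> Linear has no attribute `pp_block`
```

In the log, the optimizer builder asks the model for a parameter name containing `pp_block`, but the `PipelineBlock` on that rank has no such submodule, so initialization aborts before training starts.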
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:41:39 [WARNING|DP=0|PP=0|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:41:43,366] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501362 closing signal SIGTERM
[2024-07-06 09:41:43,366] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501364 closing signal SIGTERM
[2024-07-06 09:41:43,367] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501365 closing signal SIGTERM
[2024-07-06 09:41:43,367] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501366 closing signal SIGTERM
[2024-07-06 09:41:43,368] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501367 closing signal SIGTERM
[2024-07-06 09:41:43,368] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501368 closing signal SIGTERM
[2024-07-06 09:41:43,369] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 501369 closing signal SIGTERM
[default4]:07/06/2024 09:41:45 [WARNING|DP=0|PP=12|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:45 [WARNING|DP=0|PP=10|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:45 [WARNING|DP=0|PP=11|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:41:45 [WARNING|DP=0|PP=15|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:45 [WARNING|DP=0|PP=13|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:41:45,287] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 501363) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
[default7]:Repo card metadata block was not found. Setting CardData to empty.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:41:43
host : ip-26-0-175-132.ec2.internal
rank : 25 (local_rank: 1)
exitcode : 1 (pid: 501363)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
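The summary above reports `error_file: <N/A>`, so the child's Python traceback is not propagated into the launcher's report. Per the linked elastic errors documentation, torchrun can only capture it when the script's entrypoint is wrapped with the `@record` decorator; a sketch of that pattern (the actual `run_train.py` entrypoint may be structured differently):

```python
# Wrapping the entrypoint with @record lets torchrun write the child's
# traceback to an error file and surface it in the failure summary above,
# instead of "error_file: <N/A>" / "To enable traceback see: ...".
from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # build the trainer and run the training loop


if __name__ == "__main__":
    main()
```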
[default0]:07/06/2024 09:41:45 [WARNING|DP=0|PP=8|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:45 [WARNING|DP=0|PP=14|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:45 [WARNING|DP=0|PP=9|TP=0|ip-26-0-171-230]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
srun: error: ip-26-0-175-132: task 4: Exited with exit code 1
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank16]:[E ProcessGroupNCCL.cpp:1182] [Rank 16] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-175-132.ec2.internal<33098>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04b87fed87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f04b99a5fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f04b99a627b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f04b99a9c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f04b99aa839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f05036aee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f05087b6609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f0508581353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 16] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-175-132.ec2.internal<33098>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04b87fed87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f04b99a5fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f04b99a627b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f04b99a9c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f04b99aa839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f05036aee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f05087b6609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f0508581353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04b87fed87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7f04b9700b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f05036aee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f05087b6609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f0508581353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default4]:07/06/2024 09:41:50 [WARNING|DP=0|PP=20|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:50 [WARNING|DP=0|PP=21|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:50 [WARNING|DP=0|PP=17|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:50 [WARNING|DP=0|PP=19|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:50 [WARNING|DP=0|PP=22|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:41:50 [WARNING|DP=0|PP=23|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:50 [WARNING|DP=0|PP=18|TP=0|ip-26-0-171-249]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:41:52 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [Training Plan] There are 1 training stages
[default0]:07/06/2024 09:41:52 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [Stage Training Stage] start from step 1
[default0]:07/06/2024 09:41:52 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]:
[default0]:07/06/2024 09:41:52 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: [Start training] datetime: 2024-07-06 09:41:52.010876 | mbs: 8 | grad_accum: 128 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default7]:07/06/2024 09:41:52 [WARNING|DP=0|PP=7|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:52 [WARNING|DP=0|PP=5|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:52 [WARNING|DP=0|PP=41|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:52 [WARNING|DP=0|PP=46|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:52 [WARNING|DP=0|PP=2|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:52 [WARNING|DP=0|PP=1|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:41:52 [WARNING|DP=0|PP=4|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:41:52 [WARNING|DP=0|PP=60|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:52 [WARNING|DP=0|PP=50|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:52 [WARNING|DP=0|PP=45|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:52 [WARNING|DP=0|PP=6|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:52 [WARNING|DP=0|PP=3|TP=0|ip-26-0-161-78]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:52 [WARNING|DP=0|PP=59|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:41:52 [WARNING|DP=0|PP=56|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:41:52 [WARNING|DP=0|PP=47|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:41:52 [WARNING|DP=0|PP=40|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:52 [WARNING|DP=0|PP=43|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:41:52 [WARNING|DP=0|PP=55|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:52 [WARNING|DP=0|PP=61|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:52 [WARNING|DP=0|PP=58|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:52 [WARNING|DP=0|PP=57|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/06/2024 09:41:52 [WARNING|DP=0|PP=63|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/06/2024 09:41:52 [WARNING|DP=0|PP=51|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/06/2024 09:41:52 [WARNING|DP=0|PP=49|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/06/2024 09:41:52 [WARNING|DP=0|PP=42|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:41:52 [WARNING|DP=0|PP=44|TP=0|ip-26-0-175-170]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/06/2024 09:41:52 [WARNING|DP=0|PP=52|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/06/2024 09:41:52 [WARNING|DP=0|PP=53|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:52 [WARNING|DP=0|PP=54|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:41:53,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918287 closing signal SIGTERM
[2024-07-06 09:41:53,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918288 closing signal SIGTERM
[2024-07-06 09:41:53,383] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918289 closing signal SIGTERM
[2024-07-06 09:41:53,384] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918290 closing signal SIGTERM
[2024-07-06 09:41:53,385] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918291 closing signal SIGTERM
[2024-07-06 09:41:53,385] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918292 closing signal SIGTERM
[2024-07-06 09:41:53,386] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2918293 closing signal SIGTERM
[2024-07-06 09:41:55,803] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 2918286) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:41:53
host : ip-26-0-171-249.ec2.internal
rank : 16 (local_rank: 0)
exitcode : -6 (pid: 2918286)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 2918286
============================================================
srun: error: ip-26-0-171-249: task 2: Exited with exit code 1
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default1]:[rank33]:[E ProcessGroupNCCL.cpp:1182] [Rank 33] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default1]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default1]:Last error:
[default1]:socketProgress: Connection closed by remote peer ip-26-0-171-249.ec2.internal<44526>
[default1]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1e24037d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f1e251defa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f1e251df27b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f1e251e2c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f1e251e3839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #5: <unknown function> + 0xd3e95 (0x7f1e6eee7e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #6: <unknown function> + 0x8609 (0x7f1e73fef609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #7: clone + 0x43 (0x7f1e73dba353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:terminate called after throwing an instance of 'c10::DistBackendError'
[default1]: what(): [Rank 33] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default1]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default1]:Last error:
[default1]:socketProgress: Connection closed by remote peer ip-26-0-171-249.ec2.internal<44526>
[default1]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1e24037d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f1e251defa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f1e251df27b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f1e251e2c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f1e251e3839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #5: <unknown function> + 0xd3e95 (0x7f1e6eee7e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #6: <unknown function> + 0x8609 (0x7f1e73fef609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #7: clone + 0x43 (0x7f1e73dba353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1e24037d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: <unknown function> + 0xdf6b11 (0x7f1e24f39b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7f1e6eee7e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #3: <unknown function> + 0x8609 (0x7f1e73fef609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #4: clone + 0x43 (0x7f1e73dba353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/06/2024 09:41:57 [WARNING|DP=0|PP=62|TP=0|ip-26-0-175-34]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:41:58,381] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147387 closing signal SIGTERM
[2024-07-06 09:41:58,381] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147389 closing signal SIGTERM
[2024-07-06 09:41:58,381] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147390 closing signal SIGTERM
[2024-07-06 09:41:58,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147391 closing signal SIGTERM
[2024-07-06 09:41:58,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147392 closing signal SIGTERM
[2024-07-06 09:41:58,383] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147393 closing signal SIGTERM
[2024-07-06 09:41:58,383] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2147394 closing signal SIGTERM
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1182] [Rank 8] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-171-249.ec2.internal<44286>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f437d477d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f437e61efa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f437e61f27b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f437e622c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f437e623839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f43c8327e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f43cd42f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f43cd1fa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 8] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-171-249.ec2.internal<44286>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f437d477d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7f437e61efa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7f437e61f27b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7f437e622c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f437e623839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7f43c8327e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7f43cd42f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7f43cd1fa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f437d477d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7f437e379b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f43c8327e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f43cd42f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f43cd1fa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:07/06/2024 09:41:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/06/2024 09:41:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-161-78]: Memory usage: 1382.63MiB. Peak allocated 1382.63MiB. Peak reserved: 1402.00MiB
[2024-07-06 09:42:00,503] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 1 (pid: 2147388) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:41:58
host : ip-26-0-175-165.ec2.internal
rank : 33 (local_rank: 1)
exitcode : -6 (pid: 2147388)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 2147388
============================================================
srun: error: ip-26-0-175-165: task 5: Exited with exit code 1
[2024-07-06 09:42:03,394] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249911 closing signal SIGTERM
[2024-07-06 09:42:03,394] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249912 closing signal SIGTERM
[2024-07-06 09:42:03,395] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249913 closing signal SIGTERM
[2024-07-06 09:42:03,395] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249914 closing signal SIGTERM
[2024-07-06 09:42:03,396] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249915 closing signal SIGTERM
[2024-07-06 09:42:03,396] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249916 closing signal SIGTERM
[2024-07-06 09:42:03,397] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3249917 closing signal SIGTERM
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank0]:[E ProcessGroupNCCL.cpp:1182] [Rank 0] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-175-165.ec2.internal<52262>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd6e3b7bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7fd6e4d22fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7fd6e4d2327b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7fd6e4d26c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7fd6e4d27839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7fd72ea2be95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7fd733b33609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7fd7338fe353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [Rank 0] NCCL watchdog thread terminated with exception: NCCL error: remote process exited or there was a network error, NCCL version 2.19.3
[default0]:ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
[default0]:Last error:
[default0]:socketProgress: Connection closed by remote peer ip-26-0-175-165.ec2.internal<52262>
[default0]:Exception raised from checkForNCCLErrorsInternal at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1436 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd6e3b7bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::checkForNCCLErrorsInternal(std::vector<std::shared_ptr<c10d::NCCLComm>, std::allocator<std::shared_ptr<c10d::NCCLComm> > > const&) + 0x2f3 (0x7fd6e4d22fa3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkAndSetException() + 0x7b (0x7fd6e4d2327b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x17d (0x7fd6e4d26c1d in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7fd6e4d27839 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #5: <unknown function> + 0xd3e95 (0x7fd72ea2be95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #6: <unknown function> + 0x8609 (0x7fd733b33609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #7: clone + 0x43 (0x7fd7338fe353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd6e3b7bd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xdf6b11 (0x7fd6e4a7db11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7fd72ea2be95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7fd733b33609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7fd7338fe353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[2024-07-06 09:42:05,612] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 3249910) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:42:03
host : ip-26-0-171-230.ec2.internal
rank : 8 (local_rank: 0)
exitcode : -6 (pid: 3249910)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 3249910
============================================================
[default4]:Traceback (most recent call last):
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default4]: trainer.train(dataloader)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default4]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default4]: outputs = self.pipeline_engine.train_batch_iter(
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default4]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default4]: output = model(**micro_batch)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default4]: sharded_logits = self.model(
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default4]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default4]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default4]: return self._call_impl(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default4]: return forward_call(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default4]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default4]: pipeline_state.run_communication()
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default4]: recv_activation_tensor = recv_activation()
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default4]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default4]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default4]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default4]: dist.recv(
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default4]: return func(*args, **kwargs)
[default4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default4]: pg.recv([tensor], group_src_rank, tag).wait()
[default4]:torch.distributed.DistBackendError: [4] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '3:4', but store->get('3:4') got error: Connection reset by peer
[default4]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7911132d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0x589518e (0x7f79490ec18e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f79490e69a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f79490e6ce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f79490e7b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f794909cf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f794909cf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f794909cf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f794909cf81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f79122dac69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f79122e1c5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f7912304b60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #12: <unknown function> + 0x5838439 (0x7f794908f439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #13: <unknown function> + 0x5843330 (0x7f794909a330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #14: <unknown function> + 0x58433c5 (0x7f794909a3c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #15: <unknown function> + 0x4e893cc (0x7f79486e03cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #16: <unknown function> + 0x1a08a88 (0x7f794525fa88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #17: <unknown function> + 0x5849a84 (0x7f79490a0a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #18: <unknown function> + 0x584ed35 (0x7f79490a5d35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default4]:frame #19: <unknown function> + 0xc97eee (0x7f795b957eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default4]:frame #20: <unknown function> + 0x413ea4 (0x7f795b0d3ea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default4]:frame #21: <unknown function> + 0x1445a6 (0x55f7a67585a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #22: _PyObject_MakeTpCall + 0x26b (0x55f7a6751a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #23: <unknown function> + 0x150866 (0x55f7a6764866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x55f7a674d142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #25: _PyFunction_Vectorcall + 0x6c (0x55f7a6758a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #26: PyObject_Call + 0xbc (0x55f7a6764f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x55f7a674b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #28: _PyFunction_Vectorcall + 0x6c (0x55f7a6758a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #30: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #32: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #34: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x55f7a6750f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #37: _PyObject_Call_Prepend + 0x69 (0x55f7a6762c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #38: <unknown function> + 0x211239 (0x55f7a6825239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #39: _PyObject_MakeTpCall + 0x26b (0x55f7a6751a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x55f7a674d3e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #41: _PyFunction_Vectorcall + 0x6c (0x55f7a6758a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x55f7a6748c5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #43: _PyFunction_Vectorcall + 0x6c (0x55f7a6758a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #45: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #46: PyObject_Call + 0xbc (0x55f7a6764f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x55f7a674b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #48: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #49: PyObject_Call + 0xbc (0x55f7a6764f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x55f7a674b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #51: _PyFunction_Vectorcall + 0x6c (0x55f7a6758a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x55f7a6751007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #53: _PyObject_Call_Prepend + 0x69 (0x55f7a6762c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #54: <unknown function> + 0x211239 (0x55f7a6825239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #55: PyObject_Call + 0x207 (0x55f7a6765067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x55f7a674b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #57: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x55f7a67498fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #59: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #60: PyObject_Call + 0xbc (0x55f7a6764f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x55f7a674b2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #62: <unknown function> + 0x150582 (0x55f7a6764582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:frame #63: PyObject_Call + 0xbc (0x55f7a6764f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default4]:. This may indicate a possible application crash on rank 0 or a network set up issue.
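The `DistBackendError` in the trace above shows how the NCCL point-to-point communicator is bootstrapped: before the first recv, the rank fetches the `ncclUniqueId` published by the pair's sending rank through the c10d key-value store (keyed `'4:5'` here), and the blocking `get` fails with "Connection reset by peer" once the store host is gone. A minimal runnable sketch of that rendezvous pattern, with a plain in-memory dict standing in for the real `TCPStore` (the class and helper names below are illustrative, not torch's):

```python
# Simulated c10d-style key-value rendezvous. In the real run the store is a
# TCPStore hosted by global rank 0; a dict stands in so this runs anywhere.

class StoreClosed(Exception):
    """Stands in for 'Connection reset by peer' after the store host dies."""

class FakeStore:
    def __init__(self):
        self._kv = {}
        self._alive = True

    def set(self, key, value):
        if not self._alive:
            raise StoreClosed(f"store->set('{key}') got error: Connection reset by peer")
        self._kv[key] = value

    def get(self, key):
        if not self._alive:
            raise StoreClosed(f"store->get('{key}') got error: Connection reset by peer")
        return self._kv[key]

    def crash(self):
        self._alive = False

def p2p_channel_key(src_rank, dst_rank):
    # Matches the 'src:dst' keys visible in the log ('4:5', '5:6', '6:7').
    return f"{src_rank}:{dst_rank}"

# Happy path: sender publishes the NCCL unique id, receiver fetches it.
store = FakeStore()
store.set(p2p_channel_key(4, 5), b"nccl-unique-id-bytes")
fetched = store.get(p2p_channel_key(4, 5))

# Failure path: the store host goes away before the receiver reads its key.
store.crash()
err = None
try:
    store.get(p2p_channel_key(5, 6))
except StoreClosed as exc:
    err = exc
print(err)  # store->get('5:6') got error: Connection reset by peer
```

In the actual failure, every pipeline stage blocks in this `get` at the same time, which is why all ranks report the same error with consecutive keys.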
[default5]:Traceback (most recent call last):
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default5]: trainer.train(dataloader)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default5]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default5]: outputs = self.pipeline_engine.train_batch_iter(
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default5]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default5]: output = model(**micro_batch)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default5]: sharded_logits = self.model(
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default5]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default5]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default5]: return self._call_impl(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default5]: return forward_call(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default5]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default5]: pipeline_state.run_communication()
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default5]: recv_activation_tensor = recv_activation()
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default5]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default5]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default5]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default5]: dist.recv(
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default5]: return func(*args, **kwargs)
[default5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default5]: pg.recv([tensor], group_src_rank, tag).wait()
[default5]:torch.distributed.DistBackendError: [5] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '4:5', but store->get('4:5') got error: Connection reset by peer
[default5]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f9194dadd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: <unknown function> + 0x589518e (0x7f91ccd6718e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f91ccd619a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f91ccd61ce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f91ccd62b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f91ccd17f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f91ccd17f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f91ccd17f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f91ccd17f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f9195f55c69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f9195f5cc5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f9195f7fb60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #12: <unknown function> + 0x5838439 (0x7f91ccd0a439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #13: <unknown function> + 0x5843330 (0x7f91ccd15330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #14: <unknown function> + 0x58433c5 (0x7f91ccd153c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #15: <unknown function> + 0x4e893cc (0x7f91cc35b3cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #16: <unknown function> + 0x1a08a88 (0x7f91c8edaa88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #17: <unknown function> + 0x5849a84 (0x7f91ccd1ba84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #18: <unknown function> + 0x584ed35 (0x7f91ccd20d35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default5]:frame #19: <unknown function> + 0xc97eee (0x7f91df5d2eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default5]:frame #20: <unknown function> + 0x413ea4 (0x7f91ded4eea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default5]:frame #21: <unknown function> + 0x1445a6 (0x55c7d2d1f5a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #22: _PyObject_MakeTpCall + 0x26b (0x55c7d2d18a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #23: <unknown function> + 0x150866 (0x55c7d2d2b866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x55c7d2d14142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #25: _PyFunction_Vectorcall + 0x6c (0x55c7d2d1fa2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #26: PyObject_Call + 0xbc (0x55c7d2d2bf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x55c7d2d122b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #28: _PyFunction_Vectorcall + 0x6c (0x55c7d2d1fa2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #30: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #32: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #34: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x55c7d2d17f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #37: _PyObject_Call_Prepend + 0x69 (0x55c7d2d29c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #38: <unknown function> + 0x211239 (0x55c7d2dec239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #39: _PyObject_MakeTpCall + 0x26b (0x55c7d2d18a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x55c7d2d143e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #41: _PyFunction_Vectorcall + 0x6c (0x55c7d2d1fa2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x55c7d2d0fc5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #43: _PyFunction_Vectorcall + 0x6c (0x55c7d2d1fa2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #45: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #46: PyObject_Call + 0xbc (0x55c7d2d2bf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x55c7d2d122b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #48: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #49: PyObject_Call + 0xbc (0x55c7d2d2bf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x55c7d2d122b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #51: _PyFunction_Vectorcall + 0x6c (0x55c7d2d1fa2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x55c7d2d18007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #53: _PyObject_Call_Prepend + 0x69 (0x55c7d2d29c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #54: <unknown function> + 0x211239 (0x55c7d2dec239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #55: PyObject_Call + 0x207 (0x55c7d2d2c067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x55c7d2d122b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #57: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x55c7d2d108fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #59: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #60: PyObject_Call + 0xbc (0x55c7d2d2bf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x55c7d2d122b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #62: <unknown function> + 0x150582 (0x55c7d2d2b582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:frame #63: PyObject_Call + 0xbc (0x55c7d2d2bf1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default5]:. This may indicate a possible application crash on rank 0 or a network set up issue.
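The Python frames above show nanotron's two-phase point-to-point receive: `_recv_meta` first receives a small metadata message so the receiver knows what buffer to allocate, and only then is the tensor payload received via `irecv_tensors`. A runnable sketch of that ordering using a stdlib queue in place of `dist.recv`/`dist.irecv` (the metadata fields and function names here are illustrative, not nanotron's actual wire format):

```python
from queue import Queue

# One channel carries (meta, payload) pairs; the receiver must read the
# metadata first so it can size the destination buffer before the payload.
channel = Queue()

def send_tensor(chan, values):
    meta = {"shape": (len(values),), "dtype": "float32"}  # illustrative meta
    chan.put(("meta", meta))
    chan.put(("payload", list(values)))

def recv_tensor(chan):
    kind, meta = chan.get()            # phase 1: _recv_meta analogue
    assert kind == "meta"
    buffer = [0.0] * meta["shape"][0]  # allocate from the received metadata
    kind, payload = chan.get()         # phase 2: payload receive
    assert kind == "payload"
    buffer[:] = payload
    return meta, buffer

send_tensor(channel, [1.0, 2.0, 3.0])
meta, out = recv_tensor(channel)
print(meta["shape"], out)  # (3,) [1.0, 2.0, 3.0]
```

Because the metadata recv is the first communication on the channel, it is also where the NCCL communicator is lazily created — which is why every traceback dies inside `_recv_meta` rather than in the tensor transfer itself.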
[default6]:Traceback (most recent call last):
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default6]: trainer.train(dataloader)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default6]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default6]: outputs = self.pipeline_engine.train_batch_iter(
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default6]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default6]: output = model(**micro_batch)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default6]: sharded_logits = self.model(
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default6]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default6]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default6]: return self._call_impl(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default6]: return forward_call(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default6]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default6]: pipeline_state.run_communication()
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default6]: recv_activation_tensor = recv_activation()
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default6]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default6]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default6]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default6]: dist.recv(
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default6]: return func(*args, **kwargs)
[default6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default6]: pg.recv([tensor], group_src_rank, tag).wait()
[default6]:torch.distributed.DistBackendError: [6] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '5:6', but store->get('5:6') got error: Connection reset by peer
[default6]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fbf77734d87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: <unknown function> + 0x589518e (0x7fbfaf6ee18e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7fbfaf6e89a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7fbfaf6e8ce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7fbfaf6e9b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fbfaf69ef81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fbfaf69ef81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fbfaf69ef81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fbfaf69ef81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7fbf788dcc69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7fbf788e3c5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7fbf78906b60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #12: <unknown function> + 0x5838439 (0x7fbfaf691439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #13: <unknown function> + 0x5843330 (0x7fbfaf69c330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:Traceback (most recent call last):
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default7]: trainer.train(dataloader)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 430, in train
[default7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 459, in training_step
[default7]: outputs = self.pipeline_engine.train_batch_iter(
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 187, in train_batch_iter
[default7]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default7]: output = model(**micro_batch)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 890, in forward
[default7]: sharded_logits = self.model(
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default7]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default7]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default7]: return self._call_impl(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[default7]: return forward_call(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default7]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default7]: pipeline_state.run_communication()
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default7]: recv_activation_tensor = recv_activation()
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default7]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default7]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default7]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 246, in _recv_meta
[default7]: dist.recv(
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
[default7]: return func(*args, **kwargs)
[default7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1706, in recv
[default7]: pg.recv([tensor], group_src_rank, tag).wait()
[default7]:torch.distributed.DistBackendError: [7] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '6:7', but store->get('6:7') got error: Connection reset by peer
[default7]:Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f93846cbd87 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0x589518e (0x7f93bc68518e in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7f93bc67f9a0 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7f93bc67fce2 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7f93bc680b11 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #14: <unknown function> + 0x58433c5 (0x7fbfaf69c3c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #15: <unknown function> + 0x4e893cc (0x7fbfaece23cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #16: <unknown function> + 0x1a08a88 (0x7fbfab861a88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f93bc635f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f93bc635f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f93bc635f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #17: <unknown function> + 0x5849a84 (0x7fbfaf6a2a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #8: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7f93bc635f81 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #9: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7f9385873c69 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #10: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7f938587ac5b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #11: c10d::ProcessGroupNCCL::recv(std::vector<at::Tensor, std::allocator<at::Tensor> >&, int, int) + 0x550 (0x7f938589db60 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #12: <unknown function> + 0x5838439 (0x7f93bc628439 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #13: <unknown function> + 0x5843330 (0x7f93bc633330 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #14: <unknown function> + 0x58433c5 (0x7f93bc6333c5 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #15: <unknown function> + 0x4e893cc (0x7f93bbc793cc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #16: <unknown function> + 0x1a08a88 (0x7f93b87f8a88 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #17: <unknown function> + 0x5849a84 (0x7f93bc639a84 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #18: <unknown function> + 0x584ed35 (0x7fbfaf6a7d35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default7]:frame #18: <unknown function> + 0x584ed35 (0x7f93bc63ed35 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
[default6]:frame #19: <unknown function> + 0xc97eee (0x7fbfc1f59eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default6]:frame #20: <unknown function> + 0x413ea4 (0x7fbfc16d5ea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default6]:frame #21: <unknown function> + 0x1445a6 (0x55f59a4415a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #19: <unknown function> + 0xc97eee (0x7f93ceef0eee in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default6]:frame #22: _PyObject_MakeTpCall + 0x26b (0x55f59a43aa6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #20: <unknown function> + 0x413ea4 (0x7f93ce66cea4 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
[default7]:frame #21: <unknown function> + 0x1445a6 (0x561d618895a6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #23: <unknown function> + 0x150866 (0x55f59a44d866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #22: _PyObject_MakeTpCall + 0x26b (0x561d61882a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #23: <unknown function> + 0x150866 (0x561d61895866 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x561d6187e142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #24: _PyEval_EvalFrameDefault + 0x4c12 (0x55f59a436142 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #25: _PyFunction_Vectorcall + 0x6c (0x561d61889a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #25: _PyFunction_Vectorcall + 0x6c (0x55f59a441a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #26: PyObject_Call + 0xbc (0x561d61895f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #26: PyObject_Call + 0xbc (0x55f59a44df1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x561d6187c2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #27: _PyEval_EvalFrameDefault + 0x2d83 (0x55f59a4342b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #28: _PyFunction_Vectorcall + 0x6c (0x561d61889a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #28: _PyFunction_Vectorcall + 0x6c (0x55f59a441a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #30: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #32: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #34: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #29: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #30: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x55f59a439f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #37: _PyObject_Call_Prepend + 0x69 (0x55f59a44bc39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #31: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #38: <unknown function> + 0x211239 (0x55f59a50e239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #39: _PyObject_MakeTpCall + 0x26b (0x55f59a43aa6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x55f59a4363e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #32: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #33: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #34: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #41: _PyFunction_Vectorcall + 0x6c (0x55f59a441a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x55f59a431c5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #35: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #36: _PyObject_FastCallDictTstate + 0xd0 (0x561d61881f50 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #37: _PyObject_Call_Prepend + 0x69 (0x561d61893c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #38: <unknown function> + 0x211239 (0x561d61956239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #39: _PyObject_MakeTpCall + 0x26b (0x561d61882a6b in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #40: _PyEval_EvalFrameDefault + 0x4eb6 (0x561d6187e3e6 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #41: _PyFunction_Vectorcall + 0x6c (0x561d61889a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #42: _PyEval_EvalFrameDefault + 0x72c (0x561d61879c5c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #43: _PyFunction_Vectorcall + 0x6c (0x561d61889a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #45: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #43: _PyFunction_Vectorcall + 0x6c (0x55f59a441a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #46: PyObject_Call + 0xbc (0x561d61895f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x561d6187c2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #48: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #49: PyObject_Call + 0xbc (0x561d61895f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x561d6187c2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #51: _PyFunction_Vectorcall + 0x6c (0x561d61889a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #44: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #45: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #46: PyObject_Call + 0xbc (0x55f59a44df1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x561d61882007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #47: _PyEval_EvalFrameDefault + 0x2d83 (0x55f59a4342b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #48: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #49: PyObject_Call + 0xbc (0x55f59a44df1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #50: _PyEval_EvalFrameDefault + 0x2d83 (0x55f59a4342b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #51: _PyFunction_Vectorcall + 0x6c (0x55f59a441a2c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #53: _PyObject_Call_Prepend + 0x69 (0x561d61893c39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #52: _PyObject_FastCallDictTstate + 0x187 (0x55f59a43a007 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #53: _PyObject_Call_Prepend + 0x69 (0x55f59a44bc39 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #54: <unknown function> + 0x211239 (0x55f59a50e239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #55: PyObject_Call + 0x207 (0x55f59a44e067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x55f59a4342b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #57: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #54: <unknown function> + 0x211239 (0x561d61956239 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x55f59a4328fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #55: PyObject_Call + 0x207 (0x561d61896067 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #56: _PyEval_EvalFrameDefault + 0x2d83 (0x561d6187c2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #57: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #58: _PyEval_EvalFrameDefault + 0x13ca (0x561d6187a8fa in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #59: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #59: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #60: PyObject_Call + 0xbc (0x561d61895f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x561d6187c2b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #62: <unknown function> + 0x150582 (0x561d61895582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:frame #63: PyObject_Call + 0xbc (0x561d61895f1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default7]:. This may indicate a possible application crash on rank 0 or a network set up issue.
[default6]:frame #60: PyObject_Call + 0xbc (0x55f59a44df1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #61: _PyEval_EvalFrameDefault + 0x2d83 (0x55f59a4342b3 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #62: <unknown function> + 0x150582 (0x55f59a44d582 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:frame #63: PyObject_Call + 0xbc (0x55f59a44df1c in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10)
[default6]:. This may indicate a possible application crash on rank 0 or a network set up issue.
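The DistBackendError above comes from `c10d::ProcessGroupNCCL::broadcastUniqueNCCLID`: rank 0 publishes the NCCL unique id into the TCPStore under a key like '6:7', and the peer rank blocks in `store->get()` until the key appears; when the writer crashes first, the reader's `get` surfaces "Connection reset by peer". The following is a minimal, purely illustrative in-process sketch of that blocking key-value handshake (ToyStore is a made-up stand-in, not the c10d TCPStore implementation):

```python
import threading

class ToyStore:
    """Toy blocking key-value store mimicking the c10d get()/set() handshake."""
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def set(self, key, value):
        # Writer side: publish the value and wake any blocked readers.
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()

    def get(self, key, timeout=5.0):
        # Reader side: block until the key appears (c10d raises a connection
        # error instead of a timeout when the store's owner process dies).
        with self._cond:
            if not self._cond.wait_for(lambda: key in self._data, timeout):
                raise TimeoutError(f"store->get('{key}') timed out")
            return self._data[key]

store = ToyStore()
# "Rank 0" publishes the unique id under key '6:7'; the peer retrieves it.
threading.Thread(target=lambda: store.set("6:7", b"ncclUniqueId-bytes")).start()
print(store.get("6:7"))
```

In the real failure above, the store owner (the rank 0 agent) had already died, so the blocked `get` returned a connection error instead of the unique id.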
srun: error: ip-26-0-171-230: task 1: Exited with exit code 1
[2024-07-06 09:42:08,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 235373 closing signal SIGTERM
[2024-07-06 09:42:08,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 235374 closing signal SIGTERM
[2024-07-06 09:42:08,382] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 235375 closing signal SIGTERM
[2024-07-06 09:42:09,811] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 235372) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-06_09:42:08
host : ip-26-0-161-78.ec2.internal
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 235376)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2024-07-06_09:42:08
host : ip-26-0-161-78.ec2.internal
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 235377)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2024-07-06_09:42:08
host : ip-26-0-161-78.ec2.internal
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 235378)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2024-07-06_09:42:08
host : ip-26-0-161-78.ec2.internal
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 235379)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-06_09:42:08
host : ip-26-0-161-78.ec2.internal
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 235372)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 235372
============================================================
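In the torchrun failure summary above, a negative `exitcode` means the worker was killed by that signal number (here -6, i.e. SIGABRT), while a positive value is an ordinary process exit status. A quick way to decode such codes (helper name is ours, just for illustration):

```python
import signal

def decode_exitcode(code: int) -> str:
    """Translate a torchrun/elastic worker exit code into a readable cause."""
    if code < 0:
        # Negative: the process died from signal -code (e.g. -6 -> SIGABRT).
        return f"killed by {signal.Signals(-code).name}"
    return f"exited with status {code}"

print(decode_exitcode(-6))  # killed by SIGABRT
print(decode_exitcode(1))   # exited with status 1
```

This matches the summary: rank 0 shows `exitcode : -6` with "Signal 6 (SIGABRT) received", while ranks 4-7 show plain `exitcode : 1`.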
srun: error: ip-26-0-161-78: task 0: Exited with exit code 1
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/06/2024 09:42:13 [WARNING|DP=0|PP=48|TP=0|ip-26-0-175-241]: Repo card metadata block was not found. Setting CardData to empty.
[2024-07-06 09:42:13,181] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-241.ec2.internal_1345744_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[2024-07-06 09:42:13,270] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-170.ec2.internal_3655288_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[2024-07-06 09:42:13,358] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-34.ec2.internal_1215796_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[2024-07-06 09:42:13,393] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345815 closing signal SIGTERM
[2024-07-06 09:42:13,393] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345816 closing signal SIGTERM
[2024-07-06 09:42:13,394] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345817 closing signal SIGTERM
[2024-07-06 09:42:13,394] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345818 closing signal SIGTERM
[2024-07-06 09:42:13,394] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345819 closing signal SIGTERM
[2024-07-06 09:42:13,395] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345820 closing signal SIGTERM
[2024-07-06 09:42:13,395] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345821 closing signal SIGTERM
[2024-07-06 09:42:13,396] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1345822 closing signal SIGTERM
[2024-07-06 09:42:13,397] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215866 closing signal SIGTERM
[2024-07-06 09:42:13,398] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215867 closing signal SIGTERM
[2024-07-06 09:42:13,398] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215868 closing signal SIGTERM
[2024-07-06 09:42:13,399] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215869 closing signal SIGTERM
[2024-07-06 09:42:13,400] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215870 closing signal SIGTERM
[2024-07-06 09:42:13,400] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215871 closing signal SIGTERM
[2024-07-06 09:42:13,401] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215872 closing signal SIGTERM
[2024-07-06 09:42:13,401] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1215873 closing signal SIGTERM
[2024-07-06 09:42:13,405] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655358 closing signal SIGTERM
[2024-07-06 09:42:13,405] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655359 closing signal SIGTERM
[2024-07-06 09:42:13,406] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655360 closing signal SIGTERM
[2024-07-06 09:42:13,406] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655361 closing signal SIGTERM
[2024-07-06 09:42:13,407] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655362 closing signal SIGTERM
[2024-07-06 09:42:13,407] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655363 closing signal SIGTERM
[2024-07-06 09:42:13,408] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655364 closing signal SIGTERM
[2024-07-06 09:42:13,408] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3655365 closing signal SIGTERM
[2024-07-06 09:42:15,430] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-34.ec2.internal_1215796_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
result = agent.run()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
result = f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
result = self._invoke_run(role)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 900, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1083, in num_nodes_waiting
self._state_holder.sync()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 409, in sync
get_response = self._backend.get_state()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
base64_state: bytes = self._call_store("get", self._key)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
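The two stacked tracebacks above ("The above exception was the direct cause of the following exception") are ordinary Python exception chaining: `_call_store` catches the low-level `DistNetworkError` from the store and re-raises it as a `RendezvousConnectionError` via `raise ... from exc`. A minimal sketch of that pattern (the exception class here is a local stand-in, and `BrokenPipeError` substitutes for the real `DistNetworkError`):

```python
class RendezvousConnectionError(Exception):
    """Local stand-in for the elastic rendezvous connection error."""

def call_store(op: str):
    # Simulate the underlying TCPStore call failing during shutdown.
    try:
        raise BrokenPipeError("Broken pipe")
    except BrokenPipeError as exc:
        # `from exc` sets __cause__, which produces the chained
        # "direct cause" traceback layout seen in the log.
        raise RendezvousConnectionError(
            "The connection to the C10d store has failed. "
            "See inner exception for details.") from exc

try:
    call_store("get")
except RendezvousConnectionError as err:
    print(type(err.__cause__).__name__)  # BrokenPipeError
```

Printing the outer exception with a full traceback reproduces the same two-part layout as the agent logs above.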
srun: error: ip-26-0-175-34: task 3: Exited with exit code 1
[2024-07-06 09:42:15,830] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-241.ec2.internal_1345744_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
result = agent.run()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
result = f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 900, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1083, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 409, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
[2024-07-06 09:42:15,835] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-26-0-175-170.ec2.internal_3655288_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
result = agent.run()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
result = f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
result = self._invoke_run(role)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 900, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1083, in num_nodes_waiting
self._state_holder.sync()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 409, in sync
get_response = self._backend.get_state()
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
base64_state: bytes = self._call_store("get", self._key)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-175-241: task 7: Exited with exit code 1
srun: error: ip-26-0-175-170: task 6: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.