========================
START TIME: Tue Jul 2 19:23:43 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
M src/nanotron/models/llama.py
M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 19:23:45.992000 140235396953920 torch/distributed/run.py:757]
W0702 19:23:45.992000 140235396953920 torch/distributed/run.py:757] *****************************************
W0702 19:23:45.992000 140235396953920 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 19:23:45.992000 140235396953920 torch/distributed/run.py:757] *****************************************
W0702 19:23:45.997000 140272733103936 torch/distributed/run.py:757]
W0702 19:23:45.997000 140272733103936 torch/distributed/run.py:757] *****************************************
W0702 19:23:45.997000 140272733103936 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 19:23:45.997000 140272733103936 torch/distributed/run.py:757] *****************************************
[default0]:07/02/2024 19:24:03 [WARNING|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Vocab Size Padding] Padded vocab (size: 50257) with 3 dummy tokens (new size: 50260)
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config:
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: run='%date_%jobid',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: seed=42,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: step=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: consumed_train_samples=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: benchmark_csv_path=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: ignore_sanity_checks=True),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: parallelism=ParallelismArgs(dp=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pp=4,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp=4,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f1897630910>,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp_linear_async_communication=False,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: expert_parallel_size=1),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: eos_token_id=2,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_act='silu',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_size=2048,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: initializer_range=0.02,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: intermediate_size=4096,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: is_llama_config=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: max_position_embeddings=4096,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_attention_heads=32,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_hidden_layers=24,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_key_value_heads=32,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pad_token_id=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pretraining_tp=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rms_norm_eps=1e-05,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_scaling=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_theta=10000.0,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tie_word_embeddings=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: use_cache=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: vocab_size=50260),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: init_method=RandomInit(std=0.025),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dtype=torch.bfloat16,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: make_vocab_size_divisible_by=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: ddp_bucket_cap_mb=25),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer_revision=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer_max_length=None),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoint_interval=100000,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: save_initial_state=False,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: resume_checkpoint_path=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: log_level_replica='info',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: iteration_step_info_interval=1),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: train_steps=20,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: micro_batch_size=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: batch_accumulation_per_replica=1024,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: val_check_interval=-1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: limit_val_batches=0,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: limit_test_batches=0),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: adam_beta1=0.9,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: adam_beta2=0.95,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: torch_adam_is_fused=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: name='adamW'),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: zero_stage=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: weight_decay=0.01,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: clip_grad=1.0,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: accumulate_grad_in_fp32=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_warmup_steps=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_warmup_style='linear',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_style='linear',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_steps=19,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_starting_step=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: min_decay_lr=1e-05)),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: start_training_step=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hf_dataset_splits='train',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hf_dataset_config_name=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dataset_overwrite_cache=False,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: text_column_name='text'),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: seed=42,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_loading_workers=32))],
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-4_pp-4_mbz-1')),
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lighteval=None)
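The ParallelismArgs above use dp=1, tp=4, pp=4, which accounts for the 16 GPUs in the run name (llama-1B/16_GPUS/dp-1_tp-4_pp-4_mbz-1). A minimal sketch of that sanity check:

```python
# Sanity check: the product of the parallelism degrees gives the world size.
# Values copied from the ParallelismArgs dump above.
dp, tp, pp = 1, 4, 4
world_size = dp * tp * pp
assert world_size == 16  # matches the 16 GPUs in the run name
print(f"dp={dp} x tp={tp} x pp={pp} -> {world_size} ranks")
```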
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Model Config:
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: eos_token_id=2,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_act='silu',
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_size=2048,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: initializer_range=0.02,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: intermediate_size=4096,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: is_llama_config=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: max_position_embeddings=4096,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_attention_heads=32,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_hidden_layers=24,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_key_value_heads=32,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pad_token_id=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pretraining_tp=1,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rms_norm_eps=1e-05,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_scaling=None,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_theta=10000.0,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tie_word_embeddings=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: use_cache=True,
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: vocab_size=50260)
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Building model..
[default0]:07/02/2024 19:24:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Setting PP block ranks...
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: Local number of parameters: 67.7M (129.12MiB)
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=1|ip-26-0-171-88]: Local number of parameters: 62.9M (120.05MiB)
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=1|ip-26-0-171-88]: [After model building] Memory usage: 126.06MiB. Peak allocated: 128.09MiB Peak reserved: 130.00MiB
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 134.05MiB. Peak allocated: 136.08MiB Peak reserved: 138.00MiB
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=1|ip-26-0-171-88]: No checkpoint path provided.
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=3|ip-26-0-171-88]: Local number of parameters: 67.7M (129.12MiB)
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=3|ip-26-0-171-88]: [After model building] Memory usage: 134.05MiB. Peak allocated: 136.08MiB Peak reserved: 138.00MiB
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=3|ip-26-0-171-88]: No checkpoint path provided.
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=2|ip-26-0-171-88]: Local number of parameters: 62.9M (120.05MiB)
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=2|ip-26-0-171-88]: [After model building] Memory usage: 126.06MiB. Peak allocated: 128.09MiB Peak reserved: 130.00MiB
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=2|ip-26-0-171-88]: No checkpoint path provided.
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=2|ip-26-0-171-88]: Local number of parameters: 67.7M (129.12MiB)
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=2|ip-26-0-171-88]: [After model building] Memory usage: 134.05MiB. Peak allocated: 136.08MiB Peak reserved: 138.00MiB
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=2|ip-26-0-171-88]: No checkpoint path provided.
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: Local number of parameters: 62.9M (120.05MiB)
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 126.06MiB. Peak allocated: 128.09MiB Peak reserved: 130.00MiB
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=3|ip-26-0-171-88]: Local number of parameters: 62.9M (120.05MiB)
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=3|ip-26-0-171-88]: [After model building] Memory usage: 126.06MiB. Peak allocated: 128.09MiB Peak reserved: 130.00MiB
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=2|TP=3|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=1|ip-26-0-171-88]: Local number of parameters: 67.7M (129.12MiB)
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=1|ip-26-0-171-88]: [After model building] Memory usage: 134.05MiB. Peak allocated: 136.08MiB Peak reserved: 138.00MiB
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=3|TP=1|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=1|ip-26-0-171-62]: Local number of parameters: 73.4M (140.05MiB)
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=3|ip-26-0-171-62]: Local number of parameters: 99.2M (189.14MiB)
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=3|ip-26-0-171-62]: [After model building] Memory usage: 197.07MiB. Peak allocated: 199.10MiB Peak reserved: 200.00MiB
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: Local number of parameters: 73.4M (140.05MiB)
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 147.07MiB. Peak allocated: 149.10MiB Peak reserved: 150.00MiB
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=1|ip-26-0-171-62]: [After model building] Memory usage: 147.07MiB. Peak allocated: 149.10MiB Peak reserved: 150.00MiB
[default4]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default3]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=3|ip-26-0-171-62]: No checkpoint path provided.
[default5]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=1|ip-26-0-171-62]: No checkpoint path provided.
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=2|ip-26-0-171-62]: Local number of parameters: 99.2M (189.14MiB)
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=2|ip-26-0-171-62]: [After model building] Memory usage: 197.07MiB. Peak allocated: 199.10MiB Peak reserved: 200.00MiB
[default2]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=2|ip-26-0-171-62]: No checkpoint path provided.
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=2|ip-26-0-171-62]: Local number of parameters: 73.4M (140.05MiB)
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=2|ip-26-0-171-62]: [After model building] Memory usage: 147.07MiB. Peak allocated: 149.10MiB Peak reserved: 150.00MiB
[default6]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=2|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Total number of parameters: 1.21G (2313.42MiB)
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Local number of parameters: 99.2M (189.14MiB)
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 197.07MiB. Peak allocated: 199.10MiB Peak reserved: 200.00MiB
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Parametrizing model parameters using StandardParametrizator
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=1|ip-26-0-171-62]: Local number of parameters: 99.2M (189.14MiB)
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=1|ip-26-0-171-62]: [After model building] Memory usage: 197.07MiB. Peak allocated: 199.10MiB Peak reserved: 200.00MiB
[default1]:07/02/2024 19:24:18 [INFO|DP=0|PP=0|TP=1|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=3|ip-26-0-171-62]: Local number of parameters: 73.4M (140.05MiB)
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=3|ip-26-0-171-62]: [After model building] Memory usage: 147.07MiB. Peak allocated: 149.10MiB Peak reserved: 150.00MiB
[default7]:07/02/2024 19:24:18 [INFO|DP=0|PP=1|TP=3|ip-26-0-171-62]: No checkpoint path provided.
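The per-rank "Local number of parameters" lines above are consistent with the 1.21G total and the 2313.42MiB of bf16 weights (2 bytes per parameter). A rough check of that arithmetic, using the per-stage values from the log (each pipeline stage is sharded over tp=4 ranks):

```python
# Per-TP-rank parameter counts (millions) copied from the log, one entry per PP stage.
per_stage_local_params_M = {0: 99.2, 1: 73.4, 2: 62.9, 3: 67.7}
tp = 4

total_params_M = sum(v * tp for v in per_stage_local_params_M.values())
print(f"total ~= {total_params_M / 1000:.2f}G params")   # ~1.21G, as logged

# bf16 stores 2 bytes per parameter, reproducing the ~2313 MiB figure.
total_mib = total_params_M * 1e6 * 2 / 2**20
print(f"bf16 weights ~= {total_mib:.0f} MiB")             # ~2313 MiB
```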
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] DP Rank 0 has 99.2M out of 99.2M (100.00%) params' optimizer states
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Using `datasets` library
[default0]:07/02/2024 19:24:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/02/2024 19:24:21 [WARNING|DP=0|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] There are 1 training stages
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Stage Training Stage] start from step 1
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Start training] datetime: 2024-07-02 19:24:21.953192 | mbs: 1 | grad_accum: 1024 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 19:24:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 953.61MiB. Peak allocated 953.61MiB. Peak reserved: 960.00MiB
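The [Start training] line reports mbs=1, grad_accum=1024 and sequence_length=4096 with dp=1; the global_batch_size of 1024 and the ~4.19M consumed_tokens per iteration reported below follow directly from those values. A minimal sketch of that arithmetic:

```python
# Values copied from the [Start training] line above.
micro_batch_size = 1
batch_accumulation_per_replica = 1024   # grad_accum
dp = 1
sequence_length = 4096

global_batch_size = micro_batch_size * batch_accumulation_per_replica * dp
tokens_per_step = global_batch_size * sequence_length
print(global_batch_size)                  # 1024
print(f"{tokens_per_step / 1e6:.2f}M")    # 4.19M tokens per iteration
```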
[default7]:07/02/2024 19:24:22 [WARNING|DP=0|PP=3|TP=3|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 19:24:22 [WARNING|DP=0|PP=2|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 19:24:22 [WARNING|DP=0|PP=2|TP=2|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 19:24:22 [WARNING|DP=0|PP=2|TP=3|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 19:24:22 [WARNING|DP=0|PP=3|TP=1|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 19:24:22 [WARNING|DP=0|PP=0|TP=2|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 19:24:22 [WARNING|DP=0|PP=0|TP=1|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 19:24:22 [WARNING|DP=0|PP=1|TP=2|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 19:24:22 [WARNING|DP=0|PP=2|TP=1|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 19:24:22 [WARNING|DP=0|PP=3|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 19:24:22 [WARNING|DP=0|PP=3|TP=2|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 19:24:22 [WARNING|DP=0|PP=1|TP=1|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 19:24:22 [WARNING|DP=0|PP=1|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 19:24:22 [WARNING|DP=0|PP=1|TP=3|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 19:24:22 [WARNING|DP=0|PP=0|TP=3|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default5]: warnings.warn(
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default7]: warnings.warn(
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default4]: warnings.warn(
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default1]: warnings.warn(
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default6]: warnings.warn(
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default2]: warnings.warn(
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default0]: warnings.warn(
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default3]: warnings.warn(
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default6]: warnings.warn(
[default0]:07/02/2024 19:25:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1021.15MiB. Peak allocated 3934.92MiB. Peak reserved: 4010.00MiB
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default0]: warnings.warn(
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default2]: warnings.warn(
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default5]: warnings.warn(
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default1]: warnings.warn(
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default7]: warnings.warn(
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default3]: warnings.warn(
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default4]: warnings.warn(
[default0]:07/02/2024 19:25:44 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 1777.72MiB. Peak reserved: 4010.00MiB
[default4]:07/02/2024 19:25:44 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 82.1K | tokens_per_sec: 51.1K | tokens_per_sec_per_gpu: 3.19K | global_batch_size: 1.02K | lm_loss: 11.2 | lr: 0.0001 | model_tflops_per_gpu: 29 | hardware_tflops_per_gpu: 29 | grad_norm: 10.9 | cuda_memory_allocated: 1.29G | cuda_max_memory_reserved: 1.94G | hd_total_memory_tb: 312G | hd_used_memory_tb: 67.8G | hd_free_memory_tb: 244G
[default0]:07/02/2024 19:26:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 4501.62MiB. Peak reserved: 4648.00MiB
[default4]:07/02/2024 19:26:30 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 46K | tokens_per_sec: 91.2K | tokens_per_sec_per_gpu: 5.7K | global_batch_size: 1.02K | lm_loss: 11.2 | lr: 9.53e-05 | model_tflops_per_gpu: 51.7 | hardware_tflops_per_gpu: 51.7 | grad_norm: 11 | cuda_memory_allocated: 1.29G | cuda_max_memory_reserved: 2.35G | hd_total_memory_tb: 312G | hd_used_memory_tb: 67.8G | hd_free_memory_tb: 244G
[default0]:07/02/2024 19:26:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 1777.74MiB. Peak reserved: 4648.00MiB
[default0]:07/02/2024 19:27:23 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 4501.62MiB. Peak reserved: 4648.00MiB
[default4]:07/02/2024 19:27:23 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 52.7K | tokens_per_sec: 79.6K | tokens_per_sec_per_gpu: 4.97K | global_batch_size: 1.02K | lm_loss: 9.83 | lr: 9.05e-05 | model_tflops_per_gpu: 45.1 | hardware_tflops_per_gpu: 45.1 | grad_norm: 44.3 | cuda_memory_allocated: 1.29G | cuda_max_memory_reserved: 2.35G | hd_total_memory_tb: 312G | hd_used_memory_tb: 67.8G | hd_free_memory_tb: 244G
[default0]:07/02/2024 19:27:23 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 1777.74MiB. Peak reserved: 4648.00MiB
[default0]:STAGE:2024-07-02 19:27:23 3765423:3765423 ActivityProfilerController.cpp:314] Completed Stage: Warm Up
[default0]:07/02/2024 19:28:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 4501.62MiB. Peak reserved: 4648.00MiB
[default4]:07/02/2024 19:28:31 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 68.2K | tokens_per_sec: 61.5K | tokens_per_sec_per_gpu: 3.84K | global_batch_size: 1.02K | lm_loss: 12.1 | lr: 8.58e-05 | model_tflops_per_gpu: 34.9 | hardware_tflops_per_gpu: 34.9 | grad_norm: 24.8 | cuda_memory_allocated: 1.29G | cuda_max_memory_reserved: 2.35G | hd_total_memory_tb: 312G | hd_used_memory_tb: 67.8G | hd_free_memory_tb: 244G
[default0]:07/02/2024 19:28:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 1777.74MiB. Peak reserved: 4648.00MiB
[default4]:07/02/2024 19:29:39 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 67.9K | tokens_per_sec: 61.8K | tokens_per_sec_per_gpu: 3.86K | global_batch_size: 1.02K | lm_loss: 10.1 | lr: 8.11e-05 | model_tflops_per_gpu: 35 | hardware_tflops_per_gpu: 35 | grad_norm: 11.4
[default0]:07/02/2024 19:29:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 1777.72MiB. Peak allocated 4501.62MiB. Peak reserved: 4648.00MiB
[default4]:07/02/2024 19:30:48 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 68.3K | tokens_per_sec: 61.4K | tokens_per_sec_per_gpu: 3.84K | global_batch_size: 1.02K | lm_loss: 9.39 | lr: 7.63e-05 | model_tflops_per_gpu: 34.8 | hardware_tflops_per_gpu: 34.8 | grad_norm: 7.05
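The throughput columns in the iteration lines are the consumed tokens per step divided by the elapsed time, then by the 16 GPUs. A sketch reproducing the figures for iterations 1 and 2 above:

```python
# Reproducing tokens_per_sec / tokens_per_sec_per_gpu from the iteration logs above.
tokens_per_iteration = 1024 * 4096       # global_batch_size * sequence_length
n_gpus = 16

for it, elapsed_ms in [(1, 82.1e3), (2, 46e3)]:   # elapsed_time_per_iteration_ms
    tok_s = tokens_per_iteration / (elapsed_ms / 1e3)
    print(f"iteration {it}: {tok_s / 1e3:.1f}K tok/s, {tok_s / n_gpus / 1e3:.2f}K tok/s/gpu")
    # iteration 1: ~51.1K tok/s, ~3.19K tok/s/gpu
    # iteration 2: ~91.2K tok/s, ~5.70K tok/s/gpu
```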
[default0]:STAGE:2024-07-02 19:33:59 3765423:3765423 ActivityProfilerController.cpp:320] Completed Stage: Collection
[default0]:STAGE:2024-07-02 19:34:21 3765423:3765423 ActivityProfilerController.cpp:324] Completed Stage: Post Processing
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=356375, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=2097152, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600060 milliseconds before timing out.
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600080 milliseconds before timing out.
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600075 milliseconds before timing out.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=356375, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=2097152, Timeout(ms)=600000) ran for 600095 milliseconds before timing out.
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=356375, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=2097152, Timeout(ms)=600000) ran for 600094 milliseconds before timing out.
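Every timed-out operation above (the pipeline RECVs and the tensor-parallel _REDUCE_SCATTER_BASE) hit the 600000 ms collective timeout, after which the NCCL watchdog aborts the communicators and the tracebacks below follow. A common mitigation while debugging such hangs is to raise that timeout when the process group is created; a minimal sketch of the underlying torch.distributed call (whether and where nanotron's own launch path exposes this is not shown in this log):

```python
# Minimal sketch: raising the collective timeout enforced by the NCCL watchdog.
# This is the generic torch.distributed API, not nanotron's launch code.
from datetime import timedelta
import torch.distributed as dist

dist.init_process_group(
    backend="nccl",
    timeout=timedelta(minutes=30),  # the errors above show the current limit, 600000 ms
)
```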
[default7]:[rank15]: Traceback (most recent call last):
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default7]:[rank15]: trainer.train(dataloader)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default7]:[rank15]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default7]:[rank15]: outputs = self.pipeline_engine.train_batch_iter(
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default7]:[rank15]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default7]:[rank15]: output = model(**micro_batch)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default7]:[rank15]: sharded_logits = self.model(
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default7]:[rank15]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default7]:[rank15]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default7]:[rank15]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default7]:[rank15]: pipeline_state.run_communication()
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default7]:[rank15]: recv_activation_tensor = recv_activation()
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default7]:[rank15]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default7]:[rank15]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default7]:[rank15]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta
[default7]:[rank15]: dist.recv(
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[default7]:[rank15]: return func(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv
[default7]:[rank15]: pg.recv([tensor], group_src_rank, tag).wait()
[default7]:[rank15]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default0]:[rank8]: Traceback (most recent call last):
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank8]: trainer.train(dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank8]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank8]: outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 252, in train_batch_iter
[default0]:[rank8]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank8]: output = model(**micro_batch)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank8]: sharded_logits = self.model(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank8]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank8]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default0]:[rank8]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default0]:[rank8]: pipeline_state.run_communication()
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default0]:[rank8]: recv_activation_tensor = recv_activation()
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default0]:[rank8]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default0]:[rank8]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default0]:[rank8]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta
[default0]:[rank8]: dist.recv(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[default0]:[rank8]: return func(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv
[default0]:[rank8]: pg.recv([tensor], group_src_rank, tag).wait()
[default0]:[rank8]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default6]:[rank14]: Traceback (most recent call last): [identical to the rank 15 traceback above (engine.py line 278); duplicate frames omitted]
[default6]:[rank14]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default4]:[rank12]: Traceback (most recent call last): [identical to the rank 15 traceback above (engine.py line 278); duplicate frames omitted]
[default4]:[rank12]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default5]:[rank13]: Traceback (most recent call last): [identical to the rank 15 traceback above (engine.py line 278); duplicate frames omitted]
[default5]:[rank13]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default1]:[rank9]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default1]:[rank9]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default4]:[rank4]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default4]:[rank4]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 3] Timeout at NCCL work: 55299, last enqueued NCCL work: 55299, last completed NCCL work: 55298.
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600060 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5d30d5b897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f5d32034c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f5d32039a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f5d3203adcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f5d7dad3e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f5d82b1a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f5d828e5353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:terminate called after throwing an instance of 'c10::DistBackendError'
[default7]: what(): [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600060 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5d30d5b897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f5d32034c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f5d32039a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f5d3203adcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f5d7dad3e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f5d82b1a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f5d828e5353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5d30d5b897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0xe32119 (0x7f5d31cbe119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: <unknown function> + 0xd3e95 (0x7f5d7dad3e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #3: <unknown function> + 0x8609 (0x7f5d82b1a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #4: clone + 0x43 (0x7f5d828e5353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:[rank7]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default7]:[rank7]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default5]:[rank5]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default5]:[rank5]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default6]:[rank6]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default6]:[rank6]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default3]:[rank11]: Traceback (most recent call last): [identical to the rank 8 traceback above (engine.py line 252); duplicate frames omitted]
[default3]:[rank11]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
[default2]:[rank10]: Traceback (most recent call last):
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default2]:[rank10]: trainer.train(dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter(
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 252, in train_batch_iter
[default2]:[rank10]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default2]:[rank10]: output = model(**micro_batch)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default2]:[rank10]: sharded_logits = self.model(
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default2]:[rank10]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default2]:[rank10]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default2]:[rank10]: new_kwargs[name] = recv_from_pipeline_state_buffer(
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default2]:[rank10]: pipeline_state.run_communication()
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication
[default2]:[rank10]: recv_activation_tensor = recv_activation()
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__
[default2]:[rank10]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0]
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors
[default2]:[rank10]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors
[default2]:[rank10]: meta = self._recv_meta(from_rank=from_rank, tag=tag)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta
[default2]:[rank10]: dist.recv(
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[default2]:[rank10]: return func(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv
[default2]:[rank10]: pg.recv([tensor], group_src_rank, tag).wait()
[default2]:[rank10]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
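Every rank traceback above stalls at the same point in the pipeline-parallel path: block.py forward -> recv_from_pipeline_state_buffer -> p2p._recv_meta -> dist.recv, a blocking receive of the activation-metadata tensor from the upstream stage that never arrives, after which the NCCL communicator is aborted. The watchdog entries that follow report the matching 600000 ms (10 minute) collective timeout. A minimal sketch, assuming a stock torch.distributed setup rather than nanotron's own initialization path, of widening that window while debugging; the value is illustrative, not taken from this run:

from datetime import timedelta

import torch.distributed as dist

# Illustrative only: raise the collective timeout enforced by the NCCL
# watchdog (this run aborted at Timeout(ms)=600000). A longer window only
# delays the abort; it does not fix the upstream stage that never sends.
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))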
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 3] Timeout at NCCL work: 55299, last enqueued NCCL work: 55299, last completed NCCL work: 55298.
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600080 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa878df5897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa87a0cec62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa87a0d3a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa87a0d4dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa8c5b6de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa8cabb4609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa8ca97f353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:terminate called after throwing an instance of 'c10::DistBackendError'
[default4]: what(): [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600080 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa878df5897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa87a0cec62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa87a0d3a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa87a0d4dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa8c5b6de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa8cabb4609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa8ca97f353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa878df5897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0xe32119 (0x7fa879d58119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: <unknown function> + 0xd3e95 (0x7fa8c5b6de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #3: <unknown function> + 0x8609 (0x7fa8cabb4609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #4: clone + 0x43 (0x7fa8ca97f353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 3] Timeout at NCCL work: 55299, last enqueued NCCL work: 55299, last completed NCCL work: 55298.
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600075 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4fadcdb897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f4faefb4c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f4faefb9a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f4faefbadcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f4ffaa53e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7f4fffa9a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7f4fff865353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:terminate called after throwing an instance of 'c10::DistBackendError'
[default5]: what(): [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600075 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4fadcdb897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f4faefb4c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f4faefb9a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f4faefbadcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f4ffaa53e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7f4fffa9a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7f4fff865353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4fadcdb897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: <unknown function> + 0xe32119 (0x7f4faec3e119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: <unknown function> + 0xd3e95 (0x7f4ffaa53e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #3: <unknown function> + 0x8609 (0x7f4fffa9a609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #4: clone + 0x43 (0x7f4fff865353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 3] Timeout at NCCL work: 55299, last enqueued NCCL work: 55299, last completed NCCL work: 55298.
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d65751897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8d66a2ac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8d66a2fa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8d66a30dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f8db24c9e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7f8db7510609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7f8db72db353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:terminate called after throwing an instance of 'c10::DistBackendError'
[default6]: what(): [PG 4 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=55299, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d65751897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8d66a2ac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8d66a2fa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8d66a30dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f8db24c9e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7f8db7510609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7f8db72db353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d65751897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: <unknown function> + 0xe32119 (0x7f8d666b4119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: <unknown function> + 0xd3e95 (0x7f8db24c9e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #3: <unknown function> + 0x8609 (0x7f8db7510609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #4: clone + 0x43 (0x7f8db72db353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 2] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[default0]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d3fed1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8d411aac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8d411afa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8d411b0dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: <unknown function> + 0xd3e95 (0x7f8d8cc49e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #5: <unknown function> + 0x8609 (0x7f8d91c90609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #6: clone + 0x43 (0x7f8d91a5b353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:terminate called after throwing an instance of 'c10::DistBackendError'
[default0]: what(): [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[default0]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d3fed1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8d411aac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8d411afa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8d411b0dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #4: <unknown function> + 0xd3e95 (0x7f8d8cc49e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #5: <unknown function> + 0x8609 (0x7f8d91c90609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #6: clone + 0x43 (0x7f8d91a5b353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8d3fed1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default0]:frame #1: <unknown function> + 0xe32119 (0x7f8d40e34119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7f8d8cc49e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default0]:frame #3: <unknown function> + 0x8609 (0x7f8d91c90609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default0]:frame #4: clone + 0x43 (0x7f8d91a5b353 in /lib/x86_64-linux-gnu/libc.so.6)
[default0]:
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa1f58b4897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa1f6b8dc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa1f6b92a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa1f6b93dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa24262ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa247673609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa24743e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:terminate called after throwing an instance of 'c10::DistBackendError'
[default4]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa1f58b4897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa1f6b8dc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa1f6b92a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa1f6b93dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa24262ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa247673609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa24743e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa1f58b4897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0xe32119 (0x7fa1f6817119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: <unknown function> + 0xd3e95 (0x7fa24262ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #3: <unknown function> + 0x8609 (0x7fa247673609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #4: clone + 0x43 (0x7fa24743e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f07f10c0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f07f2399c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f07f239ea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f07f239fdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f083de38e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f0842e7f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f0842c4a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:terminate called after throwing an instance of 'c10::DistBackendError'
[default7]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f07f10c0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f07f2399c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f07f239ea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f07f239fdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f083de38e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f0842e7f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f0842c4a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f07f10c0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0xe32119 (0x7f07f2023119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: <unknown function> + 0xd3e95 (0x7f083de38e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #3: <unknown function> + 0x8609 (0x7f0842e7f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #4: clone + 0x43 (0x7f0842c4a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1a89304897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f1a8a5ddc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f1a8a5e2a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f1a8a5e3dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f1ad607ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7f1adb0c3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7f1adae8e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:terminate called after throwing an instance of 'c10::DistBackendError'
[default5]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1a89304897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f1a8a5ddc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f1a8a5e2a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f1a8a5e3dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f1ad607ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7f1adb0c3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7f1adae8e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f1a89304897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: <unknown function> + 0xe32119 (0x7f1a8a267119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: <unknown function> + 0xd3e95 (0x7f1ad607ce95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #3: <unknown function> + 0x8609 (0x7f1adb0c3609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #4: clone + 0x43 (0x7f1adae8e353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2f5f1bd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f2f60496c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f2f6049ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f2f6049cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f2fabf35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7f2fb0f7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7f2fb0d47353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:terminate called after throwing an instance of 'c10::DistBackendError'
[default6]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2f5f1bd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f2f60496c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f2f6049ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f2f6049cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f2fabf35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7f2fb0f7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7f2fb0d47353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f2f5f1bd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: <unknown function> + 0xe32119 (0x7f2f60120119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: <unknown function> + 0xd3e95 (0x7f2fabf35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #3: <unknown function> + 0x8609 (0x7f2fb0f7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #4: clone + 0x43 (0x7f2fb0d47353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 2] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0debf43897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f0ded21cc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f0ded221a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f0ded222dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f0e38cbbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #5: <unknown function> + 0x8609 (0x7f0e3dd02609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #6: clone + 0x43 (0x7f0e3dacd353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
[default3]:terminate called after throwing an instance of 'c10::DistBackendError'
[default3]: what(): [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0debf43897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f0ded21cc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f0ded221a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f0ded222dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f0e38cbbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #5: <unknown function> + 0x8609 (0x7f0e3dd02609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #6: clone + 0x43 (0x7f0e3dacd353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
[default3]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0debf43897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: <unknown function> + 0xe32119 (0x7f0decea6119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: <unknown function> + 0xd3e95 (0x7f0e38cbbe95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #3: <unknown function> + 0x8609 (0x7f0e3dd02609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #4: clone + 0x43 (0x7f0e3dacd353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 2] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f41f5420897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f41f66f9c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f41f66fea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f41f66ffdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f4242198e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7f42471df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7f4246faa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:terminate called after throwing an instance of 'c10::DistBackendError'
[default1]: what(): [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600069 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f41f5420897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f41f66f9c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f41f66fea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f41f66ffdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f4242198e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7f42471df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7f4246faa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f41f5420897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: <unknown function> + 0xe32119 (0x7f41f6383119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7f4242198e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #3: <unknown function> + 0x8609 (0x7f42471df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #4: clone + 0x43 (0x7f4246faa353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 2] Timeout at NCCL work: 110595, last enqueued NCCL work: 110595, last completed NCCL work: 110594.
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8446ecf897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f84481a8c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f84481ada80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f84481aedcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7f8493c47e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7f8498c8e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7f8498a59353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:terminate called after throwing an instance of 'c10::DistBackendError'
[default2]: what(): [PG 4 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=110595, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600044 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8446ecf897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f84481a8c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f84481ada80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f84481aedcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7f8493c47e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7f8498c8e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7f8498a59353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8446ecf897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: <unknown function> + 0xe32119 (0x7f8447e32119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: <unknown function> + 0xd3e95 (0x7f8493c47e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #3: <unknown function> + 0x8609 (0x7f8498c8e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #4: clone + 0x43 (0x7f8498a59353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
W0702 19:40:53.216000 140272733103936 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 753681 closing signal SIGTERM
W0702 19:40:53.221000 140235396953920 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3765423 closing signal SIGTERM
W0702 19:40:53.224000 140235396953920 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3765424 closing signal SIGTERM
W0702 19:40:53.227000 140235396953920 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3765425 closing signal SIGTERM
W0702 19:40:53.232000 140235396953920 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3765426 closing signal SIGTERM
E0702 19:40:53.544000 140272733103936 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 0 (pid: 753680) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 10 (local_rank: 2)
exitcode : -6 (pid: 753682)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753682
[2]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 11 (local_rank: 3)
exitcode : -6 (pid: 753683)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753683
[3]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 12 (local_rank: 4)
exitcode : -6 (pid: 753684)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753684
[4]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 13 (local_rank: 5)
exitcode : -6 (pid: 753685)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753685
[5]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 14 (local_rank: 6)
exitcode : -6 (pid: 753686)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753686
[6]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 15 (local_rank: 7)
exitcode : -6 (pid: 753687)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753687
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-88.ec2.internal
rank : 8 (local_rank: 0)
exitcode : -6 (pid: 753680)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 753680
============================================================
srun: error: ip-26-0-171-88: task 1: Exited with exit code 1
E0702 19:41:01.426000 140235396953920 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 4 (pid: 3765427) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-62.ec2.internal
rank : 5 (local_rank: 5)
exitcode : -6 (pid: 3765428)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 3765428
[2]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-62.ec2.internal
rank : 6 (local_rank: 6)
exitcode : -6 (pid: 3765429)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 3765429
[3]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-62.ec2.internal
rank : 7 (local_rank: 7)
exitcode : -6 (pid: 3765430)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 3765430
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-02_19:40:53
host : ip-26-0-171-62.ec2.internal
rank : 4 (local_rank: 4)
exitcode : -6 (pid: 3765427)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 3765427
============================================================
srun: error: ip-26-0-171-62: task 0: Exited with exit code 1
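For context on the failure recorded above: the watchdog lines report a collective RECV that ran past Timeout(ms)=600000, i.e. the 10-minute default NCCL watchdog limit, after which every rank was torn down with SIGABRT (exit code -6). The snippet below is a minimal sketch only, not part of the original run_train.py or nanotron code: it shows how one could raise that limit and enable NCCL's own logging when re-launching the benchmark under torchrun. The 30-minute value and the NCCL_DEBUG level are illustrative assumptions, not settings taken from this run.

# Sketch only -- assumes the usual torchrun-provided env vars
# (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) are already set.
import os
from datetime import timedelta

import torch.distributed as dist

# Ask NCCL to log communicator activity; read when the first communicator
# is created, so it must be set before any collective runs (illustrative value).
os.environ.setdefault("NCCL_DEBUG", "INFO")

# The 600000 ms in the log is the default 10-minute watchdog timeout;
# a longer value gives slow pipeline-parallel RECVs more headroom while debugging.
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))

Raising the timeout only buys time to inspect a hang; if one pipeline stage has actually died, the peer RECV will still eventually time out, so the NCCL_DEBUG output is usually the more useful part of this sketch.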
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.