========================
START TIME: Wed Jul 3 21:22:46 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
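Note: the credential warning above comes from huggingface_hub. A minimal sketch of also storing the token as a git credential, which silences that notice (assumes a valid write token in the HF_TOKEN environment variable):

    # Sketch: log in and also register the token with the git credential helper.
    import os
    from huggingface_hub import login

    login(token=os.environ["HF_TOKEN"], add_to_git_credential=True)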
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
M src/nanotron/models/llama.py
M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0703 21:22:53.680000 140614011090752 torch/distributed/run.py:757]
W0703 21:22:53.680000 140614011090752 torch/distributed/run.py:757] *****************************************
W0703 21:22:53.680000 140614011090752 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0703 21:22:53.680000 140614011090752 torch/distributed/run.py:757] *****************************************
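Note: torchrun sets OMP_NUM_THREADS=1 per process by default, as warned above. A sketch of pinning it explicitly at the top of the entry script, before torch is imported (the value 4 is an arbitrary placeholder; tune it for the workload):

    # Sketch: override torchrun's OMP_NUM_THREADS=1 default for this process.
    import os
    os.environ.setdefault("OMP_NUM_THREADS", "4")  # placeholder value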
[default0]:07/03/2024 21:23:15 [WARNING|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Vocab Size Padding] Padded vocab (size: 50257) with 7 dummy tokens (new size: 50264)
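Note: the padding above matches rounding the vocab up to a multiple of make_vocab_size_divisible_by * tp (1 * 8 here). A small sketch of that rule (the helper name is hypothetical):

    import math

    def padded_vocab_size(vocab_size: int, tp: int, divisible_by: int = 1) -> int:
        # Round up so every tensor-parallel rank gets an equal embedding slice.
        multiple = divisible_by * tp
        return math.ceil(vocab_size / multiple) * multiple

    assert padded_vocab_size(50257, tp=8) == 50264  # 7 dummy tokens, as logged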
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Config:
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: run='%date_%jobid',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: seed=42,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: step=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: consumed_train_samples=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: benchmark_csv_path=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: ignore_sanity_checks=True),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: parallelism=ParallelismArgs(dp=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pp=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tp=8,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7ff2f52b8820>,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tp_linear_async_communication=False,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: expert_parallel_size=1),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: eos_token_id=2,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hidden_act='silu',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hidden_size=2048,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: initializer_range=0.02,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: intermediate_size=4096,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: is_llama_config=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: max_position_embeddings=4096,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_attention_heads=32,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_hidden_layers=24,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_key_value_heads=32,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pad_token_id=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pretraining_tp=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rms_norm_eps=1e-05,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rope_scaling=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rope_theta=10000.0,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tie_word_embeddings=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: use_cache=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: vocab_size=50264),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: init_method=RandomInit(std=0.025),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: dtype=torch.bfloat16,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: make_vocab_size_divisible_by=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: ddp_bucket_cap_mb=25),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tokenizer_revision=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tokenizer_max_length=None),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: checkpoint_interval=100000,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: save_initial_state=False,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: resume_checkpoint_path=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: checkpoints_path_is_shared_file_system=False),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: logging=LoggingArgs(log_level='info',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: log_level_replica='info',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration_step_info_interval=1),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: train_steps=20,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: micro_batch_size=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: batch_accumulation_per_replica=1024,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: val_check_interval=-1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: limit_val_batches=0,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: limit_test_batches=0),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: adam_beta1=0.9,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: adam_beta2=0.95,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: torch_adam_is_fused=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: name='adamW'),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: zero_stage=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: weight_decay=0.01,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: clip_grad=1.0,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: accumulate_grad_in_fp32=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lr_warmup_steps=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lr_warmup_style='linear',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lr_decay_style='linear',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lr_decay_steps=19,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lr_decay_starting_step=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: min_decay_lr=1e-05)),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: start_training_step=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hf_dataset_splits='train',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hf_dataset_config_name=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: dataset_processing_num_proc_per_process=64,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: dataset_overwrite_cache=False,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: text_column_name='text'),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: seed=42,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_loading_workers=0))],
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/8_GPUS/dp-1_tp-8_pp-1_mbz-1')),
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: lighteval=None)
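Note: for reference, the parallelism block dumped above could be constructed in Python roughly as follows. This is a sketch: the nanotron.config import path is assumed from the examples/config_tiny_llama.py script touched earlier in this log, and pp_engine/tp_mode (shown in the dump as OneForwardOneBackward and REDUCE_SCATTER) are left to their defaults here:

    # Sketch of the ParallelismArgs section from the config dump above.
    from nanotron.config import ParallelismArgs  # assumed import path

    parallelism = ParallelismArgs(
        dp=1,   # one data-parallel replica
        pp=1,   # no pipeline parallelism
        tp=8,   # model sharded across 8 tensor-parallel ranks
        tp_linear_async_communication=False,
        expert_parallel_size=1,
    )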
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Model Config:
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: LlamaConfig(bos_token_id=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: eos_token_id=2,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hidden_act='silu',
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: hidden_size=2048,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: initializer_range=0.02,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: intermediate_size=4096,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: is_llama_config=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: max_position_embeddings=4096,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_attention_heads=32,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_hidden_layers=24,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: num_key_value_heads=32,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pad_token_id=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: pretraining_tp=1,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rms_norm_eps=1e-05,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rope_scaling=None,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: rope_theta=10000.0,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: tie_word_embeddings=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: use_cache=True,
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: vocab_size=50264)
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Building model..
[default0]:07/03/2024 21:23:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Setting PP block ranks...
[default4]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=4|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default4]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=4|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default4]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=4|ip-26-0-162-233]: No checkpoint path provided.
[default7]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=7|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default2]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=2|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default2]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=2|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default2]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=2|ip-26-0-162-233]: No checkpoint path provided.
[default7]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=7|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default7]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=7|ip-26-0-162-233]: No checkpoint path provided.
[default6]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=6|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default6]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=6|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default6]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=6|ip-26-0-162-233]: No checkpoint path provided.
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Total number of parameters: 1.11G (2117.88MiB)
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Parametrizing model parameters using StandardParametrizator
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [ZeRO sharding] DP Rank 0 has 139M out of 139M (100.00%) params' optimizer states
[default3]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=3|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default5]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=5|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default5]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=5|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default3]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=3|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default3]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=3|ip-26-0-162-233]: No checkpoint path provided.
[default5]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=5|ip-26-0-162-233]: No checkpoint path provided.
[default1]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=1|ip-26-0-162-233]: Local number of parameters: 139M (264.73MiB)
[default1]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=1|ip-26-0-162-233]: [After model building] Memory usage: 290.76MiB. Peak allocated: 317.33MiB Peak reserved: 324.00MiB
[default1]:07/03/2024 21:23:31 [INFO|DP=0|PP=0|TP=1|ip-26-0-162-233]: No checkpoint path provided.
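Note: the per-rank figures above follow directly from tp=8: each rank holds one eighth of the 1.11G parameters, and since dp=1, the single ZeRO-1 data-parallel rank keeps 100% of the optimizer states. A quick sanity check:

    # Check the sharding arithmetic logged above.
    tp = 8
    print(2117.88 / tp)   # 264.735 MiB -> matches "264.73MiB" local size
    print(1.11e9 / tp)    # ~1.39e8    -> matches "139M" local parameters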
[default0]:07/03/2024 21:23:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/03/2024 21:23:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Using `datasets` library
[default0]:07/03/2024 21:23:33 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/03/2024 21:23:33 [WARNING|DP=0|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Training Plan] There are 1 training stages
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Stage Training Stage] start from step 1
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]:
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: [Start training] datetime: 2024-07-03 21:23:36.163052 | mbs: 1 | grad_accum: 1024 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/03/2024 21:23:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 1350.75MiB. Peak allocated 1350.76MiB. Peak reserved: 1384.00MiB
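Note: the [Start training] header ties the batch settings together: global_batch_size = micro_batch_size * batch_accumulation_per_replica * dp, and each step consumes global_batch_size * sequence_length tokens:

    mbs, grad_accum, dp = 1, 1024, 1
    seq_len = 4096
    global_batch_size = mbs * grad_accum * dp      # 1024, as logged
    tokens_per_step = global_batch_size * seq_len  # 4_194_304 ~= 4.19M tokens/step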
[default4]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=4|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=2|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=6|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=3|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=1|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=5|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/03/2024 21:23:36 [WARNING|DP=0|PP=0|TP=7|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default6]: warnings.warn(
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default7]: warnings.warn(
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default2]: warnings.warn(
[default0]:07/03/2024 21:26:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 1427.33MiB. Peak allocated 3240.63MiB. Peak reserved: 3496.00MiB
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default0]: warnings.warn(
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default3]: warnings.warn(
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default1]: warnings.warn(
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default4]: warnings.warn(
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default5]: warnings.warn(
[default0]:07/03/2024 21:26:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 148K | tokens_per_sec: 28.3K | tokens_per_sec_per_gpu: 3.53K | global_batch_size: 1.02K | lm_loss: 11.5 | lr: 0.0001 | model_tflops_per_gpu: 32.1 | hardware_tflops_per_gpu: 32.1 | grad_norm: 15.7 | cuda_memory_allocated: 2.61G | cuda_max_memory_reserved: 3.67G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.1G | hd_free_memory_tb: 246G
[default0]:07/03/2024 21:26:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.25MiB. Peak allocated 2487.25MiB. Peak reserved: 3496.00MiB
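Note: the iteration-1 throughput numbers are internally consistent: 4.19M tokens in roughly 148 s is about 28.3K tokens/s, or about 3.5K per GPU across the 8 ranks:

    tokens_per_step = 4_194_304
    elapsed_s = 148           # elapsed_time_per_iteration_ms / 1000 (rounded)
    print(tokens_per_step / elapsed_s)        # ~28340 -> "28.3K" tokens_per_sec
    print(tokens_per_step / elapsed_s / 8)    # ~3543  -> "3.53K" per GPU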
[default0]:07/03/2024 21:28:25 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.75MiB. Peak allocated 4241.93MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:28:25 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 141K | tokens_per_sec: 29.8K | tokens_per_sec_per_gpu: 3.73K | global_batch_size: 1.02K | lm_loss: 11.5 | lr: 9.53e-05 | model_tflops_per_gpu: 33.9 | hardware_tflops_per_gpu: 33.9 | grad_norm: 16 | cuda_memory_allocated: 2.61G | cuda_max_memory_reserved: 4.54G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.1G | hd_free_memory_tb: 246G
[default0]:07/03/2024 21:28:25 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.25MiB. Peak allocated 2487.82MiB. Peak reserved: 4330.00MiB
[default0]:STAGE:2024-07-03 21:31:05 1814910:1814910 ActivityProfilerController.cpp:314] Completed Stage: Warm Up
[default0]:07/03/2024 21:31:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.75MiB. Peak allocated 4241.93MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:31:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 160K | tokens_per_sec: 26.2K | tokens_per_sec_per_gpu: 3.27K | global_batch_size: 1.02K | lm_loss: 12.8 | lr: 9.05e-05 | model_tflops_per_gpu: 29.7 | hardware_tflops_per_gpu: 29.7 | grad_norm: 137 | cuda_memory_allocated: 2.61G | cuda_max_memory_reserved: 4.54G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.1G | hd_free_memory_tb: 246G
[default0]:07/03/2024 21:31:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.25MiB. Peak allocated 2487.82MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:34:43 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.75MiB. Peak allocated 4241.93MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:34:43 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 218K | tokens_per_sec: 19.2K | tokens_per_sec_per_gpu: 2.41K | global_batch_size: 1.02K | lm_loss: 12.2 | lr: 8.58e-05 | model_tflops_per_gpu: 21.8 | hardware_tflops_per_gpu: 21.8 | grad_norm: 22.4 | cuda_memory_allocated: 2.61G | cuda_max_memory_reserved: 4.54G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.1G | hd_free_memory_tb: 246G
[default0]:07/03/2024 21:34:43 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.25MiB. Peak allocated 2487.82MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:38:22 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 219K | tokens_per_sec: 19.1K | tokens_per_sec_per_gpu: 2.39K | global_batch_size: 1.02K | lm_loss: 12.4 | lr: 8.11e-05 | model_tflops_per_gpu: 21.7 | hardware_tflops_per_gpu: 21.7 | grad_norm: 42.8
[default0]:07/03/2024 21:38:22 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: Memory usage: 2487.25MiB. Peak allocated 4241.93MiB. Peak reserved: 4330.00MiB
[default0]:07/03/2024 21:42:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-162-233]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 222K | tokens_per_sec: 18.9K | tokens_per_sec_per_gpu: 2.36K | global_batch_size: 1.02K | lm_loss: 11.1 | lr: 7.63e-05 | model_tflops_per_gpu: 21.4 | hardware_tflops_per_gpu: 21.4 | grad_norm: 24.8
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:563] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600024 milliseconds before timing out.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:563] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:563] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600017 milliseconds before timing out.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600074 milliseconds before timing out.
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600089 milliseconds before timing out.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:563] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600090 milliseconds before timing out.
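Note: the run dies here on NCCL's default 600 s collective timeout during a REDUCE_SCATTER. Not taken from this log, but a common first-aid step while debugging such hangs is to raise the process-group timeout (a sketch; a longer timeout only helps if the collective is slow rather than truly deadlocked):

    # Sketch: lengthen the NCCL watchdog timeout from the default 10 minutes.
    # Must run inside each worker before any collectives are issued.
    from datetime import timedelta
    import torch.distributed as dist

    dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))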
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 6] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:577] [Rank 6] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:583] [Rank 6] To avoid data inconsistency, we are taking the entire process down.
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 6] Process group watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600090 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fdf364d0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fdf377a9c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fdf377aea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fdf377afdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7fdf83248e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7fdf8828f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7fdf8805a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:terminate called after throwing an instance of 'c10::DistBackendError'
[default6]: what(): [PG 2 Rank 6] Process group watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600090 milliseconds before timing out.
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fdf364d0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fdf377a9c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fdf377aea80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fdf377afdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7fdf83248e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #5: <unknown function> + 0x8609 (0x7fdf8828f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #6: clone + 0x43 (0x7fdf8805a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default6]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fdf364d0897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default6]:frame #1: <unknown function> + 0xe32119 (0x7fdf37433119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default6]:frame #2: <unknown function> + 0xd3e95 (0x7fdf83248e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default6]:frame #3: <unknown function> + 0x8609 (0x7fdf8828f609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default6]:frame #4: clone + 0x43 (0x7fdf8805a353 in /lib/x86_64-linux-gnu/libc.so.6)
[default6]:
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 7] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:577] [Rank 7] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:583] [Rank 7] To avoid data inconsistency, we are taking the entire process down.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb64e5ed897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb64f8c6c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb64f8cba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb64f8ccdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7fb69b365e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7fb6a03ac609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7fb6a0177353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:terminate called after throwing an instance of 'c10::DistBackendError'
[default7]: what(): [PG 2 Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb64e5ed897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb64f8c6c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb64f8cba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb64f8ccdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7fb69b365e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7fb6a03ac609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7fb6a0177353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb64e5ed897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0xe32119 (0x7fb64f550119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: <unknown function> + 0xd3e95 (0x7fb69b365e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #3: <unknown function> + 0x8609 (0x7fb6a03ac609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #4: clone + 0x43 (0x7fb6a0177353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 2] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb58ccb2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb58df8bc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb58df90a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb58df91dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7fb5d9a2ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7fb5dea71609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7fb5de83c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:terminate called after throwing an instance of 'c10::DistBackendError'
[default2]: what(): [PG 2 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb58ccb2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb58df8bc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb58df90a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb58df91dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7fb5d9a2ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7fb5dea71609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7fb5de83c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb58ccb2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: <unknown function> + 0xe32119 (0x7fb58dc15119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: <unknown function> + 0xd3e95 (0x7fb5d9a2ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #3: <unknown function> + 0x8609 (0x7fb5dea71609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #4: clone + 0x43 (0x7fb5de83c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 5] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:577] [Rank 5] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:583] [Rank 5] To avoid data inconsistency, we are taking the entire process down.
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600017 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd9b8b9a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fd9b9e73c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fd9b9e78a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fd9b9e79dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7fda05912e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7fda0a959609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7fda0a724353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:terminate called after throwing an instance of 'c10::DistBackendError'
[default5]: what(): [PG 2 Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600017 milliseconds before timing out.
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd9b8b9a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fd9b9e73c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fd9b9e78a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fd9b9e79dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7fda05912e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #5: <unknown function> + 0x8609 (0x7fda0a959609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #6: clone + 0x43 (0x7fda0a724353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default5]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd9b8b9a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default5]:frame #1: <unknown function> + 0xe32119 (0x7fd9b9afd119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default5]:frame #2: <unknown function> + 0xd3e95 (0x7fda05912e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default5]:frame #3: <unknown function> + 0x8609 (0x7fda0a959609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default5]:frame #4: clone + 0x43 (0x7fda0a724353 in /lib/x86_64-linux-gnu/libc.so.6)
[default5]:
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 1] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600074 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7ffad9416897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7ffada6efc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7ffada6f4a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7ffada6f5dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7ffb2618ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7ffb2b1d5609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7ffb2afa0353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:terminate called after throwing an instance of 'c10::DistBackendError'
[default1]: what(): [PG 2 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600074 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7ffad9416897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7ffada6efc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7ffada6f4a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7ffada6f5dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7ffb2618ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7ffb2b1d5609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7ffb2afa0353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7ffad9416897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: <unknown function> + 0xe32119 (0x7ffada379119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7ffb2618ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #3: <unknown function> + 0x8609 (0x7ffb2b1d5609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #4: clone + 0x43 (0x7ffb2afa0353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
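The tensor-parallel ranks above all abort on the same collective: SeqNum=1222720, a reduce-scatter that the watchdog killed after 600000 ms, PyTorch's default NCCL timeout. A minimal sketch of raising that ceiling, assuming the process group is created directly through torch.distributed (nanotron's trainer may wire this up differently; the 2-hour value is only an example):

    import datetime
    import torch.distributed as dist

    # `timeout` bounds every collective issued on this process group; the
    # watchdog above fired because one reduce-scatter exceeded the default.
    dist.init_process_group(
        backend="nccl",
        timeout=datetime.timedelta(hours=2),  # NCCL default is 10 minutes
    )

A longer timeout only buys time, of course: every rank blocking on the same SeqNum, while local rank 0 (pid 1814910, later sent SIGTERM rather than aborting) never reports a timeout, is consistent with one straggling or wedged rank rather than a slow collective.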
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 4] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:577] [Rank 4] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:583] [Rank 4] To avoid data inconsistency, we are taking the entire process down.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600024 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f91375e9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f91388c2c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f91388c7a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f91388c8dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7f9184361e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7f91893a8609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7f9189173353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:terminate called after throwing an instance of 'c10::DistBackendError'
[default4]: what(): [PG 2 Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600024 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f91375e9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f91388c2c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f91388c7a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f91388c8dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7f9184361e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7f91893a8609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7f9189173353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f91375e9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0xe32119 (0x7f913854c119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: <unknown function> + 0xd3e95 (0x7f9184361e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #3: <unknown function> + 0x8609 (0x7f91893a8609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #4: clone + 0x43 (0x7f9189173353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
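The operator arguments in the timeout message are internally consistent with the tp=8 layout: a reduce-scatter over an 8-way group hands each rank one eighth of the reduced input tensor. A quick check, illustrative only and not part of the run:

    # NumelIn / tp_size == NumelOut for _REDUCE_SCATTER_BASE over the tp=8 group
    numel_in, tp_size = 8_388_608, 8
    assert numel_in // tp_size == 1_048_576  # matches NumelOut in the watchdog message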
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 3] Timeout at NCCL work: 1222720, last enqueued NCCL work: 1222842, last completed NCCL work: 1222719.
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600089 milliseconds before timing out.
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc7c1cbd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fc7c2f96c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fc7c2f9ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fc7c2f9cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7fc80ea35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #5: <unknown function> + 0x8609 (0x7fc813a7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #6: clone + 0x43 (0x7fc813847353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
[default3]:terminate called after throwing an instance of 'c10::DistBackendError'
[default3]: what(): [PG 2 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1222720, OpType=_REDUCE_SCATTER_BASE, NumelIn=8388608, NumelOut=1048576, Timeout(ms)=600000) ran for 600089 milliseconds before timing out.
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc7c1cbd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fc7c2f96c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fc7c2f9ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fc7c2f9cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7fc80ea35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #5: <unknown function> + 0x8609 (0x7fc813a7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #6: clone + 0x43 (0x7fc813847353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
[default3]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc7c1cbd897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default3]:frame #1: <unknown function> + 0xe32119 (0x7fc7c2c20119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default3]:frame #2: <unknown function> + 0xd3e95 (0x7fc80ea35e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default3]:frame #3: <unknown function> + 0x8609 (0x7fc813a7c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default3]:frame #4: clone + 0x43 (0x7fc813847353 in /lib/x86_64-linux-gnu/libc.so.6)
[default3]:
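Each rank prints the same failure three times (the logger at ProcessGroupNCCL.cpp:1414, the terminate handler's what(), and the watchdog thread's own unwind at ProcessGroupNCCL.cpp:1418), so the traces above are one timeout per rank, not twelve distinct errors. When localizing a hang like this, a few environment variables make the next run more talkative; a hedged sketch, set before process-group initialization (the TORCH_NCCL_* spellings are the ones documented for recent PyTorch releases; older releases use NCCL_*-prefixed names):

    import os

    os.environ["NCCL_DEBUG"] = "INFO"                    # verbose NCCL-side logging
    os.environ["TORCH_NCCL_BLOCKING_WAIT"] = "1"         # raise in the calling thread, not just the watchdog
    os.environ["TORCH_NCCL_ASYNC_ERROR_HANDLING"] = "1"  # tear the job down on async NCCL errors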
W0703 21:52:55.522000 140614011090752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1814910 closing signal SIGTERM
E0703 21:53:09.170000 140614011090752 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 1 (pid: 1814911) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 2 (local_rank: 2)
  exitcode  : -6 (pid: 1814912)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814912
[2]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 3 (local_rank: 3)
  exitcode  : -6 (pid: 1814913)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814913
[3]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 4 (local_rank: 4)
  exitcode  : -6 (pid: 1814914)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814914
[4]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 5 (local_rank: 5)
  exitcode  : -6 (pid: 1814915)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814915
[5]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 6 (local_rank: 6)
  exitcode  : -6 (pid: 1814916)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814916
[6]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 7 (local_rank: 7)
  exitcode  : -6 (pid: 1814917)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814917
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-03_21:52:55
  host      : ip-26-0-162-233.ec2.internal
  rank      : 1 (local_rank: 1)
  exitcode  : -6 (pid: 1814911)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1814911
============================================================
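Every failed rank reports exitcode -6, i.e. killed by signal 6 (SIGABRT) from the std::terminate calls in the watchdog traces above, and error_file stays <N/A> because a native abort leaves no Python-level record. A minimal sketch of torch.distributed.elastic's @record decorator, which would populate the per-rank error_file for Python exceptions raised in the entrypoint (it still cannot capture a native SIGABRT like this one):

    from torch.distributed.elastic.multiprocessing.errors import record

    @record  # writes Python tracebacks to the error_file reported above
    def main():
        ...  # training entrypoint (placeholder)

    if __name__ == "__main__":
        main()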
srun: error: ip-26-0-162-233: task 0: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
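That last warning comes from huggingface_hub while the run's artifacts are uploaded. A hedged sketch of the suggestion, assuming the optional hf_transfer package is installed (pip install hf_transfer); the flag should be set before huggingface_hub is imported, since the library reads it at import time:

    import os

    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # opt into the Rust-based transfer backend

    from huggingface_hub import HfApi  # subsequent uploads/downloads use hf_transfer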