|
======================== |
|
START TIME: Tue Jul 2 19:35:24 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well. |
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0702 19:35:27.088000 140269937952576 torch/distributed/run.py:757] |
|
W0702 19:35:27.088000 140269937952576 torch/distributed/run.py:757] ***************************************** |
|
W0702 19:35:27.088000 140269937952576 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
|
W0702 19:35:27.088000 140269937952576 torch/distributed/run.py:757] ***************************************** |
|
W0702 19:35:27.095000 139728973952832 torch/distributed/run.py:757] |
|
W0702 19:35:27.095000 139728973952832 torch/distributed/run.py:757] ***************************************** |
|
W0702 19:35:27.095000 139728973952832 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
|
W0702 19:35:27.095000 139728973952832 torch/distributed/run.py:757] ***************************************** |
|
[default0]:07/02/2024 19:35:45 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Vocab Size Padding] Padded vocab (size: 50257) with 7 dummy tokens (new size: 50264) |
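
The padded size follows from the tensor-parallel degree: with tp=8 (see the parallelism config below), the embedding table must shard evenly across the 8 TP ranks, so the tokenizer's 50257 entries are rounded up to the next multiple of 8. A minimal sketch of that rounding rule, assuming divisibility by the TP size is the only constraint (make_vocab_size_divisible_by is 1 in this run):

    # Hypothetical illustration of the vocab-padding rule reported above.
    # Assumes the padded size only has to be divisible by tp_size * make_vocab_size_divisible_by.
    def pad_vocab(vocab_size: int, tp_size: int, divisible_by: int = 1) -> int:
        multiple = tp_size * divisible_by
        return ((vocab_size + multiple - 1) // multiple) * multiple

    assert pad_vocab(50257, tp_size=8) == 50264  # 7 dummy tokens added, as logged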
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Config: |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: run='%date_%jobid', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: seed=42, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: step=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: consumed_train_samples=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: benchmark_csv_path=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: ignore_sanity_checks=True), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: parallelism=ParallelismArgs(dp=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pp=2, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp=8, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7fa5deab0910>, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp_linear_async_communication=False, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: expert_parallel_size=1), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: eos_token_id=2, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_act='silu', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_size=2048, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: initializer_range=0.02, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: intermediate_size=4096, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: is_llama_config=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_attention_heads=32, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_hidden_layers=24, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_key_value_heads=32, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pad_token_id=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pretraining_tp=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_scaling=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_theta=10000.0, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: use_cache=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: vocab_size=50264), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: init_method=RandomInit(std=0.025), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dtype=torch.bfloat16, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: make_vocab_size_divisible_by=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: ddp_bucket_cap_mb=25), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer_revision=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer_max_length=None), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoint_interval=100000, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: save_initial_state=False, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: resume_checkpoint_path=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: log_level_replica='info', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: iteration_step_info_interval=1), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: train_steps=20, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: micro_batch_size=4, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: batch_accumulation_per_replica=256, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: val_check_interval=-1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: limit_val_batches=0, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: limit_test_batches=0), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: adam_beta1=0.9, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: adam_beta2=0.95, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: torch_adam_is_fused=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: name='adamW'), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: zero_stage=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: weight_decay=0.01, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: clip_grad=1.0, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: accumulate_grad_in_fp32=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_warmup_steps=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_warmup_style='linear', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_style='linear', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_steps=19, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_starting_step=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: min_decay_lr=1e-05)), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: start_training_step=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hf_dataset_splits='train', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hf_dataset_config_name=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dataset_overwrite_cache=False, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: text_column_name='text'), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: seed=42, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_loading_workers=32))], |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-8_pp-2_mbz-4')), |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lighteval=None) |
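
The run shape printed above can be cross-checked directly: dp * tp * pp = 1 * 8 * 2 = 16 GPUs (the two 8-GPU nodes in this log), and each step processes dp * micro_batch_size * batch_accumulation_per_replica = 1 * 4 * 256 = 1024 sequences of 4096 tokens, i.e. roughly 4.19M tokens. A short arithmetic check:

    # Cross-check of the parallelism and batch-size arithmetic from the config dump.
    dp, tp, pp = 1, 8, 2
    micro_batch_size, grad_accum, seq_len = 4, 256, 4096

    world_size = dp * tp * pp                                # 16 GPUs (2 nodes x 8)
    global_batch_size = dp * micro_batch_size * grad_accum   # 1024 sequences per step
    tokens_per_step = global_batch_size * seq_len            # 4_194_304 ~= 4.19M tokens

    assert (world_size, global_batch_size, tokens_per_step) == (16, 1024, 4_194_304)

These values match the "global_batch_size: 1.02K" and "consumed_tokens: 4.19M" fields logged at iteration 1 further below.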
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Model Config: |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: eos_token_id=2, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_act='silu', |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_size=2048, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: initializer_range=0.02, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: intermediate_size=4096, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: is_llama_config=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_attention_heads=32, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_hidden_layers=24, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_key_value_heads=32, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pad_token_id=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pretraining_tp=1, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_scaling=None, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_theta=10000.0, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: use_cache=True, |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: vocab_size=50264) |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Building model.. |
|
[default0]:07/02/2024 19:35:45 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Setting PP block ranks... |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=2|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=6|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=2|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=2|ip-26-0-172-73]: No checkpoint path provided. |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=6|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=6|ip-26-0-172-73]: No checkpoint path provided. |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=1|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=1|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=1|ip-26-0-172-73]: No checkpoint path provided. |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: No checkpoint path provided. |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=3|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=4|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=5|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=5|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=4|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=4|ip-26-0-172-73]: No checkpoint path provided. |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=5|ip-26-0-172-73]: No checkpoint path provided. |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=3|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=3|ip-26-0-172-73]: No checkpoint path provided. |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=7|ip-26-0-172-73]: Local number of parameters: 65.3M (124.62MiB) |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=7|ip-26-0-172-73]: [After model building] Memory usage: 135.64MiB. Peak allocated: 137.67MiB Peak reserved: 150.00MiB |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=1|TP=7|ip-26-0-172-73]: No checkpoint path provided. |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=6|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=6|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default6]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=6|ip-26-0-169-139]: No checkpoint path provided. |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Total number of parameters: 1.21G (2314.22MiB) |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: No checkpoint path provided. |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Parametrizing model parameters using StandardParametrizator |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=5|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default2]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=2|ip-26-0-169-139]: No checkpoint path provided. |
|
[default3]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=3|ip-26-0-169-139]: No checkpoint path provided. |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=4|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=4|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=5|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default5]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=5|ip-26-0-169-139]: No checkpoint path provided. |
|
[default4]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=4|ip-26-0-169-139]: No checkpoint path provided. |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default1]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: No checkpoint path provided. |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=7|ip-26-0-169-139]: Local number of parameters: 86.3M (164.65MiB) |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=7|ip-26-0-169-139]: [After model building] Memory usage: 179.67MiB. Peak allocated: 181.70MiB Peak reserved: 198.00MiB |
|
[default7]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=7|ip-26-0-169-139]: No checkpoint path provided. |
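
The per-rank counts above are consistent with the reported total: each of the 8 TP shards of pipeline stage 0 holds 86.3M parameters and each shard of stage 1 holds 65.3M, so the model has roughly 8 * 86.3M + 8 * 65.3M ≈ 1.21G parameters, or about 2314MiB in bf16. A quick check:

    # Sanity check: summing the local parameter counts over the 16 ranks
    # reproduces the logged totals (1.21G params, 2314.22MiB in bf16).
    pp0_local, pp1_local, tp_size = 86.3e6, 65.3e6, 8
    total_params = tp_size * (pp0_local + pp1_local)   # ~1.213e9
    total_mib = total_params * 2 / 2**20               # bf16 = 2 bytes/param, ~2313 MiB
    print(f"{total_params/1e9:.2f}G params, {total_mib:.0f}MiB")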
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/02/2024 19:36:02 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [ZeRO sharding] DP Rank 0 has 86.3M out of 86.3M (100.00%) params' optimizer states |
|
[default0]:07/02/2024 19:36:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/02/2024 19:36:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Using `datasets` library |
|
[default0]:07/02/2024 19:36:04 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
|
[default0]:07/02/2024 19:36:04 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Training Plan] There are 1 training stages |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Stage Training Stage] start from step 1 |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Start training] datetime: 2024-07-02 19:36:05.061196 | mbs: 4 | grad_accum: 256 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps |
|
[default0]:07/02/2024 19:36:05 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 839.67MiB. Peak allocated 839.67MiB. Peak reserved: 858.00MiB |
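
The jump from ~180MiB after model building to ~840MiB here is roughly what fp32 buffers on the optimizer side would add for this rank's 86.3M parameters. This is only a back-of-the-envelope estimate, assuming an fp32 master copy of the weights plus an fp32 gradient-accumulation buffer (the config sets accumulate_grad_in_fp32=True), with the Adam moment tensors materialized later:

    # Rough estimate only (assumption, not read from the log):
    # bf16 weight shard + fp32 master weights + fp32 grad-accumulation buffer.
    local_params = 86.3e6
    MiB = 2**20
    bf16_weights = local_params * 2 / MiB   # ~165 MiB, cf. "164.65MiB" above
    fp32_master  = local_params * 4 / MiB   # ~329 MiB
    fp32_grads   = local_params * 4 / MiB   # ~329 MiB
    print(f"~{bf16_weights + fp32_master + fp32_grads:.0f} MiB")  # ~823 MiB, near the logged 839.67MiB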
|
[default3]:07/02/2024 19:36:05 [WARNING|DP=0|PP=1|TP=3|ip-26-0-172-73]: Repo card metadata block was not found. Setting CardData to empty.

[default3]:Repo card metadata block was not found. Setting CardData to empty.

(The same "Repo card metadata block was not found" warning is emitted once per rank on both nodes; the copies from the other 15 ranks are identical.)
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)

[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass

(The same c10d::allreduce_ UserWarning is raised on every rank during the first backward pass; the copies from the other ranks are identical.)
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions

[default0]: warnings.warn(

(The same all_reduce_coalesced deprecation warning is raised on every rank; the copies from the other ranks are identical.)
|
[default0]:07/02/2024 19:36:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 910.81MiB. Peak allocated 8458.18MiB. Peak reserved: 8784.00MiB |
|
|
[default0]:07/02/2024 19:36:55 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 1572.29MiB. Peak reserved: 8784.00MiB |
|
[default0]:07/02/2024 19:36:55 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 44.3K | tokens_per_sec: 94.6K | tokens_per_sec_per_gpu: 5.91K | global_batch_size: 1.02K | lm_loss: 11.2 | lr: 0.0001 | model_tflops_per_gpu: 53.6 | hardware_tflops_per_gpu: 53.6 | grad_norm: 12.1 | cuda_memory_allocated: 1.26G | cuda_max_memory_reserved: 5.08G | hd_total_memory_tb: 312G | hd_used_memory_tb: 65.5G | hd_free_memory_tb: 247G |
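
The throughput fields in these iteration lines are internally consistent: tokens_per_sec is the step's consumed tokens divided by the iteration time, and tokens_per_sec_per_gpu divides that by the 16 GPUs. For iteration 1:

    # Reconstruct the iteration-1 throughput numbers from the logged values.
    tokens_per_step = 1024 * 4096          # global_batch_size * sequence_length
    iteration_time_s = 44.3                # elapsed_time_per_iteration_ms: 44.3K
    tokens_per_sec = tokens_per_step / iteration_time_s    # ~94.7K, logged as 94.6K
    tokens_per_sec_per_gpu = tokens_per_sec / 16            # ~5.92K, logged as 5.91K
    print(f"{tokens_per_sec/1e3:.1f}K tok/s, {tokens_per_sec_per_gpu/1e3:.2f}K tok/s/gpu")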
|
[default0]:07/02/2024 19:37:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 9062.06MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:37:21 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 26.4K | tokens_per_sec: 159K | tokens_per_sec_per_gpu: 9.92K | global_batch_size: 1.02K | lm_loss: 11.2 | lr: 9.53e-05 | model_tflops_per_gpu: 90 | hardware_tflops_per_gpu: 90 | grad_norm: 12.2 | cuda_memory_allocated: 1.26G | cuda_max_memory_reserved: 5.08G | hd_total_memory_tb: 312G | hd_used_memory_tb: 65.5G | hd_free_memory_tb: 247G |
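
The lr values logged per iteration follow the configured schedule: one linear warmup step up to learning_rate=1e-4, then linear decay towards min_decay_lr=1e-5 over lr_decay_steps=19. A small sketch (a reconstruction that matches the logged values, not nanotron's own code) reproducing the sequence 0.0001, 9.53e-05, 9.05e-05, 8.58e-05, ...:

    # Reconstruct the linear warmup + linear decay schedule from the config values.
    peak_lr, min_lr = 1e-4, 1e-5
    warmup_steps, decay_steps = 1, 19

    def lr_at(step: int) -> float:
        if step <= warmup_steps:                        # linear warmup
            return peak_lr * step / warmup_steps
        frac = (step - warmup_steps) / decay_steps      # linear decay
        return peak_lr - (peak_lr - min_lr) * frac

    print([f"{lr_at(s):.3g}" for s in range(1, 7)])
    # ['0.0001', '9.53e-05', '9.05e-05', '8.58e-05', '8.11e-05', '7.63e-05'] -- matches the log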
|
[default0]:07/02/2024 19:37:21 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 1572.33MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:37:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 9062.06MiB. Peak reserved: 9290.00MiB |
|
[default0]:STAGE:2024-07-02 19:37:50 2521965:2521965 ActivityProfilerController.cpp:314] Completed Stage: Warm Up |
|
[default0]:07/02/2024 19:37:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 1572.33MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:37:50 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 28.6K | tokens_per_sec: 146K | tokens_per_sec_per_gpu: 9.15K | global_batch_size: 1.02K | lm_loss: 10 | lr: 9.05e-05 | model_tflops_per_gpu: 83.1 | hardware_tflops_per_gpu: 83.1 | grad_norm: 51.6 | cuda_memory_allocated: 1.26G | cuda_max_memory_reserved: 5.08G | hd_total_memory_tb: 312G | hd_used_memory_tb: 65.5G | hd_free_memory_tb: 247G |
|
[default0]:07/02/2024 19:38:26 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 9062.06MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:38:26 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 36.6K | tokens_per_sec: 115K | tokens_per_sec_per_gpu: 7.17K | global_batch_size: 1.02K | lm_loss: 11.7 | lr: 8.58e-05 | model_tflops_per_gpu: 65 | hardware_tflops_per_gpu: 65 | grad_norm: 18.3 | cuda_memory_allocated: 1.26G | cuda_max_memory_reserved: 5.08G | hd_total_memory_tb: 312G | hd_used_memory_tb: 65.5G | hd_free_memory_tb: 247G |
|
[default0]:07/02/2024 19:38:26 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 1572.33MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:39:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 1572.29MiB. Peak allocated 9062.06MiB. Peak reserved: 9290.00MiB |
|
[default0]:07/02/2024 19:39:03 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 36.1K | tokens_per_sec: 116K | tokens_per_sec_per_gpu: 7.26K | global_batch_size: 1.02K | lm_loss: 10.4 | lr: 8.11e-05 | model_tflops_per_gpu: 65.8 | hardware_tflops_per_gpu: 65.8 | grad_norm: 16 |
|
[default0]:07/02/2024 19:39:39 [INFO|DP=0|PP=1|TP=0|ip-26-0-172-73]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 36.1K | tokens_per_sec: 116K | tokens_per_sec_per_gpu: 7.27K | global_batch_size: 1.02K | lm_loss: 9.9 | lr: 7.63e-05 | model_tflops_per_gpu: 65.9 | hardware_tflops_per_gpu: 65.9 | grad_norm: 9.07 |
|
[default0]:STAGE:2024-07-02 19:41:10 2521965:2521965 ActivityProfilerController.cpp:320] Completed Stage: Collection |
|
[default0]:STAGE:2024-07-02 19:41:21 2521965:2521965 ActivityProfilerController.cpp:324] Completed Stage: Post Processing |
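
The three STAGE lines come from the torch profiler enabled via profiler_export_path in the config: it warms up over some iterations, records a window of active steps, then post-processes and exports the trace. A hedged sketch of a typical setup that produces this kind of output (the wait/warmup/active values below are illustrative; the schedule this run actually used is not shown in the log):

    # Illustrative torch.profiler setup; not the exact schedule used by this run.
    from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

    prof = profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
        on_trace_ready=tensorboard_trace_handler(
            "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-8_pp-2_mbz-4"
        ),
    )
    # prof.step() is called once per training iteration inside the loop.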
|
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:563] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600036 milliseconds before timing out. |
|
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:563] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600033 milliseconds before timing out. |
|
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600028 milliseconds before timing out. |
|
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600029 milliseconds before timing out. |
|
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600078 milliseconds before timing out. |
|
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600041 milliseconds before timing out. |
|
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600055 milliseconds before timing out. |
|
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600051 milliseconds before timing out. |
|
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:563] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600062 milliseconds before timing out. |
|
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:563] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600085 milliseconds before timing out. |
|
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:563] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600026 milliseconds before timing out. |
|
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:563] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600092 milliseconds before timing out. |
|
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:563] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600086 milliseconds before timing out. |
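
These watchdog errors mean a collective sat at the 10-minute NCCL timeout without completing: the first pipeline stage's ranks (0-7 on ip-26-0-169-139) timed out on a _REDUCE_SCATTER_BASE, while the second stage's ranks (8-15 on ip-26-0-172-73) timed out on the RECV of pipeline activation metadata. ProcessGroupNCCL then aborts the communicators, which is what the Python tracebacks below report. The 600000 ms figure is the process-group timeout; in plain PyTorch it is set when the process group is created (nanotron may configure it elsewhere), for example:

    # Illustration only: where the 600000 ms watchdog limit normally comes from.
    from datetime import timedelta
    import torch.distributed as dist

    # dist.init_process_group(backend="nccl", timeout=timedelta(minutes=10))
    # Raising the timeout only delays the abort; the pattern here (ranks stuck for
    # 10 minutes on a reduce-scatter and a recv) means the collective never completed.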
|
[default5]:[rank13]: Traceback (most recent call last): |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default5]:[rank13]: trainer.train(dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default5]:[rank13]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default5]:[rank13]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default5]:[rank13]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default5]:[rank13]: output = model(**micro_batch) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default5]:[rank13]: sharded_logits = self.model( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default5]:[rank13]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default5]:[rank13]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default5]:[rank13]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default5]:[rank13]: pipeline_state.run_communication() |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default5]:[rank13]: recv_activation_tensor = recv_activation() |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default5]:[rank13]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default5]:[rank13]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default5]:[rank13]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default5]:[rank13]: dist.recv( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default5]:[rank13]: return func(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default5]:[rank13]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default5]:[rank13]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
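Note: every rank of the second pipeline stage fails with this same traceback. The stage is blocked in `dist.recv` inside `P2P._recv_meta`, waiting for the activation metadata from its pipeline peer; when the watchdog later aborts the NCCL communicator, the pending receive surfaces as `DistBackendError`. The `NumelIn=7` in the watchdog messages further down is consistent with such a tiny metadata message. The sketch below is only an illustration of that receive-metadata-then-payload pattern, not nanotron's actual implementation; the metadata layout and dtype are assumptions.

```python
# Illustrative sketch of a blocking metadata-then-payload receive.
# The 7-element metadata layout and bfloat16 dtype are assumptions.
import torch
import torch.distributed as dist

def recv_activation(from_rank: int, device: torch.device) -> torch.Tensor:
    meta = torch.empty(7, dtype=torch.long, device=device)
    dist.recv(meta, src=from_rank)       # blocks here until the peer posts a send;
                                         # this is where the ranks above sat until
                                         # the watchdog fired
    shape = [int(s) for s in meta[:3]]   # assumed: leading entries carry the shape
    buffer = torch.empty(shape, dtype=torch.bfloat16, device=device)
    dist.recv(buffer, src=from_rank)     # then receive the activation itself
    return buffer
```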
|
[default4]:[rank12]: Traceback (most recent call last): |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default4]:[rank12]: trainer.train(dataloader) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default4]:[rank12]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default4]:[rank12]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default4]:[rank12]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default4]:[rank12]: output = model(**micro_batch) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default4]:[rank12]: return self._call_impl(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default4]:[rank12]: return forward_call(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default4]:[rank12]: sharded_logits = self.model( |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default4]:[rank12]: return self._call_impl(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default4]:[rank12]: return forward_call(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default4]:[rank12]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default4]:[rank12]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default4]:[rank12]: return self._call_impl(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default4]:[rank12]: return forward_call(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default4]:[rank12]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default4]:[rank12]: pipeline_state.run_communication() |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default4]:[rank12]: recv_activation_tensor = recv_activation() |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default4]:[rank12]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default4]:[rank12]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default4]:[rank12]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default4]:[rank12]: dist.recv( |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default4]:[rank12]: return func(*args, **kwargs) |
|
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default4]:[rank12]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default4]:[rank12]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default1]:[rank9]: Traceback (most recent call last): |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default1]:[rank9]: trainer.train(dataloader) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default1]:[rank9]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default1]:[rank9]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default1]:[rank9]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default1]:[rank9]: output = model(**micro_batch) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank9]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank9]: return forward_call(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default1]:[rank9]: sharded_logits = self.model( |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank9]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank9]: return forward_call(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default1]:[rank9]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default1]:[rank9]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank9]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank9]: return forward_call(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default1]:[rank9]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default1]:[rank9]: pipeline_state.run_communication() |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default1]:[rank9]: recv_activation_tensor = recv_activation() |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default1]:[rank9]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default1]:[rank9]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default1]:[rank9]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default1]:[rank9]: dist.recv( |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default1]:[rank9]: return func(*args, **kwargs) |
|
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default1]:[rank9]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default1]:[rank9]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default0]:[rank8]: Traceback (most recent call last): |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default0]:[rank8]: trainer.train(dataloader) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default0]:[rank8]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default0]:[rank8]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default0]:[rank8]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default0]:[rank8]: output = model(**micro_batch) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank8]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank8]: return forward_call(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default0]:[rank8]: sharded_logits = self.model( |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank8]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank8]: return forward_call(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default0]:[rank8]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default0]:[rank8]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank8]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank8]: return forward_call(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default0]:[rank8]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default0]:[rank8]: pipeline_state.run_communication() |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default0]:[rank8]: recv_activation_tensor = recv_activation() |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default0]:[rank8]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default0]:[rank8]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default0]:[rank8]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default0]:[rank8]: dist.recv( |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default0]:[rank8]: return func(*args, **kwargs) |
|
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default0]:[rank8]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default0]:[rank8]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default5]:[rank13]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600028 milliseconds before timing out. |
|
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04a285c897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f04a3b35c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f04a3b3aa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f04a3b3bdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f04ef5d4e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #5: <unknown function> + 0x8609 (0x7f04f461b609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #6: clone + 0x43 (0x7f04f43e6353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
|
[default5]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default5]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600028 milliseconds before timing out. |
|
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04a285c897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f04a3b35c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f04a3b3aa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f04a3b3bdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f04ef5d4e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #5: <unknown function> + 0x8609 (0x7f04f461b609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #6: clone + 0x43 (0x7f04f43e6353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
|
[default5]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f04a285c897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: <unknown function> + 0xe32119 (0x7f04a37bf119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: <unknown function> + 0xd3e95 (0x7f04ef5d4e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #3: <unknown function> + 0x8609 (0x7f04f461b609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #4: clone + 0x43 (0x7f04f43e6353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
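Note: once `checkTimeout` fires, the ProcessGroupNCCL watchdog thread aborts the communicator and takes the whole process down (`terminate called after throwing an instance of 'c10::DistBackendError'`), so every rank in the group dies even if only one peer actually stalled. `[PG 4 Rank 1]` is the rank within that process group (the second pipeline stage), not the global rank, which is why global ranks 8-15 all report as "Rank 1". When chasing this kind of hang it can help to identify the straggler before the 10-minute NCCL timeout; one option, sketched below and not part of this run, is a gloo-backed monitored barrier, since `monitored_barrier` names the ranks that failed to join (the group and 60-second timeout are illustrative).

```python
# Hedged sketch: a gloo-backed monitored barrier, run e.g. once per training step,
# reports which rank failed to join instead of a generic NCCL timeout much later.
import datetime
import torch.distributed as dist

debug_group = dist.new_group(backend="gloo")  # monitored_barrier requires gloo

def straggler_check(step: int) -> None:
    try:
        dist.monitored_barrier(group=debug_group,
                               timeout=datetime.timedelta(seconds=60),
                               wait_all_ranks=True)
    except RuntimeError as err:
        # The error message lists the ranks that did not reach the barrier.
        print(f"[step {step}] straggler detected: {err}", flush=True)
        raise
```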
|
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default4]:[rank12]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600041 milliseconds before timing out. |
|
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fe6427d1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fe643aaac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fe643aafa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fe643ab0dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fe68f549e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default4]:frame #5: <unknown function> + 0x8609 (0x7fe694590609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default4]:frame #6: clone + 0x43 (0x7fe69435b353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default4]: |
|
[default4]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default4]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600041 milliseconds before timing out. |
|
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fe6427d1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fe643aaac62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fe643aafa80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fe643ab0dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fe68f549e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default4]:frame #5: <unknown function> + 0x8609 (0x7fe694590609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default4]:frame #6: clone + 0x43 (0x7fe69435b353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default4]: |
|
[default4]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fe6427d1897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default4]:frame #1: <unknown function> + 0xe32119 (0x7fe643734119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default4]:frame #2: <unknown function> + 0xd3e95 (0x7fe68f549e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default4]:frame #3: <unknown function> + 0x8609 (0x7fe694590609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default4]:frame #4: clone + 0x43 (0x7fe69435b353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default4]: |
|
[default3]:[rank11]: Traceback (most recent call last): |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default3]:[rank11]: trainer.train(dataloader) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default3]:[rank11]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default3]:[rank11]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default3]:[rank11]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default3]:[rank11]: output = model(**micro_batch) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default3]:[rank11]: sharded_logits = self.model( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default3]:[rank11]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default3]:[rank11]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank11]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank11]: return forward_call(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default3]:[rank11]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default3]:[rank11]: pipeline_state.run_communication() |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default3]:[rank11]: recv_activation_tensor = recv_activation() |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default3]:[rank11]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default3]:[rank11]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default3]:[rank11]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default3]:[rank11]: dist.recv( |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default3]:[rank11]: return func(*args, **kwargs) |
|
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default3]:[rank11]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default3]:[rank11]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default6]:[rank14]: Traceback (most recent call last): |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default6]:[rank14]: trainer.train(dataloader) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default6]:[rank14]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default6]:[rank14]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default6]:[rank14]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default6]:[rank14]: output = model(**micro_batch) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default6]:[rank14]: sharded_logits = self.model( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default6]:[rank14]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default6]:[rank14]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default6]:[rank14]: return self._call_impl(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default6]:[rank14]: return forward_call(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default6]:[rank14]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default6]:[rank14]: pipeline_state.run_communication() |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default6]:[rank14]: recv_activation_tensor = recv_activation() |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default6]:[rank14]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default6]:[rank14]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default6]:[rank14]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default6]:[rank14]: dist.recv( |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default6]:[rank14]: return func(*args, **kwargs) |
|
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default6]:[rank14]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default6]:[rank14]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default7]:[rank15]: Traceback (most recent call last): |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default7]:[rank15]: trainer.train(dataloader) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default7]:[rank15]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default7]:[rank15]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default7]:[rank15]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default7]:[rank15]: output = model(**micro_batch) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank15]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank15]: return forward_call(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default7]:[rank15]: sharded_logits = self.model( |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank15]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank15]: return forward_call(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default7]:[rank15]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default7]:[rank15]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default7]:[rank15]: return self._call_impl(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default7]:[rank15]: return forward_call(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default7]:[rank15]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default7]:[rank15]: pipeline_state.run_communication() |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default7]:[rank15]: recv_activation_tensor = recv_activation() |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default7]:[rank15]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default7]:[rank15]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default7]:[rank15]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default7]:[rank15]: dist.recv( |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default7]:[rank15]: return func(*args, **kwargs) |
|
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default7]:[rank15]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default7]:[rank15]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default0]:[rank8]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default0]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc8ba759897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default0]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fc8bba32c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fc8bba37a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fc8bba38dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #4: <unknown function> + 0xd3e95 (0x7fc9074d1e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default0]:frame #5: <unknown function> + 0x8609 (0x7fc90c518609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default0]:frame #6: clone + 0x43 (0x7fc90c2e3353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default0]: |
|
[default0]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default0]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default0]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc8ba759897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default0]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fc8bba32c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fc8bba37a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fc8bba38dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #4: <unknown function> + 0xd3e95 (0x7fc9074d1e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default0]:frame #5: <unknown function> + 0x8609 (0x7fc90c518609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default0]:frame #6: clone + 0x43 (0x7fc90c2e3353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default0]: |
|
[default0]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default0]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fc8ba759897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default0]:frame #1: <unknown function> + 0xe32119 (0x7fc8bb6bc119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default0]:frame #2: <unknown function> + 0xd3e95 (0x7fc9074d1e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default0]:frame #3: <unknown function> + 0x8609 (0x7fc90c518609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default0]:frame #4: clone + 0x43 (0x7fc90c2e3353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default0]: |
|
[default2]:[rank10]: Traceback (most recent call last): |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default2]:[rank10]: trainer.train(dataloader) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default2]:[rank10]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default2]:[rank10]: output = model(**micro_batch) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default2]:[rank10]: sharded_logits = self.model( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default2]:[rank10]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default2]:[rank10]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank10]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank10]: return forward_call(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward |
|
[default2]:[rank10]: new_kwargs[name] = recv_from_pipeline_state_buffer( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer |
|
[default2]:[rank10]: pipeline_state.run_communication() |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 150, in run_communication |
|
[default2]:[rank10]: recv_activation_tensor = recv_activation() |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 31, in __call__ |
|
[default2]:[rank10]: return self.p2p.recv_tensors(num_tensors=1, from_rank=self.from_rank)[0] |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 353, in recv_tensors |
|
[default2]:[rank10]: buffers, futures = self.irecv_tensors(num_tensors=num_tensors, from_rank=from_rank, tag=tag) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 326, in irecv_tensors |
|
[default2]:[rank10]: meta = self._recv_meta(from_rank=from_rank, tag=tag) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 269, in _recv_meta |
|
[default2]:[rank10]: dist.recv( |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper |
|
[default2]:[rank10]: return func(*args, **kwargs) |
|
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1932, in recv |
|
[default2]:[rank10]: pg.recv([tensor], group_src_rank, tag).wait() |
|
[default2]:[rank10]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1. |
|
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default1]:[rank9]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600078 milliseconds before timing out. |
|
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0f5409a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f0f55373c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f0f55378a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f0f55379dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f0fa0e12e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default1]:frame #5: <unknown function> + 0x8609 (0x7f0fa5e59609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default1]:frame #6: clone + 0x43 (0x7f0fa5c24353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default1]: |
|
[default1]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default1]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600078 milliseconds before timing out. |
|
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0f5409a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f0f55373c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f0f55378a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f0f55379dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f0fa0e12e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default1]:frame #5: <unknown function> + 0x8609 (0x7f0fa5e59609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default1]:frame #6: clone + 0x43 (0x7f0fa5c24353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default1]: |
|
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f0f5409a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default1]:frame #1: <unknown function> + 0xe32119 (0x7f0f54ffd119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7f0fa0e12e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default1]:frame #3: <unknown function> + 0x8609 (0x7f0fa5e59609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default1]:frame #4: clone + 0x43 (0x7f0fa5c24353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default1]: |
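Every watchdog entry above and below reports Timeout(ms)=600000, i.e. the default 10-minute NCCL timeout: the RECV with sequence number 13834 never got a matching send, so the watchdog aborted the communicator and the uncaught c10::DistBackendError took the whole process down. If a hang like this were caused by a slow peer rather than a dead one, the window could be widened when the process group is created; the sketch below uses the generic torch.distributed knob (whether and where nanotron exposes this in its config is an assumption left open here).

# Sketch: raising the collective timeout at process-group creation.
from datetime import timedelta
import torch.distributed as dist

dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))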
|
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default3]:[rank11]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600051 milliseconds before timing out. |
|
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f22361b9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f2237492c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f2237497a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f2237498dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f2282f31e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #5: <unknown function> + 0x8609 (0x7f2287f78609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #6: clone + 0x43 (0x7f2287d43353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
|
[default3]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default3]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600051 milliseconds before timing out. |
|
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f22361b9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f2237492c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f2237497a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f2237498dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f2282f31e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #5: <unknown function> + 0x8609 (0x7f2287f78609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #6: clone + 0x43 (0x7f2287d43353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
|
[default3]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f22361b9897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: <unknown function> + 0xe32119 (0x7f223711c119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: <unknown function> + 0xd3e95 (0x7f2282f31e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #3: <unknown function> + 0x8609 (0x7f2287f78609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #4: clone + 0x43 (0x7f2287d43353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
|
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default6]:[rank14]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8e58b09897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8e59de2c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8e59de7a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8e59de8dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f8ea5881e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #5: <unknown function> + 0x8609 (0x7f8eaa8c8609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #6: clone + 0x43 (0x7f8eaa693353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default6]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default6]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600058 milliseconds before timing out. |
|
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8e58b09897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f8e59de2c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f8e59de7a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f8e59de8dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f8ea5881e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #5: <unknown function> + 0x8609 (0x7f8eaa8c8609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #6: clone + 0x43 (0x7f8eaa693353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default6]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f8e58b09897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: <unknown function> + 0xe32119 (0x7f8e59a6c119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: <unknown function> + 0xd3e95 (0x7f8ea5881e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #3: <unknown function> + 0x8609 (0x7f8eaa8c8609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #4: clone + 0x43 (0x7f8eaa693353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default7]:[rank15]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600055 milliseconds before timing out. |
|
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb12bea8897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb12d181c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb12d186a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb12d187dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7fb178c20e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default7]:frame #5: <unknown function> + 0x8609 (0x7fb17dc67609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default7]:frame #6: clone + 0x43 (0x7fb17da32353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default7]: |
|
[default7]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default7]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600055 milliseconds before timing out. |
|
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb12bea8897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fb12d181c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fb12d186a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fb12d187dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7fb178c20e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default7]:frame #5: <unknown function> + 0x8609 (0x7fb17dc67609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default7]:frame #6: clone + 0x43 (0x7fb17da32353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default7]: |
|
[default7]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fb12bea8897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default7]:frame #1: <unknown function> + 0xe32119 (0x7fb12ce0b119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default7]:frame #2: <unknown function> + 0xd3e95 (0x7fb178c20e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default7]:frame #3: <unknown function> + 0x8609 (0x7fb17dc67609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default7]:frame #4: clone + 0x43 (0x7fb17da32353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default7]: |
|
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:1537] [PG 4 Rank 1] Timeout at NCCL work: 13834, last enqueued NCCL work: 13834, last completed NCCL work: 13833. |
|
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down. |
|
[default2]:[rank10]:[E ProcessGroupNCCL.cpp:1414] [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600029 milliseconds before timing out. |
|
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f020b4a2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f020c77bc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f020c780a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f020c781dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7f025821ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default2]:frame #5: <unknown function> + 0x8609 (0x7f025d261609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default2]:frame #6: clone + 0x43 (0x7f025d02c353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default2]: |
|
[default2]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default2]: what(): [PG 4 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=13834, OpType=RECV, NumelIn=7, NumelOut=7, Timeout(ms)=600000) ran for 600029 milliseconds before timing out. |
|
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f020b4a2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f020c77bc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f020c780a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f020c781dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7f025821ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default2]:frame #5: <unknown function> + 0x8609 (0x7f025d261609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default2]:frame #6: clone + 0x43 (0x7f025d02c353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default2]: |
|
[default2]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f020b4a2897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default2]:frame #1: <unknown function> + 0xe32119 (0x7f020c405119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default2]:frame #2: <unknown function> + 0xd3e95 (0x7f025821ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default2]:frame #3: <unknown function> + 0x8609 (0x7f025d261609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default2]:frame #4: clone + 0x43 (0x7f025d02c353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default2]: |
|
E0702 19:49:44.063000 140269937952576 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 0 (pid: 734125) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10 |
|
Traceback (most recent call last): |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module> |
|
sys.exit(main()) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
|
return f(*args, **kwargs) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
|
run(args) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
|
elastic_launch( |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
|
return launch_agent(self._config, self._entrypoint, list(args)) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
|
raise ChildFailedError( |
|
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
|
============================================================ |
|
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED |
|
------------------------------------------------------------ |
|
Failures: |
|
[1]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 9 (local_rank: 1) |
|
exitcode : -6 (pid: 734126) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734126 |
|
[2]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 10 (local_rank: 2) |
|
exitcode : -6 (pid: 734127) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734127 |
|
[3]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 11 (local_rank: 3) |
|
exitcode : -6 (pid: 734128) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734128 |
|
[4]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 12 (local_rank: 4) |
|
exitcode : -6 (pid: 734129) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734129 |
|
[5]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 13 (local_rank: 5) |
|
exitcode : -6 (pid: 734130) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734130 |
|
[6]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 14 (local_rank: 6) |
|
exitcode : -6 (pid: 734131) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734131 |
|
[7]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 15 (local_rank: 7) |
|
exitcode : -6 (pid: 734132) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734132 |
|
------------------------------------------------------------ |
|
Root Cause (first observed failure): |
|
[0]: |
|
time : 2024-07-02_19:49:43 |
|
host : ip-26-0-172-73.ec2.internal |
|
rank : 8 (local_rank: 0) |
|
exitcode : -6 (pid: 734125) |
|
error_file: <N/A> |
|
traceback : Signal 6 (SIGABRT) received by PID 734125 |
|
============================================================ |
|
srun: error: ip-26-0-172-73: task 1: Exited with exit code 1 |
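The elastic summary above is torchrun's roll-up for the node hosting ranks 8-15: every local worker exited with code -6, meaning it was killed by signal 6 (SIGABRT), which is what the watchdog thread's uncaught c10::DistBackendError produces via std::terminate; srun then reports the whole task as exit code 1. A quick check of that signal mapping:

import signal
print(signal.Signals(6).name)  # -> SIGABRT, matching "exitcode : -6" in the summary above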
|
[default1]:[rank1]: Traceback (most recent call last): |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default1]:[rank1]: trainer.train(dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default1]:[rank1]: send_activation() |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default1]:[rank1]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default1]:[rank1]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default1]:[rank1]: dist.isend( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default1]:[rank1]: return pg.send([tensor], dst, tag) |
|
[default1]:[rank1]: RuntimeError: Unconvertible NCCL type |
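The first pipeline stage (ranks 0-7) fails differently: dist.isend raises "RuntimeError: Unconvertible NCCL type", a message the NCCL backend emits when it cannot map the tensor it is asked to send onto an NCCL data type. Since thousands of identical sends succeeded earlier in this run and the error only shows up after the peer stage's communicators were aborted, it is more plausibly a secondary symptom of the dead receiver than a genuine dtype problem, though the log alone cannot prove that. A small diagnostic wrapper one could drop around the send to record what was being shipped when it fails (illustrative only, not part of nanotron):

import torch.distributed as dist

def checked_isend(tensor, dst, tag=0):
    try:
        return dist.isend(tensor, dst=dst, tag=tag)
    except RuntimeError as err:
        # Report the offending tensor's dtype and shape before re-raising,
        # so a real dtype issue is distinguishable from a torn-down peer.
        print(f"isend to rank {dst} failed: dtype={tensor.dtype}, "
              f"shape={tuple(tensor.shape)}: {err}")
        raise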
|
[default6]:[rank6]: Traceback (most recent call last): |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default6]:[rank6]: trainer.train(dataloader) |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default6]:[rank6]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default6]:[rank6]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default6]:[rank6]: send_activation() |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default6]:[rank6]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default6]:[rank6]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default6]:[rank6]: dist.isend( |
|
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default6]:[rank6]: return pg.send([tensor], dst, tag) |
|
[default6]:[rank6]: RuntimeError: Unconvertible NCCL type |
|
[default7]:[rank7]: Traceback (most recent call last): |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default7]:[rank7]: trainer.train(dataloader) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default7]:[rank7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default7]:[rank7]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default7]:[rank7]: send_activation() |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default7]:[rank7]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default7]:[rank7]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default7]:[rank7]: dist.isend( |
|
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default7]:[rank7]: return pg.send([tensor], dst, tag) |
|
[default7]:[rank7]: RuntimeError: Unconvertible NCCL type |
|
[default5]:[rank5]: Traceback (most recent call last): |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default5]:[rank5]: trainer.train(dataloader) |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default5]:[rank5]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default5]:[rank5]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default5]:[rank5]: send_activation() |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default5]:[rank5]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default5]:[rank5]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default5]:[rank5]: dist.isend( |
|
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default5]:[rank5]: return pg.send([tensor], dst, tag) |
|
[default5]:[rank5]: RuntimeError: Unconvertible NCCL type |
|
[default4]:[rank4]: Traceback (most recent call last): |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default4]:[rank4]: trainer.train(dataloader) |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default4]:[rank4]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default4]:[rank4]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default4]:[rank4]: send_activation() |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default4]:[rank4]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default4]:[rank4]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default4]:[rank4]: dist.isend( |
|
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default4]:[rank4]: return pg.send([tensor], dst, tag) |
|
[default4]:[rank4]: RuntimeError: Unconvertible NCCL type |
|
[default2]:[rank2]: Traceback (most recent call last): |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default2]:[rank2]: trainer.train(dataloader) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default2]:[rank2]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default2]:[rank2]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default2]:[rank2]: send_activation() |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default2]:[rank2]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default2]:[rank2]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default2]:[rank2]: dist.isend( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default2]:[rank2]: return pg.send([tensor], dst, tag) |
|
[default2]:[rank2]: RuntimeError: Unconvertible NCCL type |
|
[default3]:[rank3]: Traceback (most recent call last): |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default3]:[rank3]: trainer.train(dataloader) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default3]:[rank3]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default3]:[rank3]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 258, in train_batch_iter |
|
[default3]:[rank3]: send_activation() |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 22, in __call__ |
|
[default3]:[rank3]: self.p2p.send_tensors([self.activation], to_rank=self.to_rank) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors |
|
[default3]:[rank3]: futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 306, in isend_tensors |
|
[default3]:[rank3]: dist.isend( |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1809, in isend |
|
[default3]:[rank3]: return pg.send([tensor], dst, tag) |
|
[default3]:[rank3]: RuntimeError: Unconvertible NCCL type |
|
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 3] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140. |
|
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:577] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:583] [Rank 3] To avoid data inconsistency, we are taking the entire process down. |
|
[default3]:[rank3]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600036 milliseconds before timing out. |
|
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f3d49dba897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f3d4b093c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f3d4b098a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f3d4b099dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f3d96b32e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #5: <unknown function> + 0x8609 (0x7f3d9bb79609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #6: clone + 0x43 (0x7f3d9b944353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
|
[default3]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default3]: what(): [PG 2 Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600036 milliseconds before timing out. |
|
[default3]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f3d49dba897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f3d4b093c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f3d4b098a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f3d4b099dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #4: <unknown function> + 0xd3e95 (0x7f3d96b32e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #5: <unknown function> + 0x8609 (0x7f3d9bb79609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #6: clone + 0x43 (0x7f3d9b944353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
|
[default3]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default3]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f3d49dba897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default3]:frame #1: <unknown function> + 0xe32119 (0x7f3d4ad1d119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default3]:frame #2: <unknown function> + 0xd3e95 (0x7f3d96b32e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default3]:frame #3: <unknown function> + 0x8609 (0x7f3d9bb79609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default3]:frame #4: clone + 0x43 (0x7f3d9b944353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default3]: |
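After the failed sends, the same first-stage ranks also time out on a _REDUCE_SCATTER_BASE collective (this dump and the ones for ranks 4-7 that follow). The reported sizes are consistent with an 8-way reduce-scatter, matching the tp=8 tensor parallelism of this run: each rank feeds in 33,554,432 elements and keeps a 4,194,304-element shard, exactly one eighth of the input.

numel_in, numel_out, tp = 33_554_432, 4_194_304, 8
assert numel_in == tp * numel_out  # each tensor-parallel rank keeps 1/8 of the reduce-scatter input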
|
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 6] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140. |
|
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:577] [Rank 6] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:583] [Rank 6] To avoid data inconsistency, we are taking the entire process down. |
|
[default6]:[rank6]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 6] Process group watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600085 milliseconds before timing out. |
|
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f00a568a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f00a6963c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f00a6968a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f00a6969dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f00f2402e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #5: <unknown function> + 0x8609 (0x7f00f7449609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #6: clone + 0x43 (0x7f00f7214353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default6]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default6]: what(): [PG 2 Rank 6] Process group watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600085 milliseconds before timing out. |
|
[default6]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f00a568a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f00a6963c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f00a6968a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f00a6969dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #4: <unknown function> + 0xd3e95 (0x7f00f2402e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #5: <unknown function> + 0x8609 (0x7f00f7449609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #6: clone + 0x43 (0x7f00f7214353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default6]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default6]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f00a568a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default6]:frame #1: <unknown function> + 0xe32119 (0x7f00a65ed119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default6]:frame #2: <unknown function> + 0xd3e95 (0x7f00f2402e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default6]:frame #3: <unknown function> + 0x8609 (0x7f00f7449609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default6]:frame #4: clone + 0x43 (0x7f00f7214353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default6]: |
|
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 5] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140. |
|
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:577] [Rank 5] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. |
|
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:583] [Rank 5] To avoid data inconsistency, we are taking the entire process down. |
|
[default5]:[rank5]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600026 milliseconds before timing out. |
|
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5185b95897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f5186e6ec62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f5186e73a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f5186e74dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f51d290de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #5: <unknown function> + 0x8609 (0x7f51d7954609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #6: clone + 0x43 (0x7f51d771f353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
|
[default5]:terminate called after throwing an instance of 'c10::DistBackendError' |
|
[default5]: what(): [PG 2 Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600026 milliseconds before timing out. |
|
[default5]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5185b95897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f5186e6ec62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f5186e73a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f5186e74dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #4: <unknown function> + 0xd3e95 (0x7f51d290de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #5: <unknown function> + 0x8609 (0x7f51d7954609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #6: clone + 0x43 (0x7f51d771f353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
|
[default5]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first): |
|
[default5]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f5185b95897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so) |
|
[default5]:frame #1: <unknown function> + 0xe32119 (0x7f5186af8119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so) |
|
[default5]:frame #2: <unknown function> + 0xd3e95 (0x7f51d290de95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6) |
|
[default5]:frame #3: <unknown function> + 0x8609 (0x7f51d7954609 in /lib/x86_64-linux-gnu/libpthread.so.0) |
|
[default5]:frame #4: clone + 0x43 (0x7f51d771f353 in /lib/x86_64-linux-gnu/libc.so.6) |
|
[default5]: |
|
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 4] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:577] [Rank 4] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:583] [Rank 4] To avoid data inconsistency, we are taking the entire process down.
[default4]:[rank4]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa10f75d897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa110a36c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa110a3ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa110a3cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa15c4d5e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa16151c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa1612e7353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:terminate called after throwing an instance of 'c10::DistBackendError'
[default4]: what(): [PG 2 Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[default4]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa10f75d897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fa110a36c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fa110a3ba80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fa110a3cdcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #4: <unknown function> + 0xd3e95 (0x7fa15c4d5e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #5: <unknown function> + 0x8609 (0x7fa16151c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #6: clone + 0x43 (0x7fa1612e7353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default4]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default4]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fa10f75d897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default4]:frame #1: <unknown function> + 0xe32119 (0x7fa1106c0119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default4]:frame #2: <unknown function> + 0xd3e95 (0x7fa15c4d5e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default4]:frame #3: <unknown function> + 0x8609 (0x7fa16151c609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default4]:frame #4: clone + 0x43 (0x7fa1612e7353 in /lib/x86_64-linux-gnu/libc.so.6)
[default4]:
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 7] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:577] [Rank 7] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:583] [Rank 7] To avoid data inconsistency, we are taking the entire process down.
[default7]:[rank7]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4060c1a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f4061ef3c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f4061ef8a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f4061ef9dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f40ad992e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f40b29d9609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f40b27a4353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:terminate called after throwing an instance of 'c10::DistBackendError'
[default7]: what(): [PG 2 Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600092 milliseconds before timing out.
[default7]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4060c1a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f4061ef3c62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f4061ef8a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f4061ef9dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #4: <unknown function> + 0xd3e95 (0x7f40ad992e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #5: <unknown function> + 0x8609 (0x7f40b29d9609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #6: clone + 0x43 (0x7f40b27a4353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default7]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default7]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f4060c1a897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default7]:frame #1: <unknown function> + 0xe32119 (0x7f4061b7d119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default7]:frame #2: <unknown function> + 0xd3e95 (0x7f40ad992e95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default7]:frame #3: <unknown function> + 0x8609 (0x7f40b29d9609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default7]:frame #4: clone + 0x43 (0x7f40b27a4353 in /lib/x86_64-linux-gnu/libc.so.6)
[default7]:
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 2] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:577] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:583] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[default2]:[rank2]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd62b112897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fd62c3ebc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fd62c3f0a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fd62c3f1dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7fd677e8ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7fd67ced1609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7fd67cc9c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:terminate called after throwing an instance of 'c10::DistBackendError'
[default2]: what(): [PG 2 Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600086 milliseconds before timing out.
[default2]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd62b112897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7fd62c3ebc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7fd62c3f0a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7fd62c3f1dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #4: <unknown function> + 0xd3e95 (0x7fd677e8ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #5: <unknown function> + 0x8609 (0x7fd67ced1609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #6: clone + 0x43 (0x7fd67cc9c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default2]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default2]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fd62b112897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default2]:frame #1: <unknown function> + 0xe32119 (0x7fd62c075119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default2]:frame #2: <unknown function> + 0xd3e95 (0x7fd677e8ae95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default2]:frame #3: <unknown function> + 0x8609 (0x7fd67ced1609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default2]:frame #4: clone + 0x43 (0x7fd67cc9c353 in /lib/x86_64-linux-gnu/libc.so.6)
[default2]:
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:1537] [PG 2 Rank 1] Timeout at NCCL work: 175141, last enqueued NCCL work: 175197, last completed NCCL work: 175140.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:577] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:583] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[default1]:[rank1]:[E ProcessGroupNCCL.cpp:1414] [PG 2 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600062 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f6b97aa6897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f6b98d7fc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f6b98d84a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f6b98d85dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f6be481ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7f6be9865609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7f6be9630353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:terminate called after throwing an instance of 'c10::DistBackendError'
[default1]: what(): [PG 2 Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=175141, OpType=_REDUCE_SCATTER_BASE, NumelIn=33554432, NumelOut=4194304, Timeout(ms)=600000) ran for 600062 milliseconds before timing out.
[default1]:Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:565 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f6b97aa6897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7f6b98d7fc62 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x1a0 (0x7f6b98d84a80 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f6b98d85dcc in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #4: <unknown function> + 0xd3e95 (0x7f6be481ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #5: <unknown function> + 0x8609 (0x7f6be9865609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #6: clone + 0x43 (0x7f6be9630353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
[default1]:Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1418 (most recent call first):
[default1]:frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f6b97aa6897 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libc10.so)
[default1]:frame #1: <unknown function> + 0xe32119 (0x7f6b98a09119 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
[default1]:frame #2: <unknown function> + 0xd3e95 (0x7f6be481ee95 in /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/../lib/libstdc++.so.6)
[default1]:frame #3: <unknown function> + 0x8609 (0x7f6be9865609 in /lib/x86_64-linux-gnu/libpthread.so.0)
[default1]:frame #4: clone + 0x43 (0x7f6be9630353 in /lib/x86_64-linux-gnu/libc.so.6)
[default1]:
W0702 19:49:58.964000 139728973952832 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 2521965 closing signal SIGTERM
E0702 19:50:04.249000 139728973952832 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -6) local_rank: 1 (pid: 2521966) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 2 (local_rank: 2)
  exitcode : -6 (pid: 2521967)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521967
[2]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 3 (local_rank: 3)
  exitcode : -6 (pid: 2521968)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521968
[3]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 4 (local_rank: 4)
  exitcode : -6 (pid: 2521969)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521969
[4]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 5 (local_rank: 5)
  exitcode : -6 (pid: 2521970)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521970
[5]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 6 (local_rank: 6)
  exitcode : -6 (pid: 2521971)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521971
[6]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 7 (local_rank: 7)
  exitcode : -6 (pid: 2521972)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521972
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time : 2024-07-02_19:49:58
  host : ip-26-0-169-139.ec2.internal
  rank : 1 (local_rank: 1)
  exitcode : -6 (pid: 2521966)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 2521966
============================================================
srun: error: ip-26-0-169-139: task 0: Exited with exit code 1
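Note on the failure above: the ranks shown all abort on the same collective (SeqNum=175141, a _REDUCE_SCATTER_BASE), so the job is killed by the NCCL watchdog after the 10-minute limit reported in the log (Timeout(ms)=600000), not by an exception inside the training code. As a minimal, hedged sketch of how that limit could be raised while debugging a hang like this one, assuming the timeout is passed at process-group creation (whether nanotron exposes this knob through its own config is not confirmed by this log):

import datetime
import torch.distributed as dist

# Raise the collective timeout from the 600000 ms seen above to 30 minutes.
# The backend and duration here are illustrative values, not this run's settings.
dist.init_process_group(
    backend="nccl",
    timeout=datetime.timedelta(minutes=30),
)

A longer timeout only delays the watchdog; it does not fix whatever caused the reduce-scatter to stall in the first place.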
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
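The hint above is emitted by huggingface_hub when files are uploaded without its optional Rust-based transfer backend. A minimal sketch of opting in, assuming `hf_transfer` has been installed into the env-bench-cluster environment; the file and repo names below are placeholders, not taken from this run:

import os
# Must be set before huggingface_hub is imported, since the flag is read at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="log_metrics.csv",        # placeholder local file
    path_in_repo="logs/log_metrics.csv",      # placeholder destination path
    repo_id="user/bench_cluster_logs",        # placeholder repository id
)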