========================
START TIME: Tue Jul 2 19:53:08 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
 M examples/config_tiny_llama.py
 M examples/config_tiny_llama.yaml
 M examples/train_tiny_llama.sh
 M src/nanotron/models/llama.py
 M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 19:53:11.664000 140081734354752 torch/distributed/run.py:757]
W0702 19:53:11.664000 140081734354752 torch/distributed/run.py:757] *****************************************
W0702 19:53:11.664000 140081734354752 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 19:53:11.664000 140081734354752 torch/distributed/run.py:757] *****************************************
W0702 19:53:15.542000 139790828439360 torch/distributed/run.py:757]
W0702 19:53:15.542000 139790828439360 torch/distributed/run.py:757] *****************************************
W0702 19:53:15.542000 139790828439360 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 19:53:15.542000 139790828439360 torch/distributed/run.py:757] *****************************************
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config:
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: run='%date_%jobid',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: seed=42,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: step=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: consumed_train_samples=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: benchmark_csv_path=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: ignore_sanity_checks=True),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: parallelism=ParallelismArgs(dp=2,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pp=8,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pp_engine=,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp_mode=,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tp_linear_async_communication=False,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: expert_parallel_size=1),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: eos_token_id=2,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_act='silu',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_size=2048,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: initializer_range=0.02,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: intermediate_size=4096,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: is_llama_config=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: max_position_embeddings=4096,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_attention_heads=32,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_hidden_layers=24,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_key_value_heads=32,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pad_token_id=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pretraining_tp=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rms_norm_eps=1e-05,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_scaling=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_theta=10000.0,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tie_word_embeddings=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: use_cache=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: vocab_size=50257),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: init_method=RandomInit(std=0.025),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dtype=torch.bfloat16,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: make_vocab_size_divisible_by=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: ddp_bucket_cap_mb=25),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer_revision=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokenizer_max_length=None),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoint_interval=100000,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: save_initial_state=False,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: resume_checkpoint_path=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: log_level_replica='info',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: iteration_step_info_interval=1),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: train_steps=20,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: micro_batch_size=8,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: batch_accumulation_per_replica=64,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: val_check_interval=-1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: limit_val_batches=0,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: limit_test_batches=0),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: adam_beta1=0.9,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: adam_beta2=0.95,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: torch_adam_is_fused=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: name='adamW'),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: zero_stage=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: weight_decay=0.01,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: clip_grad=1.0,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: accumulate_grad_in_fp32=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_warmup_steps=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_warmup_style='linear',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_style='linear',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_steps=19,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lr_decay_starting_step=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: min_decay_lr=1e-05)),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: start_training_step=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hf_dataset_splits='train',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hf_dataset_config_name=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: dataset_overwrite_cache=False,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: text_column_name='text'),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: seed=42,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_loading_workers=32))],
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-2_tp-1_pp-8_mbz-8')),
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: lighteval=None)
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Model Config:
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: eos_token_id=2,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_act='silu',
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: hidden_size=2048,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: initializer_range=0.02,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: intermediate_size=4096,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: is_llama_config=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: max_position_embeddings=4096,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_attention_heads=32,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_hidden_layers=24,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: num_key_value_heads=32,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pad_token_id=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: pretraining_tp=1,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rms_norm_eps=1e-05,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_scaling=None,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: rope_theta=10000.0,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: tie_word_embeddings=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: use_cache=True,
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: vocab_size=50257)
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Building model..
[default0]:07/02/2024 19:53:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Setting PP block ranks...
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB)
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: Local number of parameters: 168M (320.03MiB)
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 324.04MiB. Peak allocated: 326.07MiB Peak reserved: 336.00MiB
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB)
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: Local number of parameters: 103M (196.32MiB)
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.34MiB Peak reserved: 200.00MiB
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: Local number of parameters: 168M (320.03MiB)
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 324.04MiB. Peak allocated: 326.07MiB Peak reserved: 336.00MiB
[default6]:07/02/2024 19:53:50 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: Local number of parameters: 126M (240.02MiB)
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB
[default4]:07/02/2024 19:53:50 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: Local number of parameters: 126M (240.02MiB)
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Total number of parameters: 1.21G (2312.82MiB)
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Local number of parameters: 271M (516.35MiB)
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [After model building] Memory usage: 520.36MiB. Peak allocated: 522.39MiB Peak reserved: 534.00MiB
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default0]:07/02/2024 19:53:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Parametrizing model parameters using StandardParametrizator
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB
[default2]:07/02/2024 19:53:50 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default1]:07/02/2024 19:53:51 [INFO|DP=1|PP=0|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default5]:07/02/2024 19:53:51 [INFO|DP=1|PP=2|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default7]:07/02/2024 19:53:51 [INFO|DP=1|PP=3|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default3]:07/02/2024 19:53:51 [INFO|DP=1|PP=1|TP=0|ip-26-0-171-62]: No checkpoint path provided.
[default3]:07/02/2024 19:53:51 [INFO|DP=1|PP=5|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default5]:07/02/2024 19:53:51 [INFO|DP=1|PP=6|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default1]:07/02/2024 19:53:51 [INFO|DP=1|PP=4|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default7]:07/02/2024 19:53:51 [INFO|DP=1|PP=7|TP=0|ip-26-0-171-88]: No checkpoint path provided.
[default0]:07/02/2024 19:53:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 19:53:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/02/2024 19:53:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] DP Rank 0 has 135M out of 271M (50.00%) params' optimizer states
[default0]:07/02/2024 19:53:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [ZeRO sharding] DP Rank 1 has 135M out of 271M (50.00%) params' optimizer states
[default0]:07/02/2024 19:53:55 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 19:53:55 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Using `datasets` library
[default0]:07/02/2024 19:53:55 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 19:53:55 [WARNING|DP=0|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
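As a rough cross-check of the parameter counts logged above, the following back-of-the-envelope sketch uses only the LlamaConfig values printed in this log; it ignores small terms such as RMSNorm weights, and the layer-to-stage split is inferred from the per-rank MiB figures rather than read from nanotron code.

# Sketch only: estimates the logged parameter counts from the config values above.
hidden, inter, layers, vocab = 2048, 4096, 24, 50257   # hidden_size, intermediate_size, num_hidden_layers, vocab_size
embed = vocab * hidden                                  # token embedding, ~102.9M params (the 103M on PP rank 7)
attn = hidden * 3 * hidden + hidden * hidden            # qkv_proj + o_proj, ~16.8M per layer
mlp = 2 * hidden * inter + inter * hidden               # gate/up + down_proj, ~25.2M per layer
per_layer = attn + mlp                                  # ~41.9M per decoder layer
# With pp=8 and tie_word_embeddings=True, the lm_head lives on the last stage,
# so the embedding matrix is materialized on both PP rank 0 and PP rank 7.
total = embed + layers * per_layer + embed              # ~1.21e9, matching "1.21G (2312.82MiB)" in bf16
print(round(total / 1e9, 2), round(per_layer / 1e6, 1), round(embed / 1e6, 1))

The ZeRO-1 lines then follow directly: with dp=2, each data-parallel rank keeps optimizer states for half of the first stage's 271M local parameters, i.e. the logged 135M (50.00%).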
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Training Plan] There are 1 training stages
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Stage Training Stage] start from step 1
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]:
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: [Start training] datetime: 2024-07-02 19:53:56.695094 | mbs: 8 | grad_accum: 64 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 19:53:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-62]: Memory usage: 2069.40MiB. Peak allocated 2069.40MiB. Peak reserved: 2086.00MiB
[default4]:07/02/2024 19:53:56 [WARNING|DP=0|PP=2|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 19:53:56 [WARNING|DP=0|PP=3|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 19:53:56 [WARNING|DP=1|PP=0|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 19:53:56 [WARNING|DP=1|PP=3|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 19:53:56 [WARNING|DP=0|PP=1|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 19:53:56 [WARNING|DP=1|PP=2|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 19:53:56 [WARNING|DP=1|PP=1|TP=0|ip-26-0-171-62]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 19:53:56 [WARNING|DP=1|PP=5|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 19:53:56 [WARNING|DP=0|PP=5|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 19:53:56 [WARNING|DP=1|PP=6|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 19:53:56 [WARNING|DP=0|PP=4|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 19:53:56 [WARNING|DP=0|PP=7|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 19:53:56 [WARNING|DP=1|PP=4|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 19:53:56 [WARNING|DP=1|PP=7|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
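For reference, the global_batch_size and per-step token count reported in the [Start training] line above follow directly from the config (mbs and grad_accum from TokensArgs, dp=2 from ParallelismArgs); this is plain arithmetic, not nanotron code:

mbs, grad_accum, dp, seq_len = 8, 64, 2, 4096
global_batch_size = mbs * grad_accum * dp          # 8 * 64 * 2 = 1024 samples per optimizer step
tokens_per_step = global_batch_size * seq_len      # 1024 * 4096 = 4,194,304 tokens per step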
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 19:53:57 [WARNING|DP=0|PP=6|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:[rank0]: Traceback (most recent call last):
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank0]:     trainer.train(dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank0]:     outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank0]:     outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank0]:     output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank0]:     output = model(**micro_batch)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank0]:     sharded_logits = self.model(
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank0]:     return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank0]:     hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank0]:     output = self.pp_block(**new_kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward
[default0]:[rank0]:     output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 360, in forward
[default0]:[rank0]:     qkv_states = self.qkv_proj(
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]:     return self._call_impl(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]:     return forward_call(*args, **kwargs)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward
[default0]:[rank0]:     return column_linear(
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear
[default0]:[rank0]:     return F.linear(input, weight, bias)
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 384.00 MiB. GPU
[default1]:[rank1]: Traceback (most recent call last):
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default1]:[rank1]:     trainer.train(dataloader)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default1]:[rank1]:     outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default1]:[rank1]:     outputs = self.pipeline_engine.train_batch_iter(
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default1]:[rank1]:     output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default1]:[rank1]:     output = model(**micro_batch)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default1]:[rank1]:     sharded_logits = self.model(
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default1]:[rank1]:     return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default1]:[rank1]:     hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default1]:[rank1]:     output = self.pp_block(**new_kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default1]:[rank1]:     hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default1]:[rank1]:     hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]:     return self._call_impl(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]:     return forward_call(*args, **kwargs)
[default1]:[rank1]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 128, in forward
[default1]:[rank1]:     return self.act(gate_states) * up_states
[default1]:[rank1]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB. GPU  has a total capacity of 79.33 GiB of which 93.94 MiB is free. Including non-PyTorch memory, this process has 79.22 GiB memory in use. Of the allocated memory 70.93 GiB is allocated by PyTorch, and 239.24 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
W0702 19:54:21.065000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797700 closing signal SIGTERM
W0702 19:54:21.066000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797701 closing signal SIGTERM
W0702 19:54:21.066000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797702 closing signal SIGTERM
W0702 19:54:21.066000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797703 closing signal SIGTERM
W0702 19:54:21.068000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797704 closing signal SIGTERM
W0702 19:54:21.070000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797705 closing signal SIGTERM
W0702 19:54:21.070000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3797706 closing signal SIGTERM
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
E0702 19:54:23.987000 140081734354752 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 3797699) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time       : 2024-07-02_19:54:21
  host       : ip-26-0-171-62.ec2.internal
  rank       : 0 (local_rank: 0)
  exitcode   : 1 (pid: 3797699)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Attempting to run cuBLAS, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at ../aten/src/ATen/cuda/CublasHandlePool.cpp:135.)
[default0]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
srun: error: ip-26-0-171-62: task 0: Exited with exit code 1
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]:   return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
W0702 19:54:25.875000 139785161619200 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-88.ec2.internal_786089_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
[default0]:[rank8]: Traceback (most recent call last):
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank8]:     trainer.train(dataloader)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank8]:     outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank8]:     outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank8]:     output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank8]:     output = model(**micro_batch)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]:     return self._call_impl(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]:     return forward_call(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank8]:     sharded_logits = self.model(
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]:     return self._call_impl(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]:     return forward_call(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank8]:     return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank8]:     hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]:     return self._call_impl(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]:     return forward_call(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 126, in forward
[default0]:[rank8]:     new_kwargs[name] = recv_from_pipeline_state_buffer(
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/functional.py", line 117, in recv_from_pipeline_state_buffer
[default0]:[rank8]:     pipeline_state.run_communication()
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 160, in run_communication
[default0]:[rank8]:     send_grad()
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/state.py", line 41, in __call__
[default0]:[rank8]:     self.p2p.send_tensors([self.grad], to_rank=self.to_rank)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 348, in send_tensors
[default0]:[rank8]:     futures = self.isend_tensors(tensors=tensors, to_rank=to_rank, tag=tag)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 295, in isend_tensors
[default0]:[rank8]:     self._send_meta(tensor, to_rank=to_rank, tag=tag)
[default0]:[rank8]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/p2p.py", line 221, in _send_meta
[default0]:[rank8]:     dist.send(
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[default0]:[rank8]:     return func(*args, **kwargs)
[default0]:[rank8]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1886, in send
[default0]:[rank8]:     group.send([tensor], group_dst_rank, tag).wait()
[default0]:[rank8]: torch.distributed.DistBackendError: NCCL communicator was aborted on rank 1.
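Both CUDA OOM errors above are raised on first-stage ranks (rank 0 is DP=0|PP=0, rank 1 is DP=1|PP=0) during the forward pass with micro_batch_size=8 at sequence_length=4096; the rank 8 failure is a follow-on NCCL abort once its pipeline peer died. The sketch below only applies the allocator hint quoted verbatim in the OOM message; it is an assumption about where such a setting would be placed, not part of this run, and the more direct lever for this configuration would be a smaller micro_batch_size in the YAML config (grad accumulation already supplies the 1024-sample global batch).

# Sketch only: PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA
# allocation, so in practice it belongs in the sbatch/launch script that
# invokes torchrun rather than inside run_train.py itself.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
# Note: this only mitigates fragmentation (the case the error message names);
# it does not create headroom if activations for mbs=8 simply exceed 79.33 GiB.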
W0702 19:54:26.074000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786163 closing signal SIGTERM
W0702 19:54:26.074000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786164 closing signal SIGTERM
W0702 19:54:26.075000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786165 closing signal SIGTERM
W0702 19:54:26.076000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786166 closing signal SIGTERM
W0702 19:54:26.077000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786167 closing signal SIGTERM
W0702 19:54:26.077000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786168 closing signal SIGTERM
W0702 19:54:26.078000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786169 closing signal SIGTERM
W0702 19:54:26.079000 139790828439360 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 786170 closing signal SIGTERM
W0702 19:54:29.506000 139790828439360 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_786089_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 19:54:29.517000 139790828439360 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_786089_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-171-88: task 1: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.