========================
START TIME: Tue Jul 2 21:08:45 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
M src/nanotron/models/llama.py
M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 21:08:47.731000 139855240435520 torch/distributed/run.py:757]
W0702 21:08:47.731000 139855240435520 torch/distributed/run.py:757] *****************************************
W0702 21:08:47.731000 139855240435520 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 21:08:47.731000 139855240435520 torch/distributed/run.py:757] *****************************************
[The same OMP_NUM_THREADS warning block is printed between 21:08:47 and 21:08:51 by each of the eight torchrun launcher processes, one per node (PIDs 139855240435520, 140400905824064, 139761221384000, 139863576618816, 139633795336000, 140634038429504, 140507709724480, 139670412732224).]
[default0]:07/02/2024 21:09:15 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Vocab Size Padding] Padded vocab (size: 50257) with 1 dummy tokens (new size: 50258)
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config:
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: run='%date_%jobid',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: seed=42,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: step=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: consumed_train_samples=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: benchmark_csv_path=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: ignore_sanity_checks=True),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: parallelism=ParallelismArgs(dp=32,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pp=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp=2,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pp_engine=,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp_mode=,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp_linear_async_communication=False,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: expert_parallel_size=1),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: eos_token_id=2,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_act='silu',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_size=2048,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: initializer_range=0.02,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: intermediate_size=4096,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: is_llama_config=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: max_position_embeddings=4096,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_attention_heads=32,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_hidden_layers=24,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_key_value_heads=32,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pad_token_id=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pretraining_tp=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rms_norm_eps=1e-05,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_scaling=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_theta=10000.0,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tie_word_embeddings=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: use_cache=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: vocab_size=50258),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: init_method=RandomInit(std=0.025),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dtype=torch.bfloat16,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: make_vocab_size_divisible_by=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: ddp_bucket_cap_mb=25),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer_revision=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer_max_length=None),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoint_interval=100000,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: save_initial_state=False,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: resume_checkpoint_path=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: log_level_replica='info',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration_step_info_interval=1),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: train_steps=20,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: micro_batch_size=16,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: batch_accumulation_per_replica=2,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: val_check_interval=-1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: limit_val_batches=0,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: limit_test_batches=0),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: adam_beta1=0.9,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: adam_beta2=0.95,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: torch_adam_is_fused=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: name='adamW'),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: zero_stage=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: weight_decay=0.01,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: clip_grad=1.0,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: accumulate_grad_in_fp32=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_warmup_steps=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_warmup_style='linear',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_style='linear',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_steps=19,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_starting_step=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: min_decay_lr=1e-05)),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: start_training_step=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hf_dataset_splits='train',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hf_dataset_config_name=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dataset_overwrite_cache=False,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: text_column_name='text'),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: seed=42,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_loading_workers=32))],
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-32_tp-2_pp-1_mbz-16')),
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lighteval=None)
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Model Config:
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: eos_token_id=2,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_act='silu',
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_size=2048,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: initializer_range=0.02,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: intermediate_size=4096,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: is_llama_config=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: max_position_embeddings=4096,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_attention_heads=32,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_hidden_layers=24,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_key_value_heads=32,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pad_token_id=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pretraining_tp=1,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rms_norm_eps=1e-05,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_scaling=None,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_theta=10000.0,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tie_word_embeddings=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: use_cache=True,
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: vocab_size=50258)
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Building model..
[default0]:07/02/2024 21:09:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Setting PP block ranks...
[Between 21:09:27 and 21:09:28, every one of the 64 ranks (DP=0-31, TP=0-1, across the 8 nodes) logs: No checkpoint path provided.]
[default0]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Total number of parameters: 1.11G (2116.70MiB)
[default0]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Local number of parameters: 555M (1058.35MiB)
[default0]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [After model building] Memory usage: 1082.37MiB. Peak allocated: 1182.56MiB Peak reserved: 1200.00MiB
[default0]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: No checkpoint path provided.
[default0]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Parametrizing model parameters using StandardParametrizator
[default1]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=1|ip-26-0-160-225]: Local number of parameters: 555M (1058.35MiB)
[default1]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=1|ip-26-0-160-225]: [After model building] Memory usage: 1082.37MiB. Peak allocated: 1182.56MiB Peak reserved: 1200.00MiB
[default1]:07/02/2024 21:09:27 [INFO|DP=0|PP=0|TP=1|ip-26-0-160-225]: No checkpoint path provided.
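The parameter counts reported above can be sanity-checked against the LlamaConfig that was just printed and the tp=2 split; a minimal sketch of the arithmetic (illustrative only, ignoring small terms such as the RMSNorm weights):

# Rough parameter-count check for the logged LlamaConfig (standard LLaMA layout assumed:
# q/k/v/o attention projections, gated MLP, tied input/output embedding).
hidden, inter, layers, vocab, tp = 2048, 4096, 24, 50258, 2
attn = 4 * hidden * hidden              # q, k, v, o projections
mlp = 3 * hidden * inter                # gate, up, down projections
embed = vocab * hidden                  # tied embedding matrix
total = layers * (attn + mlp) + embed   # ~1.11e9, matching "1.11G (2116.70MiB)" in bf16
local = total / tp                      # ~5.55e8 per tensor-parallel rank ("555M")
print(f"total={total/1e9:.2f}G  local={local/1e6:.0f}M  bf16~{2*total/2**20:.0f}MiB")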
[default0]:07/02/2024 21:09:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 21:09:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] Size of optimizer params per rank:
[The per-rank lines that follow are identical for DP Ranks 0-31: each rank has 17.3M out of 555M (3.12%) params' optimizer states.]
[default0]:07/02/2024 21:09:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 21:09:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Using `datasets` library
[default0]:07/02/2024 21:09:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 21:09:37 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] There are 1 training stages
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Stage Training Stage] start from step 1
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]:
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Start training] datetime: 2024-07-02 21:09:39.032041 | mbs: 16 | grad_accum: 2 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 21:09:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 3265.22MiB. Peak allocated 3265.22MiB. Peak reserved: 3318.00MiB
[From 21:09:39, as each rank prepares the roneneldan/TinyStories dataset, the same warning is repeated by every rank (DP=0-31, TP=0-1), once as a bare message and once with the rank-tagged prefix: Repo card metadata block was not found. Setting CardData to empty.]
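The per-step numbers in the [Start training] line and the ZeRO-1 shard size above follow directly from the parallelism settings in the config; a minimal sketch of the arithmetic (illustrative only):

# Relating the logged config to the logged per-step and ZeRO sharding numbers.
dp, tp, pp = 32, 2, 1                   # ParallelismArgs
mbs, grad_accum, seq_len = 16, 2, 4096  # TokensArgs
local_params = 555e6                    # per tensor-parallel rank, from the model-building log
gbs = mbs * grad_accum * dp             # 1024 samples/step, as in "global_batch_size: 1024"
tokens_per_step = gbs * seq_len         # ~4.19M tokens per optimizer step
zero1_shard = local_params / dp         # ~17.3M params of optimizer state per DP rank (3.12%)
gpus = dp * tp * pp                     # 64 GPUs, i.e. the 8 nodes x 8 GPUs in this job
print(gbs, tokens_per_step, f"{zero1_shard/1e6:.1f}M", f"{zero1_shard/local_params:.2%}", gpus)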
[default0]:[rank0]: Traceback (most recent call last):
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank0]:     trainer.train(dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank0]:     outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank0]:     outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter
[default0]:[rank0]:     for micro_batch in batch:
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in <genexpr>
[default0]:[rank0]:     batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)),
[default0]:[rank0]:   File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader
[default0]:[rank0]:     for batch in dataloader:
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__
[default0]:[rank0]:     return self._get_iterator()
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator
[default0]:[rank0]:     return _MultiProcessingDataLoaderIter(self)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__
[default0]:[rank0]:     w.start()
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start
[default0]:[rank0]:     self._popen = self._Popen(self)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
[default0]:[rank0]:     return _default_context.get_context().Process._Popen(process_obj)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
[default0]:[rank0]:     return Popen(process_obj)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
[default0]:[rank0]:     self._launch(process_obj)
[default0]:[rank0]:   File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch
[default0]:[rank0]:     self.pid = os.fork()
[default0]:[rank0]: OSError: [Errno 12] Cannot allocate memory
[The identical traceback, ending in OSError: [Errno 12] Cannot allocate memory, then follows for rank 2 and for ranks 34, 38, 36, 39 and 35; the log breaks off partway through the same traceback for rank 28.]
training_step [default4]:[rank28]: outputs = self.pipeline_engine.train_batch_iter( [default4]:[rank28]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default4]:[rank28]: for micro_batch in batch: [default4]:[rank28]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default4]:[rank28]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank28]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default4]:[rank28]: for batch in dataloader: [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default4]:[rank28]: return self._get_iterator() [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default4]:[rank28]: return _MultiProcessingDataLoaderIter(self) [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default4]:[rank28]: w.start() [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default4]:[rank28]: self._popen = self._Popen(self) [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default4]:[rank28]: return _default_context.get_context().Process._Popen(process_obj) [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default4]:[rank28]: return Popen(process_obj) [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default4]:[rank28]: self._launch(process_obj) [default4]:[rank28]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default4]:[rank28]: self.pid = os.fork() [default4]:[rank28]: OSError: [Errno 12] Cannot allocate memory [default6]:[rank30]: Traceback (most recent call last): [default6]:[rank30]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default6]:[rank30]: trainer.train(dataloader) [default6]:[rank30]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default6]:[rank30]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default6]:[rank30]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default6]:[rank30]: outputs = self.pipeline_engine.train_batch_iter( [default6]:[rank30]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default6]:[rank30]: for micro_batch in batch: [default6]:[rank30]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default6]:[rank30]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default6]:[rank30]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default6]:[rank30]: for batch in dataloader: [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default6]:[rank30]: return self._get_iterator() [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default6]:[rank30]: return _MultiProcessingDataLoaderIter(self) [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default6]:[rank30]: w.start() [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default6]:[rank30]: self._popen = self._Popen(self) [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default6]:[rank30]: return _default_context.get_context().Process._Popen(process_obj) [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank30]: return Popen(process_obj) [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default6]:[rank30]: self._launch(process_obj) [default6]:[rank30]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default6]:[rank30]: self.pid = os.fork() [default6]:[rank30]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank27]: Traceback (most recent call last): [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank27]: trainer.train(dataloader) [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank27]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default3]:[rank27]: outputs = self.pipeline_engine.train_batch_iter( [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default3]:[rank27]: for micro_batch in batch: [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[rank27]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank27]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank27]: for batch in dataloader: [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank27]: return self._get_iterator() [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank27]: return 
_MultiProcessingDataLoaderIter(self) [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank27]: w.start() [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank27]: self._popen = self._Popen(self) [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank27]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default3]:[rank27]: return Popen(process_obj) [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank27]: self._launch(process_obj) [default3]:[rank27]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank27]: self.pid = os.fork() [default3]:[rank27]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank25]: Traceback (most recent call last): [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank25]: trainer.train(dataloader) [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank25]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank25]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank25]: for micro_batch in batch: [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank25]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank25]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank25]: for batch in dataloader: [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank25]: return self._get_iterator() [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank25]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank25]: w.start() [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank25]: self._popen = self._Popen(self) [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank25]: return 
_default_context.get_context().Process._Popen(process_obj) [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank25]: return Popen(process_obj) [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank25]: self._launch(process_obj) [default1]:[rank25]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank25]: self.pid = os.fork() [default1]:[rank25]: OSError: [Errno 12] Cannot allocate memory [default7]:[rank31]: Traceback (most recent call last): [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default7]:[rank31]: trainer.train(dataloader) [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank31]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank31]: outputs = self.pipeline_engine.train_batch_iter( [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default7]:[rank31]: for micro_batch in batch: [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default7]:[rank31]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default7]:[rank31]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default7]:[rank31]: for batch in dataloader: [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank31]: return self._get_iterator() [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank31]: return _MultiProcessingDataLoaderIter(self) [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default7]:[rank31]: w.start() [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default7]:[rank31]: self._popen = self._Popen(self) [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default7]:[rank31]: return _default_context.get_context().Process._Popen(process_obj) [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default7]:[rank31]: return Popen(process_obj) [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default7]:[rank31]: self._launch(process_obj) [default7]:[rank31]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch 
[default7]:[rank31]: self.pid = os.fork() [default7]:[rank31]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank3]: Traceback (most recent call last): [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank3]: trainer.train(dataloader) [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank3]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default3]:[rank3]: outputs = self.pipeline_engine.train_batch_iter( [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default3]:[rank3]: for micro_batch in batch: [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[rank3]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank3]: for batch in dataloader: [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank3]: return self._get_iterator() [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank3]: return _MultiProcessingDataLoaderIter(self) [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank3]: w.start() [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank3]: self._popen = self._Popen(self) [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank3]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default3]:[rank3]: return Popen(process_obj) [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank3]: self._launch(process_obj) [default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank3]: self.pid = os.fork() [default3]:[rank3]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank1]: Traceback (most recent call last): [default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank1]: trainer.train(dataloader) [default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank1]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank1]: for micro_batch in batch: [default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank1]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank1]: for batch in dataloader: [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank1]: return self._get_iterator() [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank1]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank1]: w.start() [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank1]: self._popen = self._Popen(self) [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank1]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank1]: return Popen(process_obj) [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank1]: self._launch(process_obj) [default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank1]: self.pid = os.fork() [default1]:[rank1]: OSError: [Errno 12] Cannot allocate memory [default6]:[rank6]: Traceback (most recent call last): [default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default6]:[rank6]: trainer.train(dataloader) [default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default6]:[rank6]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default6]:[rank6]: outputs = self.pipeline_engine.train_batch_iter( [default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default6]:[rank6]: for micro_batch in batch: [default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default6]:[rank6]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default6]:[rank6]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default6]:[rank6]: for batch in dataloader: [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default6]:[rank6]: return self._get_iterator() [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default6]:[rank6]: return _MultiProcessingDataLoaderIter(self) [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default6]:[rank6]: w.start() [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default6]:[rank6]: self._popen = self._Popen(self) [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default6]:[rank6]: return _default_context.get_context().Process._Popen(process_obj) [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank6]: return Popen(process_obj) [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default6]:[rank6]: self._launch(process_obj) [default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default6]:[rank6]: self.pid = os.fork() [default6]:[rank6]: OSError: [Errno 12] Cannot allocate memory [default7]:[rank7]: Traceback (most recent call last): [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default7]:[rank7]: trainer.train(dataloader) [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank7]: outputs = self.pipeline_engine.train_batch_iter( [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default7]:[rank7]: for micro_batch in batch: [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default7]:[rank7]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default7]:[rank7]: for batch in dataloader: [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank7]: return self._get_iterator() [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank7]: return _MultiProcessingDataLoaderIter(self) [default7]:[rank7]: 
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default7]:[rank7]: w.start() [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default7]:[rank7]: self._popen = self._Popen(self) [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default7]:[rank7]: return _default_context.get_context().Process._Popen(process_obj) [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default7]:[rank7]: return Popen(process_obj) [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default7]:[rank7]: self._launch(process_obj) [default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default7]:[rank7]: self.pid = os.fork() [default7]:[rank7]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank26]: Traceback (most recent call last): [default5]:[rank29]: Traceback (most recent call last): [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank29]: trainer.train(dataloader) [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default5]:[rank29]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default2]:[rank26]: trainer.train(dataloader) [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default2]:[rank26]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default2]:[rank26]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default5]:[rank29]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank26]: for micro_batch in batch: [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank26]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank29]: for micro_batch in batch: [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank26]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank26]: for batch in dataloader: [default2]:[rank26]: File 
"/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank29]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank29]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default5]:[rank29]: for batch in dataloader: [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank29]: return self._get_iterator() [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank29]: return _MultiProcessingDataLoaderIter(self) [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank26]: return self._get_iterator() [default5]:[rank29]: w.start() [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank29]: self._popen = self._Popen(self) [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default2]:[rank26]: return _MultiProcessingDataLoaderIter(self) [default0]:[rank24]: Traceback (most recent call last): [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank29]: return _default_context.get_context().Process._Popen(process_obj) [default0]:[rank24]: trainer.train(dataloader) [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank26]: w.start() [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default0]:[rank24]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default5]:[rank29]: return Popen(process_obj) [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default5]:[rank29]: self._launch(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default2]:[rank26]: self._popen = self._Popen(self) [default5]:[rank29]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default5]:[rank29]: self.pid = os.fork() [default0]:[rank24]: outputs = self.pipeline_engine.train_batch_iter( [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank26]: 
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default0]:[rank24]: for micro_batch in batch: [default2]:[rank26]: return _default_context.get_context().Process._Popen(process_obj) [default5]:[rank29]: OSError: [Errno 12] Cannot allocate memory [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank24]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default2]:[rank26]: return Popen(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default0]:[rank24]: for batch in dataloader: [default2]:[rank26]: self._launch(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default0]:[rank24]: return self._get_iterator() [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default0]:[rank24]: return _MultiProcessingDataLoaderIter(self) [default2]:[rank26]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default0]:[rank24]: w.start() [default2]:[rank26]: self.pid = os.fork() [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default2]:[rank26]: OSError: [Errno 12] Cannot allocate memory [default0]:[rank24]: self._popen = self._Popen(self) [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default0]:[rank24]: return _default_context.get_context().Process._Popen(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank24]: return Popen(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default0]:[rank24]: self._launch(process_obj) [default0]:[rank24]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default0]:[rank24]: self.pid = os.fork() [default0]:[rank24]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank18]: Traceback (most recent call last): [default5]:[rank21]: Traceback (most recent call last): [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default2]:[rank18]: trainer.train(dataloader) [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default2]:[rank18]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) 
[default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank21]: trainer.train(dataloader) [default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default5]:[rank21]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default5]:[rank21]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank18]: outputs = self.pipeline_engine.train_batch_iter( [default5]:[rank21]: for micro_batch in batch: [default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default5]:[rank21]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank21]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default5]:[rank21]: for batch in dataloader: [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank21]: return self._get_iterator() [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank21]: return _MultiProcessingDataLoaderIter(self) [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default5]:[rank21]: w.start() [default2]:[rank18]: for micro_batch in batch: [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank21]: self._popen = self._Popen(self) [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default5]:[rank21]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank18]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default2]:[rank18]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank18]: for batch in dataloader: [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default2]:[rank18]: return self._get_iterator() [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default2]:[rank18]: return _MultiProcessingDataLoaderIter(self) [default2]:[rank18]: File 
"/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank18]: w.start() [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default5]:[rank21]: return Popen(process_obj) [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default5]:[rank21]: self._launch(process_obj) [default5]:[rank21]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank18]: self._popen = self._Popen(self) [default5]:[rank21]: self.pid = os.fork() [default5]:[rank21]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default2]:[rank18]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default2]:[rank18]: return Popen(process_obj) [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default2]:[rank18]: self._launch(process_obj) [default2]:[rank18]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank18]: self.pid = os.fork() [default2]:[rank18]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank19]: Traceback (most recent call last): [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank19]: trainer.train(dataloader) [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank19]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default3]:[rank19]: outputs = self.pipeline_engine.train_batch_iter( [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default3]:[rank19]: for micro_batch in batch: [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[rank19]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank19]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank19]: for batch in dataloader: [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank19]: return self._get_iterator() [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank19]: return 
_MultiProcessingDataLoaderIter(self) [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank19]: w.start() [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank19]: self._popen = self._Popen(self) [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank19]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank22]: Traceback (most recent call last): [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank19]: return Popen(process_obj) [default6]:[rank22]: trainer.train(dataloader) [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default6]:[rank22]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank19]: self._launch(process_obj) [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default6]:[rank22]: outputs = self.pipeline_engine.train_batch_iter( [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default6]:[rank22]: for micro_batch in batch: [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default6]:[rank22]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default6]:[rank22]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default6]:[rank22]: for batch in dataloader: [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default6]:[rank22]: return self._get_iterator() [default3]:[rank19]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank19]: self.pid = os.fork() [default3]:[rank19]: OSError: [Errno 12] Cannot allocate memory [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default6]:[rank22]: return _MultiProcessingDataLoaderIter(self) [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default6]:[rank22]: w.start() [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default6]:[rank22]: self._popen = self._Popen(self) [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default6]:[rank22]: return 
_default_context.get_context().Process._Popen(process_obj) [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank22]: return Popen(process_obj) [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default6]:[rank22]: self._launch(process_obj) [default6]:[rank22]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default6]:[rank22]: self.pid = os.fork() [default6]:[rank22]: OSError: [Errno 12] Cannot allocate memory [default0]:[rank16]: Traceback (most recent call last): [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default0]:[rank16]: trainer.train(dataloader) [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default0]:[rank16]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default0]:[rank16]: outputs = self.pipeline_engine.train_batch_iter( [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default0]:[rank16]: for micro_batch in batch: [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default0]:[rank16]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default0]:[rank16]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default0]:[rank16]: for batch in dataloader: [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default0]:[rank16]: return self._get_iterator() [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default0]:[rank16]: return _MultiProcessingDataLoaderIter(self) [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default0]:[rank16]: w.start() [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default0]:[rank16]: self._popen = self._Popen(self) [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default0]:[rank16]: return _default_context.get_context().Process._Popen(process_obj) [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank16]: return Popen(process_obj) [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default0]:[rank16]: self._launch(process_obj) [default0]:[rank16]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch 
[default0]:[rank16]: self.pid = os.fork() [default0]:[rank16]: OSError: [Errno 12] Cannot allocate memory [default4]:[rank20]: Traceback (most recent call last): [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default4]:[rank20]: trainer.train(dataloader) [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default4]:[rank20]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank17]: Traceback (most recent call last): [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default4]:[rank20]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank17]: trainer.train(dataloader) [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank17]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank17]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank17]: for micro_batch in batch: [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank17]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default4]:[rank20]: for micro_batch in batch: [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default4]:[rank20]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank20]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank17]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank17]: for batch in dataloader: [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank17]: return self._get_iterator() [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank17]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default4]:[rank20]: for batch in dataloader: [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank17]: w.start() [default4]:[rank20]: return self._get_iterator() [default1]:[rank17]: File 
"/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank17]: self._popen = self._Popen(self) [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank17]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank17]: return Popen(process_obj) [default4]:[rank20]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank17]: self._launch(process_obj) [default1]:[rank17]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank17]: self.pid = os.fork() [default1]:[rank17]: OSError: [Errno 12] Cannot allocate memory [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default4]:[rank20]: w.start() [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default4]:[rank20]: self._popen = self._Popen(self) [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default4]:[rank20]: return _default_context.get_context().Process._Popen(process_obj) [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default4]:[rank20]: return Popen(process_obj) [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default4]:[rank20]: self._launch(process_obj) [default4]:[rank20]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default4]:[rank20]: self.pid = os.fork() [default4]:[rank20]: OSError: [Errno 12] Cannot allocate memory [default7]:[rank23]: Traceback (most recent call last): [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default7]:[rank23]: trainer.train(dataloader) [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank23]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank23]: outputs = self.pipeline_engine.train_batch_iter( [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default7]:[rank23]: for micro_batch in batch: [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default7]:[rank23]: batch=(next(dataloader) for _ in 
range(self.n_micro_batches_per_batch)), [default7]:[rank23]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default7]:[rank23]: for batch in dataloader: [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank23]: return self._get_iterator() [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank23]: return _MultiProcessingDataLoaderIter(self) [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default7]:[rank23]: w.start() [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default7]:[rank23]: self._popen = self._Popen(self) [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default7]:[rank23]: return _default_context.get_context().Process._Popen(process_obj) [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default7]:[rank23]: return Popen(process_obj) [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default7]:[rank23]: self._launch(process_obj) [default7]:[rank23]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default7]:[rank23]: self.pid = os.fork() [default7]:[rank23]: OSError: [Errno 12] Cannot allocate memory [default4]:[rank52]: Traceback (most recent call last): [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default4]:[rank52]: trainer.train(dataloader) [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default4]:[rank52]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default4]:[rank52]: outputs = self.pipeline_engine.train_batch_iter( [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default4]:[rank52]: for micro_batch in batch: [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default4]:[rank52]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank52]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default4]:[rank52]: for batch in dataloader: [default4]:[rank52]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default4]:[rank52]: return self._get_iterator() [default4]:[rank52]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in 
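Every rank fails at the same point: the DataLoader's _MultiProcessingDataLoaderIter starts its worker processes, the worker start goes through os.fork(), and the fork is rejected with ENOMEM. The snippet below is not taken from the run; it is a minimal, hedged sketch of the same call path using a toy in-memory dataset, with a fallback to in-process loading (num_workers=0) when worker start-up fails.

# Minimal sketch (illustrative only, not the run's nanotron dataloader):
# DataLoader workers are forked when iteration starts, which is where the
# log's OSError([Errno 12]) is raised.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(1024).unsqueeze(1))

def make_loader(num_workers: int) -> DataLoader:
    # Every positive num_workers means an extra os.fork() per rank; with many
    # ranks per node this multiplies the memory the kernel must commit.
    return DataLoader(dataset, batch_size=32, num_workers=num_workers)

try:
    loader = make_loader(num_workers=4)
    batch = next(iter(loader))            # worker processes are forked here
except OSError:                           # e.g. [Errno 12] Cannot allocate memory
    loader = make_loader(num_workers=0)   # fall back to in-process loading
    batch = next(iter(loader))
print(batch[0].shape)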
[Ranks 52, 55, 56, and 61 failed with the same traceback and OSError: [Errno 12] Cannot allocate memory.]
[Ranks 54, 4, 5, and 47 failed with the same traceback and OSError: [Errno 12] Cannot allocate memory.]
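For reference, errno 12 is ENOMEM, the code the kernel returns when it cannot commit memory for the forked child. A quick standard-library check (illustrative, not part of the run) confirms the mapping behind the repeated "[Errno 12] Cannot allocate memory" message on a Linux node:

import errno
import os

# Errno 12 is ENOMEM; os.fork() raises OSError with this code when the kernel
# cannot allocate/commit memory for the child process.
assert errno.ENOMEM == 12
print(os.strerror(errno.ENOMEM))   # -> "Cannot allocate memory" on Linux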
[Rank 32 failed with the same traceback and OSError: [Errno 12] Cannot allocate memory.]
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 21:09:44 [WARNING|DP=4|PP=0|TP=1|ip-26-0-163-147]: Repo card metadata block was not found. Setting CardData to empty.
[Ranks 42 and 44 failed with the same traceback and OSError: [Errno 12] Cannot allocate memory.]
[Ranks 40, 45, and 46 failed with the same traceback and OSError: [Errno 12] Cannot allocate memory.]
"/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank43]: self._launch(process_obj) [default3]:[rank43]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank43]: self.pid = os.fork() [default3]:[rank43]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank50]: Traceback (most recent call last): [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default2]:[rank50]: trainer.train(dataloader) [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default2]:[rank50]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default2]:[rank50]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank50]: for micro_batch in batch: [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank50]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default2]:[rank50]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank50]: for batch in dataloader: [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default2]:[rank50]: return self._get_iterator() [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default2]:[rank50]: return _MultiProcessingDataLoaderIter(self) [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank50]: w.start() [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default2]:[rank50]: self._popen = self._Popen(self) [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default2]:[rank50]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default2]:[rank50]: return Popen(process_obj) [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default2]:[rank50]: self._launch(process_obj) [default2]:[rank50]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank50]: self.pid = os.fork() [default2]:[rank50]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank59]: Traceback (most recent call last): [default3]:[rank51]: Traceback (most recent call last): [default3]:[rank51]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank51]: trainer.train(dataloader) [default3]:[rank51]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank51]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank51]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default3]:[rank51]: outputs = self.pipeline_engine.train_batch_iter( [default3]:[rank51]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default3]:[rank51]: for micro_batch in batch: [default3]:[rank51]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank57]: Traceback (most recent call last): [default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank57]: trainer.train(dataloader) [default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank57]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank57]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank57]: for micro_batch in batch: [rank51]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank51]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank51]: for batch in dataloader: [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank51]: return self._get_iterator() [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank51]: return _MultiProcessingDataLoaderIter(self) [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank59]: trainer.train(dataloader) [default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank51]: w.start() [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank51]: self._popen = self._Popen(self) [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank51]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default3]:[rank51]: return 
Popen(process_obj) [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank51]: self._launch(process_obj) [default3]:[rank51]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default[default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in 3]:[rank51]: self.pid = os.fork() [default3]:[rank51]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank59]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank57]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank57]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank57]: for batch in dataloader: [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank57]: return self._get_iterator() [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank59]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank57]: return _MultiProcessingDataLoaderIter(self) [default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank57]: w.start() [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank57]: self._popen = self._Popen(self) [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank57]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank57]: return Popen(process_obj) [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank57]: self._launch(process_obj) [default1]:[rank57]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank57]: self.pid = os.fork() [default1]:[rank57]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank59]: for micro_batch in batch: [default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[rank59]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank59]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank59]: for batch in dataloader: [default3]:[rank59]: File 
"/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank59]: return self._get_iterator() [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank59]: return _MultiProcessingDataLoaderIter(self) [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank59]: w.start() [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank59]: self._popen = self._Popen(self) [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank59]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default3]:[rank59]: return Popen(process_obj) [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank59]: self._launch(process_obj) [default3]:[rank59]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank59]: self.pid = os.fork() [default3]:[rank59]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank58]: Traceback (most recent call last): [default2]:[rank58]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default7]:[rank63]: Traceback (most recent call last): [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default2]:[rank58]: trainer.train(dataloader) [default7]:[rank63]: trainer.train(dataloader) [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank63]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank63]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank58]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank58]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default2]:[rank58]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank63]: for micro_batch in batch: [default2]:[rank58]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank58]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank58]: for micro_batch in batch: [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank58]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default7]:[rank63]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default7]:[rank63]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default7]:[rank63]: for batch in dataloader: [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank63]: return self._get_iterator() [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank63]: return _MultiProcessingDataLoaderIter(self) [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default7]:[rank63]: w.start() [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default7]:[rank63]: self._popen = self._Popen(self) [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default7]:[rank63]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank58]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default2]:[rank58]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank58]: for batch in dataloader: [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default7]:[rank63]: return Popen(process_obj) [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default7]:[rank63]: self._launch(process_obj) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank63]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank58]: return self._get_iterator() [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank63]: self.pid = os.fork() [default7]:[rank63]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank58]: return _MultiProcessingDataLoaderIter(self) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank58]: w.start() [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default2]:[rank58]: self._popen = self._Popen(self) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default2]:[rank58]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", 
line 281, in _Popen [default2]:[rank58]: return Popen(process_obj) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default2]:[rank58]: self._launch(process_obj) [default2]:[rank58]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank58]: self.pid = os.fork() [default2]:[rank58]: OSError: [Errno 12] Cannot allocate memory [default0]:[rank48]: Traceback (most recent call last): [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default0]:[rank48]: trainer.train(dataloader) [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default0]:[rank48]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default0]:[rank48]: outputs = self.pipeline_engine.train_batch_iter( [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default0]:[rank48]: for micro_batch in batch: [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default0]:[rank48]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default0]:[rank48]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default0]:[rank48]: for batch in dataloader: [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default0]:[rank48]: return self._get_iterator() [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default0]:[rank48]: return _MultiProcessingDataLoaderIter(self) [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default0]:[rank48]: w.start() [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default0]:[rank48]: self._popen = self._Popen(self) [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default0]:[rank48]: return _default_context.get_context().Process._Popen(process_obj) [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank48]: return Popen(process_obj) [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default0]:[rank48]: self._launch(process_obj) [default0]:[rank48]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default0]:[rank48]: self.pid = os.fork() [default0]:[rank48]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank49]: Traceback (most recent call last): [default1]:[rank49]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank49]: trainer.train(dataloader) [default1]:[rank49]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank49]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank49]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank49]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank49]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank49]: for micro_batch in batch: [default1]:[rank49]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank49]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank49]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank49]: for batch in dataloader: [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank49]: return self._get_iterator() [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank49]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank49]: w.start() [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank49]: self._popen = self._Popen(self) [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank49]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank49]: return Popen(process_obj) [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank49]: self._launch(process_obj) [default1]:[rank49]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank49]: self.pid = os.fork() [default1]:[rank49]: OSError: [Errno 12] Cannot allocate memory [default5]:[rank53]: Traceback (most recent call last): [default5]:[rank53]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank53]: trainer.train(dataloader) [default5]:[rank53]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default5]:[rank53]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default5]:[rank53]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default5]:[rank53]: outputs = self.pipeline_engine.train_batch_iter( [default5]:[rank53]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default5]:[rank53]: for micro_batch in batch: [default5]:[rank53]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default5]:[rank53]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank53]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default5]:[rank53]: for batch in dataloader: [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank53]: return self._get_iterator() [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank53]: return _MultiProcessingDataLoaderIter(self) [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default5]:[rank53]: w.start() [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank53]: self._popen = self._Popen(self) [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default5]:[rank53]: return _default_context.get_context().Process._Popen(process_obj) [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default5]:[rank53]: return Popen(process_obj) [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default5]:[rank53]: self._launch(process_obj) [default5]:[rank53]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default5]:[rank53]: self.pid = os.fork() [default5]:[rank53]: OSError: [Errno 12] Cannot allocate memory [default0]:[rank8]: Traceback (most recent call last): [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default0]:[rank8]: trainer.train(dataloader) [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default0]:[rank8]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default0]:[rank8]: outputs = self.pipeline_engine.train_batch_iter( [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default0]:[rank8]: for micro_batch in batch: [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default0]:[rank8]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default0]:[rank8]: for batch in dataloader: [default0]:[rank8]: 
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default0]:[rank8]: return self._get_iterator() [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default0]:[rank8]: return _MultiProcessingDataLoaderIter(self) [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default0]:[rank8]: w.start() [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default0]:[rank8]: self._popen = self._Popen(self) [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default0]:[rank8]: return _default_context.get_context().Process._Popen(process_obj) [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default0]:[rank8]: return Popen(process_obj) [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default0]:[rank8]: self._launch(process_obj) [default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default0]:[rank8]: self.pid = os.fork() [default0]:[rank8]: OSError: [Errno 12] Cannot allocate memory [default3]:[rank11]: Traceback (most recent call last): [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default3]:[rank11]: trainer.train(dataloader) [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default3]:[rank11]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default3]:[rank11]: outputs = self.pipeline_engine.train_batch_iter( [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default3]:[rank11]: for micro_batch in batch: [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default3]:[rank11]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default3]:[rank11]: for batch in dataloader: [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default3]:[rank11]: return self._get_iterator() [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default3]:[rank11]: return _MultiProcessingDataLoaderIter(self) [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default3]:[rank11]: 
w.start() [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default3]:[rank11]: self._popen = self._Popen(self) [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default3]:[rank11]: return _default_context.get_context().Process._Popen(process_obj) [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default3]:[rank11]: return Popen(process_obj) [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default3]:[rank11]: self._launch(process_obj) [default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default3]:[rank11]: self.pid = os.fork() [default3]:[rank11]: OSError: [Errno 12] Cannot allocate memory [default7]:[rank15]: Traceback (most recent call last): [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default7]:[rank15]: trainer.train(dataloader) [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default7]:[rank15]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default7]:[rank15]: outputs = self.pipeline_engine.train_batch_iter( [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default7]:[rank15]: for micro_batch in batch: [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default7]:[rank15]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default7]:[rank15]: for batch in dataloader: [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default7]:[rank15]: return self._get_iterator() [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default7]:[rank15]: return _MultiProcessingDataLoaderIter(self) [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default7]:[rank15]: w.start() [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default7]:[rank15]: self._popen = self._Popen(self) [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default7]:[rank15]: return _default_context.get_context().Process._Popen(process_obj) [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default7]:[rank15]: 
return Popen(process_obj) [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default7]:[rank15]: self._launch(process_obj) [default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default7]:[rank15]: self.pid = os.fork() [default7]:[rank15]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank9]: Traceback (most recent call last): [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank9]: trainer.train(dataloader) [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank9]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank9]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank9]: for micro_batch in batch: [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank9]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank9]: for batch in dataloader: [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank9]: return self._get_iterator() [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank9]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank9]: w.start() [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank9]: self._popen = self._Popen(self) [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank9]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank9]: return Popen(process_obj) [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank9]: self._launch(process_obj) [default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank9]: self.pid = os.fork() [default1]:[rank9]: OSError: [Errno 12] Cannot allocate memory [default5]:[rank13]: Traceback (most recent call last): [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 
237, in [default5]:[rank13]: trainer.train(dataloader) [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default5]:[rank13]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default5]:[rank13]: outputs = self.pipeline_engine.train_batch_iter( [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default5]:[rank13]: for micro_batch in batch: [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default5]:[rank13]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default5]:[rank13]: for batch in dataloader: [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank13]: return self._get_iterator() [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank13]: return _MultiProcessingDataLoaderIter(self) [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default5]:[rank13]: w.start() [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank13]: self._popen = self._Popen(self) [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default5]:[rank13]: return _default_context.get_context().Process._Popen(process_obj) [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default5]:[rank13]: return Popen(process_obj) [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default5]:[rank13]: self._launch(process_obj) [default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default5]:[rank13]: self.pid = os.fork() [default5]:[rank13]: OSError: [Errno 12] Cannot allocate memory [default4]:[rank12]: Traceback (most recent call last): [default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default4]:[rank12]: trainer.train(dataloader) [default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default4]:[rank12]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default4]:[rank12]: outputs = self.pipeline_engine.train_batch_iter( [default4]:[rank12]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default4]:[rank12]: for micro_batch in batch: [default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default4]:[rank12]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default4]:[rank12]: for batch in dataloader: [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default4]:[rank12]: return self._get_iterator() [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default4]:[rank12]: return _MultiProcessingDataLoaderIter(self) [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default4]:[rank12]: w.start() [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default4]:[rank12]: self._popen = self._Popen(self) [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default4]:[rank12]: return _default_context.get_context().Process._Popen(process_obj) [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default4]:[rank12]: return Popen(process_obj) [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default4]:[rank12]: self._launch(process_obj) [default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default4]:[rank12]: self.pid = os.fork() [default4]:[rank12]: OSError: [Errno 12] Cannot allocate memory [default2]:[rank10]: Traceback (most recent call last): [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default2]:[rank10]: trainer.train(dataloader) [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter( [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default2]:[rank10]: for micro_batch in batch: [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default2]:[rank10]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default2]:[rank10]: for batch in dataloader: 
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default2]:[rank10]: return self._get_iterator() [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default2]:[rank10]: return _MultiProcessingDataLoaderIter(self) [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default2]:[rank10]: w.start() [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default2]:[rank10]: self._popen = self._Popen(self) [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default2]:[rank10]: return _default_context.get_context().Process._Popen(process_obj) [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default2]:[rank10]: return Popen(process_obj) [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default2]:[rank10]: self._launch(process_obj) [default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default2]:[rank10]: self.pid = os.fork() [default2]:[rank10]: OSError: [Errno 12] Cannot allocate memory [default6]:[rank14]: Traceback (most recent call last): [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default6]:[rank14]: trainer.train(dataloader) [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default6]:[rank14]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default6]:[rank14]: outputs = self.pipeline_engine.train_batch_iter( [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default6]:[rank14]: for micro_batch in batch: [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default6]:[rank14]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default6]:[rank14]: for batch in dataloader: [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default6]:[rank14]: return self._get_iterator() [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default6]:[rank14]: return _MultiProcessingDataLoaderIter(self) [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in 
__init__ [default6]:[rank14]: w.start() [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default6]:[rank14]: self._popen = self._Popen(self) [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default6]:[rank14]: return _default_context.get_context().Process._Popen(process_obj) [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank14]: return Popen(process_obj) [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default6]:[rank14]: self._launch(process_obj) [default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default6]:[rank14]: self.pid = os.fork() [default6]:[rank14]: OSError: [Errno 12] Cannot allocate memory [default5]:[rank37]: Traceback (most recent call last): [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default5]:[rank37]: trainer.train(dataloader) [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default5]:[rank37]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default5]:[rank37]: outputs = self.pipeline_engine.train_batch_iter( [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default5]:[rank37]: for micro_batch in batch: [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default5]:[rank37]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default5]:[rank37]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default5]:[rank37]: for batch in dataloader: [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default5]:[rank37]: return self._get_iterator() [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default5]:[rank37]: return _MultiProcessingDataLoaderIter(self) [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default5]:[rank37]: w.start() [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default5]:[rank37]: self._popen = self._Popen(self) [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default5]:[rank37]: return _default_context.get_context().Process._Popen(process_obj) [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, 
in _Popen [default5]:[rank37]: return Popen(process_obj) [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default5]:[rank37]: self._launch(process_obj) [default5]:[rank37]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default5]:[rank37]: self.pid = os.fork() [default5]:[rank37]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank33]: Traceback (most recent call last): [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank33]: trainer.train(dataloader) [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank33]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank33]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank33]: for micro_batch in batch: [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank33]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank33]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank33]: for batch in dataloader: [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank33]: return self._get_iterator() [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank33]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank33]: w.start() [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank33]: self._popen = self._Popen(self) [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank33]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank33]: return Popen(process_obj) [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank33]: self._launch(process_obj) [default1]:[rank33]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank33]: self.pid = os.fork() [default1]:[rank33]: OSError: [Errno 12] Cannot allocate memory [default6]:[rank62]: Traceback (most recent call last): [default6]:[rank62]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default6]:[rank62]: trainer.train(dataloader) [default6]:[rank62]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default6]:[rank62]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default6]:[rank62]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default6]:[rank62]: outputs = self.pipeline_engine.train_batch_iter( [default6]:[rank62]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default6]:[rank62]: for micro_batch in batch: [default6]:[rank62]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default6]:[rank62]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default6]:[rank62]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default6]:[rank62]: for batch in dataloader: [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default6]:[rank62]: return self._get_iterator() [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default6]:[rank62]: return _MultiProcessingDataLoaderIter(self) [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default6]:[rank62]: w.start() [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default6]:[rank62]: self._popen = self._Popen(self) [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default6]:[rank62]: return _default_context.get_context().Process._Popen(process_obj) [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default6]:[rank62]: return Popen(process_obj) [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default6]:[rank62]: self._launch(process_obj) [default6]:[rank62]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default6]:[rank62]: self.pid = os.fork() [default6]:[rank62]: OSError: [Errno 12] Cannot allocate memory [default1]:[rank41]: Traceback (most recent call last): [default1]:[rank41]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default1]:[rank41]: trainer.train(dataloader) [default1]:[rank41]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default1]:[rank41]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default1]:[rank41]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default1]:[rank41]: outputs = self.pipeline_engine.train_batch_iter( [default1]:[rank41]: File 
"/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default1]:[rank41]: for micro_batch in batch: [default1]:[rank41]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default1]:[rank41]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default1]:[rank41]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default1]:[rank41]: for batch in dataloader: [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 439, in __iter__ [default1]:[rank41]: return self._get_iterator() [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 387, in _get_iterator [default1]:[rank41]: return _MultiProcessingDataLoaderIter(self) [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1040, in __init__ [default1]:[rank41]: w.start() [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/process.py", line 121, in start [default1]:[rank41]: self._popen = self._Popen(self) [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 224, in _Popen [default1]:[rank41]: return _default_context.get_context().Process._Popen(process_obj) [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/context.py", line 281, in _Popen [default1]:[rank41]: return Popen(process_obj) [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__ [default1]:[rank41]: self._launch(process_obj) [default1]:[rank41]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch [default1]:[rank41]: self.pid = os.fork() [default1]:[rank41]: OSError: [Errno 12] Cannot allocate memory [default4]:[rank60]: Traceback (most recent call last): [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in [default4]:[rank60]: trainer.train(dataloader) [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train [default4]:[rank60]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step [default4]:[rank60]: outputs = self.pipeline_engine.train_batch_iter( [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 276, in train_batch_iter [default4]:[rank60]: for micro_batch in batch: [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 465, in [default4]:[rank60]: batch=(next(dataloader) for _ in range(self.n_micro_batches_per_batch)), [default4]:[rank60]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/dataloader.py", line 46, in sanity_check_dataloader [default4]:[rank60]: for batch in dataloader: 
E0702 21:09:52.605000 139855240435520 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 1719528) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 1719529)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 1719530)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 1719531)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 4 (local_rank: 4)
  exitcode  : 1 (pid: 1719532)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 5 (local_rank: 5)
  exitcode  : 1 (pid: 1719533)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 6 (local_rank: 6)
  exitcode  : 1 (pid: 1719534)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[7]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 7 (local_rank: 7)
  exitcode  : 1 (pid: 1719535)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-02_21:09:52
  host      : ip-26-0-160-225.ec2.internal
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 1719528)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: ip-26-0-160-225: task 0: Exited with exit code 1
W0702 21:09:56.574000 139664751998720 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-88.ec2.internal_827710_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:56.662000 140628377696000 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-163-226.ec2.internal_3154823_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:56.671000 139628134602496 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-163-147.ec2.internal_748365_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:56.930000 139857915885312 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-168-238.ec2.internal_1788932_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:56.936000 140395245090560 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-169-86.ec2.internal_1762539_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:57.119000 139755560650496 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-102.ec2.internal_3712213_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:57.454000 140502048990976 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-62.ec2.internal_3840866_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
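The "traceback : To enable traceback see: ..." lines in the failure summary point at torch.distributed.elastic's error-recording mechanism: when the training entrypoint is wrapped with the record decorator from torch.distributed.elastic.multiprocessing.errors, a failing rank writes its exception to the error file torchrun sets up (via TORCHELASTIC_ERROR_FILE) and the traceback is shown in the Failures table instead of the empty error_file fields above. A minimal sketch follows; the main() body is a placeholder, not the actual run_train.py.

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main() -> None:
    # ... build the trainer and dataloader here, then call trainer.train(dataloader) ...
    # Placeholder failure so @record has something to dump into the error file.
    raise RuntimeError("placeholder failure")

if __name__ == "__main__":
    main()

Launched under torchrun, the recorded exception then appears in the per-rank entries of the failure summary rather than the generic "To enable traceback" pointer.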
W0702 21:09:57.480000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154898 closing signal SIGTERM
W0702 21:09:57.480000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154900 closing signal SIGTERM
W0702 21:09:57.481000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154901 closing signal SIGTERM
W0702 21:09:57.481000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154903 closing signal SIGTERM
W0702 21:09:57.481000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154904 closing signal SIGTERM
W0702 21:09:57.481000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3154905 closing signal SIGTERM
W0702 21:09:57.488000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789006 closing signal SIGTERM
W0702 21:09:57.488000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789007 closing signal SIGTERM
W0702 21:09:57.489000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789008 closing signal SIGTERM
W0702 21:09:57.488000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712291 closing signal SIGTERM
W0702 21:09:57.489000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712292 closing signal SIGTERM
W0702 21:09:57.489000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712293 closing signal SIGTERM
W0702 21:09:57.490000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789009 closing signal SIGTERM
W0702 21:09:57.490000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789010 closing signal SIGTERM
W0702 21:09:57.489000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712294 closing signal SIGTERM
W0702 21:09:57.491000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789011 closing signal SIGTERM
W0702 21:09:57.491000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1789012 closing signal SIGTERM
W0702 21:09:57.491000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712295 closing signal SIGTERM
W0702 21:09:57.491000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712296 closing signal SIGTERM
W0702 21:09:57.491000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748438 closing signal SIGTERM
W0702 21:09:57.491000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748439 closing signal SIGTERM
W0702 21:09:57.492000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748440 closing signal SIGTERM
W0702 21:09:57.492000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712297 closing signal SIGTERM
W0702 21:09:57.493000 139761221384000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3712298 closing signal SIGTERM
W0702 21:09:57.492000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748441 closing signal SIGTERM
W0702 21:09:57.492000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748442 closing signal SIGTERM
W0702 21:09:57.493000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840940 closing signal SIGTERM
W0702 21:09:57.493000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840941 closing signal SIGTERM
W0702 21:09:57.494000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748443 closing signal SIGTERM
W0702 21:09:57.494000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827784 closing signal SIGTERM
W0702 21:09:57.494000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827785 closing signal SIGTERM
W0702 21:09:57.494000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840942 closing signal SIGTERM
W0702 21:09:57.494000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827786 closing signal SIGTERM
W0702 21:09:57.495000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827787 closing signal SIGTERM
W0702 21:09:57.494000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840943 closing signal SIGTERM
W0702 21:09:57.495000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748444 closing signal SIGTERM
W0702 21:09:57.495000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840944 closing signal SIGTERM
W0702 21:09:57.495000 139633795336000 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 748445 closing signal SIGTERM
W0702 21:09:57.496000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827788 closing signal SIGTERM
W0702 21:09:57.496000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827789 closing signal SIGTERM
W0702 21:09:57.496000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840945 closing signal SIGTERM
W0702 21:09:57.496000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827790 closing signal SIGTERM
W0702 21:09:57.496000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840946 closing signal SIGTERM
W0702 21:09:57.496000 140507709724480 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 3840947 closing signal SIGTERM
W0702 21:09:57.497000 139670412732224 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 827791 closing signal SIGTERM
W0702 21:09:57.502000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762612 closing signal SIGTERM
W0702 21:09:57.502000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762613 closing signal SIGTERM
W0702 21:09:57.502000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762614 closing signal SIGTERM
W0702 21:09:57.502000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762615 closing signal SIGTERM
W0702 21:09:57.503000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762616 closing signal SIGTERM
W0702 21:09:57.503000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762617 closing signal SIGTERM
W0702 21:09:57.504000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762618 closing signal SIGTERM
W0702 21:09:57.504000 140400905824064 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1762619 closing signal SIGTERM
E0702 21:09:58.209000 140634038429504 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 1 (pid: 3154899) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
W0702 21:09:58.214000 140634038429504 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-163-226.ec2.internal_3154823_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:58.248000 140634038429504 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-163-226.ec2.internal_3154823_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:58.264000 140634038429504 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-163-226.ec2.internal_3154823_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time : 2024-07-02_21:09:57
  host : ip-26-0-163-226.ec2.internal
  rank : 20 (local_rank: 4)
  exitcode : 1 (pid: 3154902)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time : 2024-07-02_21:09:57
  host : ip-26-0-163-226.ec2.internal
  rank : 17 (local_rank: 1)
  exitcode : 1 (pid: 3154899)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
E0702 21:09:58.517000 139863576618816 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 7 (pid: 1789013) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
W0702 21:09:58.523000 139863576618816 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-168-238.ec2.internal_1788932_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
srun: error: ip-26-0-163-226: task 2: Exited with exit code 1
W0702 21:09:58.560000 139863576618816 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-168-238.ec2.internal_1788932_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:58.570000 139863576618816 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-168-238.ec2.internal_1788932_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time : 2024-07-02_21:09:57
  host : ip-26-0-168-238.ec2.internal
  rank : 31 (local_rank: 7)
  exitcode : 1 (pid: 1789013)
  error_file:
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
W0702 21:09:58.619000 139670412732224 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_827710_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:09:58.630000 139670412732224 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_827710_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-168-238: task 3: Exited with exit code 1
srun: error: ip-26-0-171-88: task 6: Exited with exit code 1
W0702 21:10:01.018000 139633795336000 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-163-147.ec2.internal_748365_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:01.028000 139633795336000 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-163-147.ec2.internal_748365_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-163-147: task 1: Exited with exit code 1
W0702 21:10:01.425000 140507709724480 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-62.ec2.internal_3840866_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:01.436000 140507709724480 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-62.ec2.internal_3840866_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-171-62: task 5: Exited with exit code 1
W0702 21:10:01.940000 140395245090560 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-169-86.ec2.internal_1762539_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:02.124000 139755560650496 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-171-102.ec2.internal_3712213_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:02.523000 139761221384000 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-102.ec2.internal_3712213_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:02.534000 139761221384000 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-102.ec2.internal_3712213_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-171-102: task 7: Exited with exit code 1
W0702 21:10:03.031000 140400905824064 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-169-86.ec2.internal_1762539_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
W0702 21:10:03.039000 140400905824064 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-169-86.ec2.internal_1762539_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
torch.distributed.DistNetworkError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 254, in launch_agent
    result = agent.run()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
    result = f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 733, in run
    result = self._invoke_run(role)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 908, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1174, in num_nodes_waiting
    self._state_holder.sync()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 419, in sync
    get_response = self._backend.get_state()
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
srun: error: ip-26-0-169-86: task 4: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
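The repeated Broken pipe / RendezvousConnectionError messages above are the surviving torchrun agents failing to reach the c10d store after the agent hosting it exited. As a rough, hedged sketch (host, port, and key are invented for illustration and do not come from this log), the store round-trip those agents perform looks like the lines below; once the hosting process is gone, the same calls raise connection errors.

    # Hedged sketch, not from this run: torchrun's c10d rendezvous keeps its
    # shared state in a TCPStore hosted by one agent. Other agents read and
    # write it roughly like this; if the hosting process dies, these calls fail.
    from datetime import timedelta
    from torch.distributed import TCPStore

    store = TCPStore("127.0.0.1", 29400, world_size=1, is_master=True,
                     timeout=timedelta(seconds=30))
    store.set("rdzv_state", "opaque-state-blob")  # what the backend's set does
    print(store.get("rdzv_state"))                # what get_state() reads back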