========================
START TIME: Tue Jul 2 16:30:16 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
M src/nanotron/models/llama.py
M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 16:30:18.958000 139728007939904 torch/distributed/run.py:757]
W0702 16:30:18.958000 139728007939904 torch/distributed/run.py:757] *****************************************
W0702 16:30:18.958000 139728007939904 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 16:30:18.958000 139728007939904 torch/distributed/run.py:757] *****************************************
W0702 16:30:18.975000 140288511039296 torch/distributed/run.py:757]
W0702 16:30:18.975000 140288511039296 torch/distributed/run.py:757] *****************************************
W0702 16:30:18.975000 140288511039296 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 16:30:18.975000 140288511039296 torch/distributed/run.py:757] *****************************************
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Config:
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: run='%date_%jobid',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: seed=42,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: step=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: consumed_train_samples=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: benchmark_csv_path=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: ignore_sanity_checks=True),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: parallelism=ParallelismArgs(dp=16,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pp=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f2370f98910>,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tp_linear_async_communication=False,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: expert_parallel_size=1),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: eos_token_id=2,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_act='silu',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_size=2048,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: initializer_range=0.02,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: intermediate_size=4096,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: is_llama_config=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: max_position_embeddings=4096,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_attention_heads=32,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_hidden_layers=24,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_key_value_heads=32,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pad_token_id=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pretraining_tp=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rms_norm_eps=1e-05,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_scaling=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_theta=10000.0,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tie_word_embeddings=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: use_cache=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: vocab_size=50257),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: init_method=RandomInit(std=0.025),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dtype=torch.bfloat16,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: make_vocab_size_divisible_by=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: ddp_bucket_cap_mb=25),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer_revision=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokenizer_max_length=None),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoint_interval=100000,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: save_initial_state=False,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: resume_checkpoint_path=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: log_level_replica='info',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: iteration_step_info_interval=1),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: train_steps=20,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: micro_batch_size=8,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: batch_accumulation_per_replica=8,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: val_check_interval=-1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: limit_val_batches=0,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: limit_test_batches=0),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: adam_beta1=0.9,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: adam_beta2=0.95,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: torch_adam_is_fused=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: name='adamW'),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: zero_stage=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: weight_decay=0.01,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: clip_grad=1.0,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: accumulate_grad_in_fp32=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_warmup_steps=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_warmup_style='linear',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_style='linear',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_steps=19,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lr_decay_starting_step=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: min_decay_lr=1e-05)),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: start_training_step=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hf_dataset_splits='train',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hf_dataset_config_name=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: dataset_overwrite_cache=False,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: text_column_name='text'),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: seed=42,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_loading_workers=32))],
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-16_tp-1_pp-1_mbz-8')),
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: lighteval=None)
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Model Config:
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: eos_token_id=2,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_act='silu',
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: hidden_size=2048,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: initializer_range=0.02,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: intermediate_size=4096,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: is_llama_config=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: max_position_embeddings=4096,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_attention_heads=32,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_hidden_layers=24,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: num_key_value_heads=32,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pad_token_id=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: pretraining_tp=1,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rms_norm_eps=1e-05,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_scaling=None,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: rope_theta=10000.0,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: tie_word_embeddings=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: use_cache=True,
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: vocab_size=50257)
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Building model..
[default0]:07/02/2024 16:30:36 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Setting PP block ranks...
[default4]:07/02/2024 16:30:46 [INFO|DP=12|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default1]:07/02/2024 16:30:46 [INFO|DP=1|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default6]:07/02/2024 16:30:46 [INFO|DP=6|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default4]:07/02/2024 16:30:46 [INFO|DP=4|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default3]:07/02/2024 16:30:46 [INFO|DP=3|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default2]:07/02/2024 16:30:46 [INFO|DP=2|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default5]:07/02/2024 16:30:46 [INFO|DP=5|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default0]:07/02/2024 16:30:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Total number of parameters: 1.11G (2116.51MiB)
[default0]:07/02/2024 16:30:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Local number of parameters: 1.11G (2116.51MiB)
[default0]:07/02/2024 16:30:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [After model building] Memory usage: 2140.53MiB. Peak allocated: 2338.88MiB Peak reserved: 2392.00MiB
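Sanity check: the 1.11G / 2116.51MiB figures follow directly from the LlamaConfig above (a minimal sketch, assuming tied embeddings, no biases, and RMSNorm weights only; not nanotron code):

h, i, L, v = 2048, 4096, 24, 50257           # hidden, intermediate, layers, vocab
per_layer = 4 * h * h + 3 * h * i + 2 * h    # q/k/v/o projections + gated MLP + 2 RMSNorms
total = L * per_layer + v * h + h            # transformer blocks + tied embedding + final norm
print(f"{total / 1e9:.2f}G")                 # -> 1.11G
print(f"{total * 2 / 2**20:.2f}MiB (bf16)")  # -> 2116.51MiB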
[default0]:07/02/2024 16:30:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default0]:07/02/2024 16:30:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Parametrizing model parameters using StandardParametrizator
[default7]:07/02/2024 16:30:45 [INFO|DP=7|PP=0|TP=0|ip-26-0-160-192]: No checkpoint path provided.
[default7]:07/02/2024 16:30:46 [INFO|DP=15|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default0]:07/02/2024 16:30:46 [INFO|DP=8|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default1]:07/02/2024 16:30:46 [INFO|DP=9|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default2]:07/02/2024 16:30:46 [INFO|DP=10|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default5]:07/02/2024 16:30:46 [INFO|DP=13|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default3]:07/02/2024 16:30:46 [INFO|DP=11|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default6]:07/02/2024 16:30:46 [INFO|DP=14|PP=0|TP=0|ip-26-0-162-233]: No checkpoint path provided.
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 0 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 1 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 2 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 3 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 4 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 5 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 6 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 7 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 8 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 9 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 10 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 11 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 12 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 13 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 14 has 69.4M out of 1.11G (6.25%) params' optimizer states
[default0]:07/02/2024 16:30:54 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [ZeRO sharding] DP Rank 15 has 69.4M out of 1.11G (6.25%) params' optimizer states
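The 69.4M / 6.25% per-rank figures are what ZeRO stage 1 implies: optimizer states are sharded evenly across the dp=16 replicas (illustrative arithmetic, not nanotron internals):

total_params = 1_109_659_648  # 1.11G, from the model-building log above
dp = 16
print(f"{total_params / dp / 1e6:.1f}M ({100 / dp:.2f}%)")  # -> 69.4M (6.25%)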
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Using `datasets` library
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/02/2024 16:30:56 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Training Plan] There are 1 training stages
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Stage Training Stage] start from step 1
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]:
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: [Start training] datetime: 2024-07-02 16:30:56.969534 | mbs: 8 | grad_accum: 8 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 16:30:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-192]: Memory usage: 6639.09MiB. Peak allocated 6639.09MiB. Peak reserved: 6892.00MiB
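The global_batch_size printed in the [Start training] line is just the product of the parallelism and accumulation settings (quick check):

dp, mbs, grad_accum, seq_len = 16, 8, 8, 4096
gbs = dp * mbs * grad_accum
print(gbs)            # -> 1024 samples per optimizer step
print(gbs * seq_len)  # -> 4194304 tokens per step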
[default1]:07/02/2024 16:30:57 [WARNING|DP=9|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 16:30:57 [WARNING|DP=15|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 16:30:57 [WARNING|DP=8|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 16:30:57 [WARNING|DP=11|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 16:30:57 [WARNING|DP=1|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 16:30:57 [WARNING|DP=5|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 16:30:57 [WARNING|DP=10|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 16:30:57 [WARNING|DP=14|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 16:30:57 [WARNING|DP=6|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 16:30:57 [WARNING|DP=4|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 16:30:57 [WARNING|DP=2|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 16:30:57 [WARNING|DP=3|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 16:30:57 [WARNING|DP=13|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 16:30:57 [WARNING|DP=12|PP=0|TP=0|ip-26-0-162-233]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 16:30:57 [WARNING|DP=7|PP=0|TP=0|ip-26-0-160-192]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:[rank0]: Traceback (most recent call last):
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank0]: trainer.train(dataloader)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank0]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank0]: outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank0]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank0]: output = model(**micro_batch)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank0]: sharded_logits = self.model(
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank0]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default0]:[rank0]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank0]: output = self.pp_block(**new_kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default0]:[rank0]: module_builder=lambda: lambda x: x.float(),
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU
[default1]:[rank1]: Traceback (most recent call last):
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default1]:[rank1]: trainer.train(dataloader)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter(
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default1]:[rank1]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default1]:[rank1]: output = model(**micro_batch)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default1]:[rank1]: sharded_logits = self.model(
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default1]:[rank1]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default1]:[rank1]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default1]:[rank1]: output = self.pp_block(**new_kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default1]:[rank1]: module_builder=lambda: lambda x: x.float(),
[default1]:[rank1]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 5.28 GiB is free. Including non-PyTorch memory, this process has 74.04 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
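The failing allocation is the fp32 cast of the full logits tensor (the lambda x: x.float() frame above). A rough size estimate, assuming a [micro_batch, seq_len, vocab] tensor with no padding, lands on the reported ~6.14 GiB:

mbs, seq_len, vocab = 8, 4096, 50257
print(f"{mbs * seq_len * vocab * 4 / 2**30:.2f} GiB")  # -> 6.13 GiB; matches the 6.14 GiB request above up to allocator rounding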
[default1]:[rank9]: OSError: [Errno 122] Disk quota exceeded
[default1]:
[default1]:[rank9]: During handling of the above exception, another exception occurred:
[default1]:
[default1]:[rank9]: Traceback (most recent call last):
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default1]:[rank9]: trainer.train(dataloader)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default1]:[rank9]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default1]:[rank9]: outputs = self.pipeline_engine.train_batch_iter(
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default1]:[rank9]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default1]:[rank9]: output = model(**micro_batch)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default1]:[rank9]: sharded_logits = self.model(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default1]:[rank9]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default1]:[rank9]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default1]:[rank9]: output = self.pp_block(**new_kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 629, in forward
[default1]:[rank9]: hidden_states = self.input_layernorm(hidden_states)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/nn/layer_norm.py", line 42, in forward
[default1]:[rank9]: return layer_norm_fn(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 875, in layer_norm_fn
[default1]:[rank9]: return LayerNormFn.apply(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
[default1]:[rank9]: return super().apply(*args, **kwargs) # type: ignore[misc]
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 748, in forward
[default1]:[rank9]: y, y1, mean, rstd, residual_out, seeds, dropout_mask, dropout_mask1 = _layer_norm_fwd(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 335, in _layer_norm_fwd
[default1]:[rank9]: _layer_norm_fwd_1pass_kernel[(M,)](
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda>
[default1]:[rank9]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in run
[default1]:[rank9]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in <dictcomp>
[default1]:[rank9]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 122, in _bench
[default1]:[rank9]: return do_bench(kernel_call, warmup=self.warmup, rep=self.rep, quantiles=(0.5, 0.2, 0.8))
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/testing.py", line 102, in do_bench
[default1]:[rank9]: fn()
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 110, in kernel_call
[default1]:[rank9]: self.fn.run(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default1]:[rank9]: return self.fn.run(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default1]:[rank9]: return self.fn.run(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default1]:[rank9]: return self.fn.run(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run
[default1]:[rank9]: self.cache[device][key] = compile(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile
[default1]:[rank9]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}")
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put
[default1]:[rank9]: with open(temp_path, mode) as f:
[default1]:[rank9]: OSError: [Errno 122] Disk quota exceeded
[default5]:[rank13]: OSError: [Errno 122] Disk quota exceeded
[default5]:
[default5]:[rank13]: During handling of the above exception, another exception occurred:
[default5]:
[default5]:[rank13]: Traceback (most recent call last):
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default5]:[rank13]: trainer.train(dataloader)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default5]:[rank13]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default5]:[rank13]: outputs = self.pipeline_engine.train_batch_iter(
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default5]:[rank13]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default5]:[rank13]: output = model(**micro_batch)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank13]: return self._call_impl(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank13]: return forward_call(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default5]:[rank13]: sharded_logits = self.model(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank13]: return self._call_impl(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank13]: return forward_call(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default5]:[rank13]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default5]:[rank13]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank13]: return self._call_impl(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank13]: return forward_call(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default5]:[rank13]: output = self.pp_block(**new_kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank13]: return self._call_impl(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank13]: return forward_call(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 629, in forward
[default5]:[rank13]: hidden_states = self.input_layernorm(hidden_states)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank13]: return self._call_impl(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank13]: return forward_call(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/nn/layer_norm.py", line 42, in forward
[default5]:[rank13]: return layer_norm_fn(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 875, in layer_norm_fn
[default5]:[rank13]: return LayerNormFn.apply(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
[default5]:[rank13]: return super().apply(*args, **kwargs) # type: ignore[misc]
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 748, in forward
[default5]:[rank13]: y, y1, mean, rstd, residual_out, seeds, dropout_mask, dropout_mask1 = _layer_norm_fwd(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/layer_norm.py", line 335, in _layer_norm_fwd
[default5]:[rank13]: _layer_norm_fwd_1pass_kernel[(M,)](
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda>
[default5]:[rank13]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in run
[default5]:[rank13]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 143, in <dictcomp>
[default5]:[rank13]: timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 122, in _bench
[default5]:[rank13]: return do_bench(kernel_call, warmup=self.warmup, rep=self.rep, quantiles=(0.5, 0.2, 0.8))
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/testing.py", line 102, in do_bench
[default5]:[rank13]: fn()
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 110, in kernel_call
[default5]:[rank13]: self.fn.run(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default5]:[rank13]: return self.fn.run(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default5]:[rank13]: return self.fn.run(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 305, in run
[default5]:[rank13]: return self.fn.run(*args, **kwargs)
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run
[default5]:[rank13]: self.cache[device][key] = compile(
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile
[default5]:[rank13]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}")
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put
[default5]:[rank13]: with open(temp_path, mode) as f:
[default5]:[rank13]: OSError: [Errno 122] Disk quota exceeded
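The Errno 122 tracebacks originate in triton/runtime/cache.py, where Triton persists compiled kernels; the cache directory's filesystem hit its quota. Assuming the installed Triton honors TRITON_CACHE_DIR (recent releases do), one possible mitigation is to redirect the cache before any kernel compiles (the path below is hypothetical):

import os
os.environ["TRITON_CACHE_DIR"] = "/scratch/triton_cache"  # hypothetical path with quota headroom; set before triton-backed ops run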
[default2]:[rank10]: Traceback (most recent call last):
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default2]:[rank10]: trainer.train(dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter(
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default2]:[rank10]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default2]:[rank10]: output = model(**micro_batch)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default2]:[rank10]: sharded_logits = self.model(
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default2]:[rank10]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default2]:[rank10]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default2]:[rank10]: output = self.pp_block(**new_kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default2]:[rank10]: module_builder=lambda: lambda x: x.float(),
[default2]:[rank10]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 5.12 GiB is free. Including non-PyTorch memory, this process has 74.20 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default7]:[rank15]: Traceback (most recent call last):
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default7]:[rank15]: trainer.train(dataloader)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default7]:[rank15]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default7]:[rank15]: outputs = self.pipeline_engine.train_batch_iter(
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default7]:[rank15]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default7]:[rank15]: output = model(**micro_batch)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default7]:[rank15]: sharded_logits = self.model(
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default7]:[rank15]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default7]:[rank15]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank15]: return self._call_impl(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank15]: return forward_call(*args, **kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default7]:[rank15]: output = self.pp_block(**new_kwargs)
[default7]:[rank15]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default7]:[rank15]: module_builder=lambda: lambda x: x.float(),
[default7]:[rank15]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 4.96 GiB is free. Including non-PyTorch memory, this process has 74.36 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default3]:[rank11]: Traceback (most recent call last):
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default3]:[rank11]: trainer.train(dataloader)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default3]:[rank11]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default3]:[rank11]: outputs = self.pipeline_engine.train_batch_iter(
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default3]:[rank11]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default3]:[rank11]: output = model(**micro_batch)
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank11]: return self._call_impl(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank11]: return forward_call(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default3]:[rank11]: sharded_logits = self.model(
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank11]: return self._call_impl(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank11]: return forward_call(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default3]:[rank11]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default3]:[rank11]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank11]: return self._call_impl(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank11]: return forward_call(*args, **kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default3]:[rank11]: output = self.pp_block(**new_kwargs)
[default3]:[rank11]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default3]:[rank11]: module_builder=lambda: lambda x: x.float(),
[default3]:[rank11]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 4.88 GiB is free. Including non-PyTorch memory, this process has 74.43 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default0]:[rank8]: OSError: [Errno 122] Disk quota exceeded
[default0]:
[default0]:[rank8]: During handling of the above exception, another exception occurred:
[default0]:
[default0]:[rank8]: Traceback (most recent call last):
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank8]: trainer.train(dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank8]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank8]: outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank8]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank8]: output = model(**micro_batch)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank8]: sharded_logits = self.model(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank8]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank8]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank8]: output = self.pp_block(**new_kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 631, in forward
[default0]:[rank8]: output = self.attn(hidden_states=hidden_states, sequence_mask=sequence_mask)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 566, in forward
[default0]:[rank8]: query_states, key_value_states = self.flash_rotary_embedding(query_states, kv=key_value_states)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 457, in forward
[default0]:[rank8]: q = apply_rotary_emb_func(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 122, in apply_rotary_emb
[default0]:[rank8]: return ApplyRotaryEmb.apply(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
[default0]:[rank8]: return super().apply(*args, **kwargs) # type: ignore[misc]
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 48, in forward
[default0]:[rank8]: out = apply_rotary(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary
[default0]:[rank8]: rotary_kernel[grid](
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda>
[default0]:[rank8]: return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run
[default0]:[rank8]: self.cache[device][key] = compile(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/compiler/compiler.py", line 194, in compile
[default0]:[rank8]: metadata_group[f"{src.name}.{ext}"] = fn_cache_manager.put(next_module, f"{src.name}.{ext}")
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/triton/runtime/cache.py", line 123, in put
[default0]:[rank8]: with open(temp_path, mode) as f:
[default0]:[rank8]: OSError: [Errno 122] Disk quota exceeded
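Note on the [Errno 122] above: it is raised while Triton writes a freshly compiled rotary kernel into its on-disk cache (triton/runtime/cache.py), so rank 8 dies on a filesystem quota rather than a GPU error. A sketch of one workaround, assuming /scratch/triton-cache is a writable path on a filesystem with free space (the path is an assumption, not from the log):

    import os
    # TRITON_CACHE_DIR redirects Triton's kernel cache; the directory below is hypothetical.
    os.environ["TRITON_CACHE_DIR"] = "/scratch/triton-cache"

Alternatively, freeing space under the quota-limited home/project directory avoids the error without any config change.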
[default6]:[rank14]: Traceback (most recent call last):
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default6]:[rank14]: trainer.train(dataloader)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default6]:[rank14]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default6]:[rank14]: outputs = self.pipeline_engine.train_batch_iter(
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default6]:[rank14]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default6]:[rank14]: output = model(**micro_batch)
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank14]: return self._call_impl(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank14]: return forward_call(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default6]:[rank14]: sharded_logits = self.model(
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank14]: return self._call_impl(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank14]: return forward_call(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default6]:[rank14]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default6]:[rank14]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank14]: return self._call_impl(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank14]: return forward_call(*args, **kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default6]:[rank14]: output = self.pp_block(**new_kwargs)
[default6]:[rank14]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default6]:[rank14]: module_builder=lambda: lambda x: x.float(),
[default6]:[rank14]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 4.88 GiB is free. Including non-PyTorch memory, this process has 74.43 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default4]:[rank12]: Traceback (most recent call last):
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default4]:[rank12]: trainer.train(dataloader)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default4]:[rank12]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default4]:[rank12]: outputs = self.pipeline_engine.train_batch_iter(
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default4]:[rank12]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default4]:[rank12]: output = model(**micro_batch)
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank12]: return self._call_impl(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank12]: return forward_call(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default4]:[rank12]: sharded_logits = self.model(
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank12]: return self._call_impl(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank12]: return forward_call(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default4]:[rank12]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 786, in forward_with_hidden_states
[default4]:[rank12]: fp32_sharded_logits = self.cast_to_fp32(x=sharded_logits)["output"]
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank12]: return self._call_impl(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank12]: return forward_call(*args, **kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default4]:[rank12]: output = self.pp_block(**new_kwargs)
[default4]:[rank12]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 753, in <lambda>
[default4]:[rank12]: module_builder=lambda: lambda x: x.float(),
[default4]:[rank12]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.14 GiB. GPU  has a total capacity of 79.33 GiB of which 4.88 GiB is free. Including non-PyTorch memory, this process has 74.43 GiB memory in use. Of the allocated memory 66.95 GiB is allocated by PyTorch, and 170.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
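Note on the failed 6.14 GiB allocation reported by ranks 11, 12, 14 and 15: it is the fp32 upcast of the sharded logits (the lambda x: x.float() at llama.py line 753). A back-of-the-envelope check, assuming micro-batch size 8, sequence length 4096 and a vocabulary of 50304 — all three values are assumptions, not read from the log:

    # Hypothetical shapes: mbs=8, seq=4096, vocab=50304 (assumed).
    mbs, seq, vocab = 8, 4096, 50304
    fp32_bytes = mbs * seq * vocab * 4   # 4 bytes per fp32 element
    print(fp32_bytes / 2**30)            # ~6.14 GiB, matching the error message

Under those assumptions the arithmetic matches the failed allocation exactly, so reducing the micro-batch size or keeping the loss computation in bf16 would shrink precisely this tensor.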
W0702 16:31:05.158000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867663 closing signal SIGTERM
W0702 16:31:05.164000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867664 closing signal SIGTERM
W0702 16:31:05.161000 140288511039296 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1215877 closing signal SIGTERM
W0702 16:31:05.161000 140288511039296 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1215878 closing signal SIGTERM
W0702 16:31:05.161000 140288511039296 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1215880 closing signal SIGTERM
W0702 16:31:05.162000 140288511039296 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1215882 closing signal SIGTERM
W0702 16:31:05.169000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867665 closing signal SIGTERM
W0702 16:31:05.172000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867666 closing signal SIGTERM
W0702 16:31:05.177000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867667 closing signal SIGTERM
W0702 16:31:05.185000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 867668 closing signal SIGTERM
E0702 16:31:06.175000 140288511039296 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 2 (pid: 1215879) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-02_16:31:05
host : ip-26-0-162-233.ec2.internal
rank : 12 (local_rank: 4)
exitcode : 1 (pid: 1215881)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2024-07-02_16:31:05
host : ip-26-0-162-233.ec2.internal
rank : 14 (local_rank: 6)
exitcode : 1 (pid: 1215883)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2024-07-02_16:31:05
host : ip-26-0-162-233.ec2.internal
rank : 15 (local_rank: 7)
exitcode : 1 (pid: 1215884)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-02_16:31:05
host : ip-26-0-162-233.ec2.internal
rank : 10 (local_rank: 2)
exitcode : 1 (pid: 1215879)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: ip-26-0-162-233: task 1: Exited with exit code 1
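Note on the error_file: <N/A> entries above: torchrun can only write a child's traceback into the error file when the entrypoint is wrapped with the record decorator, which is what the linked elastic errors page describes. A minimal sketch:

    from torch.distributed.elastic.multiprocessing.errors import record

    @record  # writes this process's traceback to the file torchrun reports as error_file
    def main():
        ...  # build the trainer and call trainer.train(dataloader)

    if __name__ == "__main__":
        main()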
E0702 16:31:07.108000 139728007939904 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 867661) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-07-02_16:31:05
host : ip-26-0-160-192.ec2.internal
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 867662)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-07-02_16:31:05
host : ip-26-0-160-192.ec2.internal
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 867661)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: ip-26-0-160-192: task 0: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
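Note on the `hf_transfer` hint above: enabling it is two steps, installing the package and setting a flag before huggingface_hub starts the upload. A sketch:

    import os
    # Requires `pip install hf_transfer`; must be set before huggingface_hub is used.
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

As the warning notes, the accelerated path comes with limitations described at the linked page, which is why it is opt-in.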