|
======================== |
|
START TIME: Tue Jul 2 20:14:49 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well. |
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0702 20:14:51.710000 140433821124416 torch/distributed/run.py:757] |
|
W0702 20:14:51.710000 140433821124416 torch/distributed/run.py:757] ***************************************** |
|
W0702 20:14:51.710000 140433821124416 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
|
W0702 20:14:51.710000 140433821124416 torch/distributed/run.py:757] ***************************************** |
|
W0702 20:14:51.716000 139673748735808 torch/distributed/run.py:757] |
|
W0702 20:14:51.716000 139673748735808 torch/distributed/run.py:757] ***************************************** |
|
W0702 20:14:51.716000 139673748735808 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
|
W0702 20:14:51.716000 139673748735808 torch/distributed/run.py:757] ***************************************** |
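
For context: torchrun prints this banner once per node launcher (hence the two process IDs above), and it defaults OMP_NUM_THREADS to 1 per worker so the 16 ranks do not oversubscribe the host CPUs. A minimal sketch of tuning the intra-op thread count from inside the training script instead; the value 4 is purely illustrative, not taken from this run:

import torch

# Roughly equivalent in effect to exporting OMP_NUM_THREADS before launch;
# 4 threads per rank is an arbitrary example value, not a recommendation.
torch.set_num_threads(4)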
|
[default0]:07/02/2024 20:15:10 [WARNING|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Vocab Size Padding] Padded vocab (size: 50257) with 15 dummy tokens (new size: 50272) |
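
The 15 dummy tokens pad the GPT-2 vocabulary (50257) up to 50272, the next multiple of the tensor-parallel degree used below (tp=16; with make_vocab_size_divisible_by=1 the effective divisor is tp itself), so every TP rank holds an equal slice of the embedding matrix. A sketch of that arithmetic, assuming this rounding rule:

# Vocab padding sketch; tp=16 comes from the config dump that follows.
vocab_size, tp = 50257, 16
padded = -(-vocab_size // tp) * tp        # ceil-divide, then scale back up
assert padded == 50272 and padded - vocab_size == 15
assert padded // tp == 3142               # embedding rows per TP rank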
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Config: |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: run='%date_%jobid', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: seed=42, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: step=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: consumed_train_samples=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: benchmark_csv_path=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: ignore_sanity_checks=True), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: parallelism=ParallelismArgs(dp=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pp=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tp=16, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7fe49980c910>, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tp_linear_async_communication=False, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: expert_parallel_size=1), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: eos_token_id=2, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hidden_act='silu', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hidden_size=2048, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: initializer_range=0.02, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: intermediate_size=4096, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: is_llama_config=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_attention_heads=32, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_hidden_layers=24, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_key_value_heads=32, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pad_token_id=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pretraining_tp=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rope_scaling=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rope_theta=10000.0, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: use_cache=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: vocab_size=50272), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: init_method=RandomInit(std=0.025), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: dtype=torch.bfloat16, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: make_vocab_size_divisible_by=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: ddp_bucket_cap_mb=25), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tokenizer_revision=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tokenizer_max_length=None), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: checkpoint_interval=100000, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: save_initial_state=False, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: resume_checkpoint_path=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: log_level_replica='info', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: iteration_step_info_interval=1), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: train_steps=20, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: micro_batch_size=64, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: batch_accumulation_per_replica=16, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: val_check_interval=-1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: limit_val_batches=0, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: limit_test_batches=0), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: adam_beta1=0.9, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: adam_beta2=0.95, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: torch_adam_is_fused=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: name='adamW'), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: zero_stage=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: weight_decay=0.01, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: clip_grad=1.0, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: accumulate_grad_in_fp32=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lr_warmup_steps=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lr_warmup_style='linear', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lr_decay_style='linear', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lr_decay_steps=19, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lr_decay_starting_step=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: min_decay_lr=1e-05)), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: start_training_step=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hf_dataset_splits='train', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hf_dataset_config_name=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: dataset_overwrite_cache=False, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: text_column_name='text'), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: seed=42, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_loading_workers=32))], |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-16_pp-1_mbz-64')), |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: lighteval=None) |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Model Config: |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: eos_token_id=2, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hidden_act='silu', |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: hidden_size=2048, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: initializer_range=0.02, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: intermediate_size=4096, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: is_llama_config=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: max_position_embeddings=4096, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_attention_heads=32, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_hidden_layers=24, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: num_key_value_heads=32, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pad_token_id=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: pretraining_tp=1, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rms_norm_eps=1e-05, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rope_scaling=None, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: rope_theta=10000.0, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: tie_word_embeddings=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: use_cache=True, |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: vocab_size=50272) |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Building model.. |
|
[default0]:07/02/2024 20:15:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Setting PP block ranks... |
|
[default1]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=1|ip-26-0-173-202]: Local number of parameters: 69.4M (132.46MiB) |
|
[default1]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=1|ip-26-0-173-202]: [After model building] Memory usage: 159.71MiB. Peak allocated: 174.02MiB Peak reserved: 178.00MiB |
|
[default1]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=1|ip-26-0-173-202]: No checkpoint path provided. |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Total number of parameters: 1.11G (2119.44MiB) |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Local number of parameters: 69.4M (132.46MiB) |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [After model building] Memory usage: 159.71MiB. Peak allocated: 174.02MiB Peak reserved: 178.00MiB |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: No checkpoint path provided. |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Parametrizing model parameters using StandardParametrizator |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/02/2024 20:15:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [ZeRO sharding] DP Rank 0 has 69.4M out of 69.4M (100.00%) params' optimizer states |
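
As a cross-check on the two figures above: the 1.11G total follows from the LlamaConfig in the dump (embeddings tied with the lm head), and "100.00%" on DP rank 0 is expected since dp=1 leaves ZeRO stage 1 nothing to shard across. A rough sketch, omitting the RMSNorm weights (which are replicated on every rank and account for the gap to the logged 69.4M):

# Back-of-envelope parameter count from the config dump above.
h, inter, layers, vocab, tp = 2048, 4096, 24, 50272, 16
attn = 4 * h * h                            # q, k, v, o (32 heads = 32 kv heads, no GQA saving)
mlp = 3 * h * inter                         # merged gate+up plus down_proj
total = layers * (attn + mlp) + vocab * h   # tied embedding counted once
print(f"{total / 1e9:.2f}G total, {total / tp / 1e6:.1f}M per TP rank")
# -> 1.11G and 69.3M; at 2 bytes each (bf16) that is ~2116MiB, and adding
# the replicated norm weights back recovers the logged 2119.44MiB.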
|
[default0]:07/02/2024 20:15:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/02/2024 20:15:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Using `datasets` library |
|
[default0]:07/02/2024 20:15:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
|
[default0]:07/02/2024 20:15:29 [WARNING|DP=0|PP=0|TP=0|ip-26-0-173-202]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Training Plan] There are 1 training stages |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Stage Training Stage] start from step 1 |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: [Start training] datetime: 2024-07-02 20:15:30.205318 | mbs: 64 | grad_accum: 16 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps |
|
[default0]:07/02/2024 20:15:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-173-202]: Memory usage: 689.57MiB. Peak allocated 689.57MiB. Peak reserved: 710.00MiB |
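
The [Start training] figures compose multiplicatively (global_batch_size = mbs x grad_accum x dp), and the 689.57MiB baseline is roughly what the bf16 weight shard plus the fp32 copies implied by accumulate_grad_in_fp32 and mixed-precision AdamW should occupy before the first step; Adam's m/v buffers only appear once stepping begins. A hedged sketch of both checks:

# Throughput arithmetic from the [Start training] line above.
mbs, grad_accum, dp, seq = 64, 16, 1, 4096
global_batch_size = mbs * grad_accum * dp    # 1024 samples per optimizer step
tokens_per_step = global_batch_size * seq    # 4,194,304 tokens per step

# Assumed residency at this point: bf16 params + fp32 grad-accumulation
# buffer + fp32 master weights. This is an estimate, not from the log.
params = 69.4e6
print((params * 2 + params * 4 + params * 4) / 2**20)   # ~662MiB vs. logged 689.57MiB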
|
[default5]:[rank13]: Traceback (most recent call last): |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default5]:[rank13]: trainer.train(dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default5]:[rank13]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default5]:[rank13]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default5]:[rank13]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default5]:[rank13]: output = model(**micro_batch) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default5]:[rank13]: sharded_logits = self.model( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default5]:[rank13]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default5]:[rank13]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default5]:[rank13]: output = self.pp_block(**new_kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default5]:[rank13]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward |
|
[default5]:[rank13]: hidden_states = self.down_proj(self.split_silu_mul(merged_states)) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default5]:[rank13]: return self._call_impl(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default5]:[rank13]: return forward_call(*args, **kwargs) |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward |
|
[default5]:[rank13]: return row_linear( |
|
[default5]:[rank13]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear |
|
[default5]:[rank13]: out = F.linear(input, weight, bias) |
|
[default5]:[rank13]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
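
The failed 1024.00MiB allocation is exactly the bf16 output of the MLP down_proj (the F.linear inside row_linear above) for one micro-batch: 64 x 4096 x 2048 x 2 bytes. With 71.33GiB already allocated out of 79.33GiB, mbz=64 at sequence length 4096 leaves no contiguous 1GiB block on this dp-1/tp-16/pp-1 layout. A sketch of the arithmetic, plus the allocator setting the error message itself suggests; it has to be in the environment before CUDA is first initialized:

# The exact tensor that failed to allocate: down_proj output in bf16.
mbs, seq, hidden = 64, 4096, 2048
print(mbs * seq * hidden * 2 / 2**20)    # 1024.0 MiB, matching the error above

# Fragmentation mitigation quoted in the error text; set before any CUDA use.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"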
|
[default6]:[rank14]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default3]:[rank11]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default7]:[rank15]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 685.94 MiB is free. Including non-PyTorch memory, this process has 78.65 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default4]:[rank12]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
[default3]:[rank3]: Traceback (most recent call last):
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default3]:[rank3]: trainer.train(dataloader)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default3]:[rank3]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default3]:[rank3]: outputs = self.pipeline_engine.train_batch_iter(
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default3]:[rank3]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default3]:[rank3]: output = model(**micro_batch)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default3]:[rank3]: sharded_logits = self.model(
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default3]:[rank3]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default3]:[rank3]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default3]:[rank3]: output = self.pp_block(**new_kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default3]:[rank3]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default3]:[rank3]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default3]:[rank3]: return self._call_impl(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default3]:[rank3]: return forward_call(*args, **kwargs)
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default3]:[rank3]: return row_linear(
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default3]:[rank3]: out = F.linear(input, weight, bias)
[default3]:[rank3]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
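[Editor's note] For readers tracing the failing frame at llama.py line 172: merged_states is the output of a fused gate/up projection, and split_silu_mul is the SwiGLU gating applied before down_proj. A minimal sketch of what that frame computes (an assumption about the tensor layout, not nanotron's exact code):

import torch
import torch.nn.functional as F

def split_silu_mul(merged_states: torch.Tensor) -> torch.Tensor:
    # The fused projection concatenates [gate; up] on the last dim;
    # split in half and apply the SwiGLU gating silu(gate) * up.
    gate, up = merged_states.chunk(2, dim=-1)
    return F.silu(gate) * up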
[default4]:[rank4]: Traceback (most recent call last):
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default4]:[rank4]: trainer.train(dataloader)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default4]:[rank4]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default4]:[rank4]: outputs = self.pipeline_engine.train_batch_iter(
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default4]:[rank4]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default4]:[rank4]: output = model(**micro_batch)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default4]:[rank4]: sharded_logits = self.model(
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default4]:[rank4]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default4]:[rank4]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default4]:[rank4]: output = self.pp_block(**new_kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default4]:[rank4]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default4]:[rank4]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default4]:[rank4]: return self._call_impl(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default4]:[rank4]: return forward_call(*args, **kwargs)
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default4]:[rank4]: return row_linear(
[default4]:[rank4]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default4]:[rank4]: out = F.linear(input, weight, bias)
[default4]:[rank4]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default5]:[rank5]: Traceback (most recent call last):
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default5]:[rank5]: trainer.train(dataloader)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default5]:[rank5]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default5]:[rank5]: outputs = self.pipeline_engine.train_batch_iter(
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default5]:[rank5]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default5]:[rank5]: output = model(**micro_batch)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default5]:[rank5]: sharded_logits = self.model(
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default5]:[rank5]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default5]:[rank5]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default5]:[rank5]: output = self.pp_block(**new_kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default5]:[rank5]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default5]:[rank5]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default5]:[rank5]: return self._call_impl(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default5]:[rank5]: return forward_call(*args, **kwargs)
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default5]:[rank5]: return row_linear(
[default5]:[rank5]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default5]:[rank5]: out = F.linear(input, weight, bias)
[default5]:[rank5]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default2]:[rank10]: Traceback (most recent call last):
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default2]:[rank10]: trainer.train(dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default2]:[rank10]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default2]:[rank10]: outputs = self.pipeline_engine.train_batch_iter(
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default2]:[rank10]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default2]:[rank10]: output = model(**micro_batch)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default2]:[rank10]: sharded_logits = self.model(
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default2]:[rank10]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default2]:[rank10]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default2]:[rank10]: output = self.pp_block(**new_kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default2]:[rank10]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default2]:[rank10]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank10]: return self._call_impl(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank10]: return forward_call(*args, **kwargs)
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default2]:[rank10]: return row_linear(
[default2]:[rank10]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default2]:[rank10]: out = F.linear(input, weight, bias)
[default2]:[rank10]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 851.94 MiB is free. Including non-PyTorch memory, this process has 78.48 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default0]:[rank8]: Traceback (most recent call last):
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank8]: trainer.train(dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank8]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank8]: outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank8]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank8]: output = model(**micro_batch)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank8]: sharded_logits = self.model(
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank8]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank8]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank8]: output = self.pp_block(**new_kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default0]:[rank8]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default0]:[rank8]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank8]: return self._call_impl(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank8]: return forward_call(*args, **kwargs)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default0]:[rank8]: return row_linear(
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 479, in row_linear
[default0]:[rank8]: out = differentiable_reduce_scatter_sum(out, group=group)
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/distributed_differentiable_primitives.py", line 145, in differentiable_reduce_scatter_sum
[default0]:[rank8]: return DifferentiableReduceScatterSum.apply(tensor, group)
[default0]:[rank8]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
[default0]:[rank8]: return super().apply(*args, **kwargs) # type: ignore[misc]
[default0]:[rank8]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/distributed_differentiable_primitives.py", line 111, in forward
[default0]:[rank8]: sharded_tensor = torch.empty(
[default0]:[rank8]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU
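[Editor's note] Rank 8's traceback diverges from the others: the F.linear partial product at functional.py line 474 succeeds, and the OOM happens one step later inside differentiable_reduce_scatter_sum, when torch.empty tries to allocate the 64.00 MiB sharded output buffer. This is the reduce-scatter flavor of a row-parallel linear: each tensor-parallel rank multiplies its input shard by its weight shard, then the partial results are summed across the group while each rank keeps only one slice. A minimal sketch of that shape (illustrative names and layout assumptions, not nanotron's actual implementation):

import torch
import torch.distributed as dist
import torch.nn.functional as F

def row_linear_reduce_scatter(x_shard, w_shard, bias, group):
    # Partial product over this rank's slice of the contraction dimension;
    # values are incomplete until summed across the tensor-parallel group.
    partial = F.linear(x_shard, w_shard)
    world_size = dist.get_world_size(group)
    # Fresh buffer holding 1/world_size of the partial along dim 0; this is
    # the kind of allocation that fails in the torch.empty(...) frame above.
    out = torch.empty(
        (partial.shape[0] // world_size, *partial.shape[1:]),
        dtype=partial.dtype,
        device=partial.device,
    )
    # Sum the partials across ranks and scatter the result, one shard each.
    dist.reduce_scatter_tensor(out, partial, op=dist.ReduceOp.SUM, group=group)
    if bias is not None:
        out = out + bias  # add once, after the reduction, so it is not summed N times
    return out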
[default1]:[rank9]: Traceback (most recent call last):
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default1]:[rank9]: trainer.train(dataloader)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default1]:[rank9]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default1]:[rank9]: outputs = self.pipeline_engine.train_batch_iter(
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default1]:[rank9]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default1]:[rank9]: output = model(**micro_batch)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default1]:[rank9]: sharded_logits = self.model(
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default1]:[rank9]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default1]:[rank9]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default1]:[rank9]: output = self.pp_block(**new_kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default1]:[rank9]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default1]:[rank9]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank9]: return self._call_impl(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank9]: return forward_call(*args, **kwargs)
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default1]:[rank9]: return row_linear(
[default1]:[rank9]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default1]:[rank9]: out = F.linear(input, weight, bias)
[default1]:[rank9]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 1017.94 MiB is free. Including non-PyTorch memory, this process has 78.32 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default6]:[rank6]: Traceback (most recent call last):
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default6]:[rank6]: trainer.train(dataloader)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default6]:[rank6]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default6]:[rank6]: outputs = self.pipeline_engine.train_batch_iter(
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default6]:[rank6]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default6]:[rank6]: output = model(**micro_batch)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default6]:[rank6]: sharded_logits = self.model(
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default6]:[rank6]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default6]:[rank6]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default6]:[rank6]: output = self.pp_block(**new_kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default6]:[rank6]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default6]:[rank6]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default6]:[rank6]: return self._call_impl(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default6]:[rank6]: return forward_call(*args, **kwargs)
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default6]:[rank6]: return row_linear(
[default6]:[rank6]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default6]:[rank6]: out = F.linear(input, weight, bias)
[default6]:[rank6]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 613.94 MiB is free. Including non-PyTorch memory, this process has 78.72 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default7]:[rank7]: Traceback (most recent call last):
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default7]:[rank7]: trainer.train(dataloader)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default7]:[rank7]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default7]:[rank7]: outputs = self.pipeline_engine.train_batch_iter(
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default7]:[rank7]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default7]:[rank7]: output = model(**micro_batch)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default7]:[rank7]: sharded_logits = self.model(
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default7]:[rank7]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default7]:[rank7]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default7]:[rank7]: output = self.pp_block(**new_kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward
[default7]:[rank7]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"]
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward
[default7]:[rank7]: hidden_states = self.down_proj(self.split_silu_mul(merged_states))
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default7]:[rank7]: return self._call_impl(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default7]:[rank7]: return forward_call(*args, **kwargs)
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward
[default7]:[rank7]: return row_linear(
[default7]:[rank7]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear
[default7]:[rank7]: out = F.linear(input, weight, bias)
[default7]:[rank7]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 685.94 MiB is free. Including non-PyTorch memory, this process has 78.65 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[default2]:[rank2]: Traceback (most recent call last):
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default2]:[rank2]: trainer.train(dataloader)
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default2]:[rank2]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default2]:[rank2]: outputs = self.pipeline_engine.train_batch_iter(
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default2]:[rank2]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default2]:[rank2]: output = model(**micro_batch)
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank2]: return self._call_impl(*args, **kwargs)
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank2]: return forward_call(*args, **kwargs)
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default2]:[rank2]: sharded_logits = self.model(
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default2]:[rank2]: return self._call_impl(*args, **kwargs)
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default2]:[rank2]: return forward_call(*args, **kwargs)
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default2]:[rank2]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default1]:[rank1]: Traceback (most recent call last):
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default1]:[rank1]: trainer.train(dataloader)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter(
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default1]:[rank1]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default1]:[rank1]: output = model(**micro_batch)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default1]:[rank1]: sharded_logits = self.model(
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default1]:[rank1]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default1]:[rank1]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default1]:[rank1]: return self._call_impl(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default1]:[rank1]: return forward_call(*args, **kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default1]:[rank1]: output = self.pp_block(**new_kwargs)
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: Traceback (most recent call last):
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module>
[default0]:[rank0]: trainer.train(dataloader)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train
[default0]:[rank0]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step
[default0]:[rank0]: outputs = self.pipeline_engine.train_batch_iter(
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter
[default0]:[rank0]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward
[default0]:[rank0]: output = model(**micro_batch)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward
[default0]:[rank0]: sharded_logits = self.model(
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward
[default0]:[rank0]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0]
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states
[default0]:[rank0]: hidden_encoder_states = encoder_block(**hidden_encoder_states)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[default0]:[rank0]: return self._call_impl(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[default0]:[rank0]: return forward_call(*args, **kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward
[default0]:[rank0]: output = self.pp_block(**new_kwargs)
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default0]:[rank0]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: output = self.pp_block(**new_kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward |
|
[default0]:[rank0]: hidden_states = self.down_proj(self.split_silu_mul(merged_states)) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default1]:[rank1]: hidden_states = self.down_proj(self.split_silu_mul(merged_states)) |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward |
|
[default0]:[rank0]: return row_linear( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward |
|
[default1]:[rank1]: return row_linear( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward |
|
[default2]:[rank2]: hidden_states = self.down_proj(self.split_silu_mul(merged_states)) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: out = F.linear(input, weight, bias) |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 1017.94 MiB is free. Including non-PyTorch memory, this process has 78.32 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 479, in row_linear |
|
[default0]:[rank0]: out = differentiable_reduce_scatter_sum(out, group=group) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/distributed_differentiable_primitives.py", line 145, in differentiable_reduce_scatter_sum |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 159, in forward |
|
[default0]:[rank0]: return DifferentiableReduceScatterSum.apply(tensor, group) |
|
[default2]:[rank2]: return row_linear( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply |
|
[default0]:[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc] |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 474, in row_linear |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/distributed_differentiable_primitives.py", line 111, in forward |
|
[default0]:[rank0]: sharded_tensor = torch.empty( |
|
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU |
|
[default2]:[rank2]: out = F.linear(input, weight, bias) |
|
[default2]:[rank2]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB. GPU has a total capacity of 79.33 GiB of which 851.94 MiB is free. Including non-PyTorch memory, this process has 78.48 GiB memory in use. Of the allocated memory 71.33 GiB is allocated by PyTorch, and 62.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
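Note on the tracebacks above: all three complete OutOfMemoryError messages fail on the same 1024.00 MiB allocation inside row_linear, and each reports only 62.77 MiB reserved-but-unallocated, so fragmentation is unlikely to be the main problem; the 71.33 GiB genuinely allocated by PyTorch simply leaves too little headroom on the 79.33 GiB cards. The allocator setting the messages recommend is still cheap to try. A minimal sketch, not part of the original run, showing one way to apply it:

# Sketch only: apply the allocator hint from the OOM messages above.
# PYTORCH_CUDA_ALLOC_CONF is read when CUDA memory is first allocated,
# so set it before importing torch (or export it in the launch script).
import os

os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

# Optional check: once the model is on the GPU, this summary shows how close
# the process sits to the 79.33 GiB capacity reported in the errors above.
if torch.cuda.is_available():
    print(torch.cuda.memory_summary(device=0, abbreviated=True))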
|
E0702 20:16:18.226000 140433821124416 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 713396) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
E0702 20:16:18.231000 139673748735808 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 745667) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 9 (local_rank: 1)
  exitcode  : 1 (pid: 745668)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 10 (local_rank: 2)
  exitcode  : 1 (pid: 745669)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 11 (local_rank: 3)
  exitcode  : 1 (pid: 745670)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 12 (local_rank: 4)
  exitcode  : 1 (pid: 745671)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 13 (local_rank: 5)
  exitcode  : 1 (pid: 745672)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 14 (local_rank: 6)
  exitcode  : 1 (pid: 745673)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[7]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 15 (local_rank: 7)
  exitcode  : 1 (pid: 745674)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-174-36.ec2.internal
  rank      : 8 (local_rank: 0)
  exitcode  : 1 (pid: 745667)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
|
Traceback (most recent call last):
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 713397)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 713398)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 713399)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 4 (local_rank: 4)
  exitcode  : 1 (pid: 713400)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 5 (local_rank: 5)
  exitcode  : 1 (pid: 713401)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 6 (local_rank: 6)
  exitcode  : 1 (pid: 713402)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[7]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 7 (local_rank: 7)
  exitcode  : 1 (pid: 713403)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-07-02_20:16:18
  host      : ip-26-0-173-202.ec2.internal
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 713396)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
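Every failure entry in the two summaries above reports error_file: <N/A>, which is why the per-rank exceptions only appear in the interleaved stdout rather than in structured error files. The linked elastic errors page documents wrapping the training entrypoint with the @record decorator so each child's exception is serialized for the agent. A minimal sketch, assuming run_train.py exposes a main() function (the function body here is a placeholder, not the actual script):

# Sketch only: @record makes torch.distributed.elastic capture each child
# process's exception into an error file instead of reporting <N/A>.
from torch.distributed.elastic.multiprocessing.errors import record


@record
def main() -> None:
    ...  # placeholder for the actual training setup and loop in run_train.py


if __name__ == "__main__":
    main()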
|
srun: error: ip-26-0-173-202: task 0: Exited with exit code 1
srun: error: ip-26-0-174-36: task 1: Exited with exit code 1
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details. |
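The closing hint refers to huggingface_hub's optional Rust-based transfer backend. A minimal sketch of opting in, assuming hf_transfer has been installed with pip install hf_transfer; the repository and file names below are placeholders, not taken from this run:

# Sketch only: enable hf_transfer for huggingface_hub uploads and downloads.
# The flag is read when huggingface_hub is imported, so set it first.
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi

api = HfApi()
# Example call (placeholders; uncomment and fill in to use):
# api.upload_file(
#     path_or_fileobj="log.out",
#     path_in_repo="logs/log.out",
#     repo_id="<user>/<repo>",
# )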
|
|