======================== |
|
START TIME: Thu Jul 4 00:01:38 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well. |
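
A minimal sketch of the option the message above refers to (the token value here is a placeholder, not taken from this run):

    from huggingface_hub import login

    # Passing add_to_git_credential=True also stores the token in the git
    # credential helper, which silences the warning above.
    login(token="hf_xxx", add_to_git_credential=True)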
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0704 00:01:41.514000 139693704775488 torch/distributed/run.py:757] |
|
W0704 00:01:41.514000 139693704775488 torch/distributed/run.py:757] ***************************************** |
|
W0704 00:01:41.514000 139693704775488 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
|
W0704 00:01:41.514000 139693704775488 torch/distributed/run.py:757] ***************************************** |
|
[default0]:07/04/2024 00:01:57 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Vocab Size Padding] Padded vocab (size: 50257) with 1 dummy tokens (new size: 50258) |
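
The single dummy token is most likely added so the embedding matrix splits evenly across the tp=2 shards; a quick check under that assumption:

    # 50257 is odd, so it cannot be split across 2 tensor-parallel ranks without padding.
    print(50257 % 2, 50258 % 2)   # -> 1 0; 50258 gives two equal shards of 25129 rows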
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Config: |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: run='%date_%jobid', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: seed=42, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: step=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: consumed_train_samples=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: benchmark_csv_path=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: ignore_sanity_checks=True), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: parallelism=ParallelismArgs(dp=2, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pp=2, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp=2, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f2bf27a5090>, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tp_linear_async_communication=False, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: expert_parallel_size=1), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: eos_token_id=2, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_act='silu', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_size=2048, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: initializer_range=0.02, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: intermediate_size=4096, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: is_llama_config=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: max_position_embeddings=4096, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_attention_heads=32, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_hidden_layers=24, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_key_value_heads=32, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pad_token_id=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pretraining_tp=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rms_norm_eps=1e-05, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_scaling=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_theta=10000.0, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tie_word_embeddings=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: use_cache=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: vocab_size=50258), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: init_method=RandomInit(std=0.025), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dtype=torch.bfloat16, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: make_vocab_size_divisible_by=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: ddp_bucket_cap_mb=25), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer_revision=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokenizer_max_length=None), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoint_interval=100000, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: save_initial_state=False, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: resume_checkpoint_path=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: log_level_replica='info', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: iteration_step_info_interval=1), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: train_steps=20, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: micro_batch_size=16, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: batch_accumulation_per_replica=32, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: val_check_interval=-1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: limit_val_batches=0, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: limit_test_batches=0), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: adam_beta1=0.9, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: adam_beta2=0.95, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: torch_adam_is_fused=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: name='adamW'), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: zero_stage=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: weight_decay=0.01, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: clip_grad=1.0, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: accumulate_grad_in_fp32=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_warmup_steps=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_warmup_style='linear', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_style='linear', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_steps=19, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lr_decay_starting_step=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: min_decay_lr=1e-05)), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: start_training_step=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hf_dataset_splits='train', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hf_dataset_config_name=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: dataset_overwrite_cache=False, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: text_column_name='text'), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: seed=42, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_loading_workers=0))], |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/8_GPUS/dp-2_tp-2_pp-2_mbz-16')), |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: lighteval=None) |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Model Config: |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: eos_token_id=2, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_act='silu', |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: hidden_size=2048, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: initializer_range=0.02, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: intermediate_size=4096, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: is_llama_config=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: max_position_embeddings=4096, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_attention_heads=32, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_hidden_layers=24, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: num_key_value_heads=32, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pad_token_id=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: pretraining_tp=1, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rms_norm_eps=1e-05, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_scaling=None, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: rope_theta=10000.0, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: tie_word_embeddings=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: use_cache=True, |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: vocab_size=50258) |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Building model.. |
|
[default0]:07/04/2024 00:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Setting PP block ranks... |
|
[default0]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Total number of parameters: 1.21G (2313.02MiB) |
|
[default0]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Local number of parameters: 345M (658.27MiB) |
|
[default0]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [After model building] Memory usage: 672.29MiB. Peak allocated: 674.32MiB Peak reserved: 690.00MiB |
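
A rough cross-check of the reported sizes, assuming parameters are held in bf16 (2 bytes each, per dtype=torch.bfloat16 in the config above):

    total_params = 1.21e9   # whole model
    local_params = 345e6    # this rank's pipeline stage (pp=0) after tp=2 sharding
    bytes_per_param = 2     # torch.bfloat16
    print(total_params * bytes_per_param / 2**20)  # ~2308 MiB vs. 2313.02 MiB reported
    print(local_params * bytes_per_param / 2**20)  # ~658 MiB vs. 658.27 MiB reported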
|
[default0]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: No checkpoint path provided. |
|
[default0]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Parametrizing model parameters using StandardParametrizator |
|
[default3]:07/04/2024 00:02:09 [INFO|DP=1|PP=0|TP=1|ip-26-0-169-139]: No checkpoint path provided. |
|
[default2]:07/04/2024 00:02:09 [INFO|DP=1|PP=0|TP=0|ip-26-0-169-139]: No checkpoint path provided. |
|
[default4]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=0|ip-26-0-169-139]: Local number of parameters: 261M (498.24MiB) |
|
[default4]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=0|ip-26-0-169-139]: [After model building] Memory usage: 508.26MiB. Peak allocated: 510.29MiB Peak reserved: 526.00MiB |
|
[default4]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=0|ip-26-0-169-139]: No checkpoint path provided. |
|
[default1]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: Local number of parameters: 345M (658.27MiB) |
|
[default1]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: [After model building] Memory usage: 672.29MiB. Peak allocated: 674.32MiB Peak reserved: 690.00MiB |
|
[default1]:07/04/2024 00:02:09 [INFO|DP=0|PP=0|TP=1|ip-26-0-169-139]: No checkpoint path provided. |
|
[default5]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=1|ip-26-0-169-139]: Local number of parameters: 261M (498.24MiB) |
|
[default5]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=1|ip-26-0-169-139]: [After model building] Memory usage: 508.26MiB. Peak allocated: 510.29MiB Peak reserved: 526.00MiB |
|
[default5]:07/04/2024 00:02:09 [INFO|DP=0|PP=1|TP=1|ip-26-0-169-139]: No checkpoint path provided. |
|
[default7]:07/04/2024 00:02:09 [INFO|DP=1|PP=1|TP=1|ip-26-0-169-139]: No checkpoint path provided. |
|
[default6]:07/04/2024 00:02:09 [INFO|DP=1|PP=1|TP=0|ip-26-0-169-139]: No checkpoint path provided. |
|
[default0]:07/04/2024 00:02:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/04/2024 00:02:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/04/2024 00:02:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [ZeRO sharding] DP Rank 0 has 173M out of 345M (50.00%) params' optimizer states |
|
[default0]:07/04/2024 00:02:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [ZeRO sharding] DP Rank 1 has 173M out of 345M (50.00%) params' optimizer states |
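
With zero_stage=1 the optimizer states are sharded across the dp=2 replicas, which is where the two 50.00% shares come from; a quick check under that assumption:

    print(345e6 / 2)   # ~172.5e6, matching the ~173M of optimizer-state params per DP rank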
|
[default0]:07/04/2024 00:02:13 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/04/2024 00:02:13 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Using `datasets` library |
|
[default0]:07/04/2024 00:02:13 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
|
[default0]:07/04/2024 00:02:13 [WARNING|DP=0|PP=0|TP=0|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Training Plan] There are 1 training stages |
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Stage Training Stage] start from step 1 |
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: |
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: [Start training] datetime: 2024-07-04 00:02:14.232785 | mbs: 16 | grad_accum: 32 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
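
How the reported global_batch_size and the 8-GPU layout follow from the logged config (a worked check, not additional output from the run):

    dp, tp, pp = 2, 2, 2
    micro_batch_size = 16
    batch_accumulation_per_replica = 32
    sequence_length = 4096

    world_size = dp * tp * pp                                                    # 8 GPUs
    global_batch_size = micro_batch_size * batch_accumulation_per_replica * dp   # 1024
    tokens_per_step = global_batch_size * sequence_length                        # 4,194,304 tokens per step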
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps |
|
[default0]:07/04/2024 00:02:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-169-139]: Memory usage: 2647.09MiB. Peak allocated 2647.09MiB. Peak reserved: 2668.00MiB |
|
[default5]:07/04/2024 00:02:14 [WARNING|DP=0|PP=1|TP=1|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:07/04/2024 00:02:14 [WARNING|DP=1|PP=0|TP=1|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:07/04/2024 00:02:14 [WARNING|DP=1|PP=0|TP=0|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:07/04/2024 00:02:14 [WARNING|DP=0|PP=1|TP=0|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:07/04/2024 00:02:14 [WARNING|DP=0|PP=0|TP=1|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:07/04/2024 00:02:14 [WARNING|DP=1|PP=1|TP=0|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:07/04/2024 00:02:14 [WARNING|DP=1|PP=1|TP=1|ip-26-0-169-139]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default7]:Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:[rank0]: Traceback (most recent call last): |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default0]:[rank0]: trainer.train(dataloader) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default0]:[rank0]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default0]:[rank0]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default0]:[rank0]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default0]:[rank0]: output = model(**micro_batch) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default0]:[rank0]: sharded_logits = self.model( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default0]:[rank0]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default0]:[rank0]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default0]:[rank0]: output = self.pp_block(**new_kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default0]:[rank0]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 172, in forward |
|
[default0]:[rank0]: hidden_states = self.down_proj(self.split_silu_mul(merged_states)) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 128, in forward |
|
[default0]:[rank0]: return self.act(gate_states) * up_states |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/nn/activations.py", line 149, in forward |
|
[default0]:[rank0]: return nn.functional.silu(input) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/functional.py", line 2102, in silu |
|
[default0]:[rank0]: return torch._C._nn.silu(input) |
|
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB. GPU |
|
[default1]:[rank1]: Traceback (most recent call last): |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default1]:[rank1]: trainer.train(dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default1]:[rank1]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default1]:[rank1]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default1]:[rank1]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default1]:[rank1]: output = model(**micro_batch) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default1]:[rank1]: sharded_logits = self.model( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default1]:[rank1]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default1]:[rank1]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default1]:[rank1]: output = self.pp_block(**new_kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default1]:[rank1]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 171, in forward |
|
[default1]:[rank1]: merged_states = self.gate_up_proj(hidden_states) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default1]:[rank1]: return self._call_impl(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default1]:[rank1]: return forward_call(*args, **kwargs) |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward |
|
[default1]:[rank1]: return column_linear( |
|
[default1]:[rank1]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear |
|
[default1]:[rank1]: return F.linear(input, weight, bias) |
|
[default1]:[rank1]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU has a total capacity of 79.33 GiB of which 497.94 MiB is free. Including non-PyTorch memory, this process has 78.83 GiB memory in use. Of the allocated memory 67.60 GiB is allocated by PyTorch, and 301.87 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default3]:[rank3]: Traceback (most recent call last): |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default3]:[rank3]: trainer.train(dataloader) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default3]:[rank3]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default3]:[rank3]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default3]:[rank3]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default3]:[rank3]: output = model(**micro_batch) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default3]:[rank3]: sharded_logits = self.model( |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default3]:[rank3]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default3]:[rank3]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default3]:[rank3]: output = self.pp_block(**new_kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default3]:[rank3]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 171, in forward |
|
[default3]:[rank3]: merged_states = self.gate_up_proj(hidden_states) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default3]:[rank3]: return self._call_impl(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default3]:[rank3]: return forward_call(*args, **kwargs) |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward |
|
[default3]:[rank3]: return column_linear( |
|
[default3]:[rank3]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear |
|
[default3]:[rank3]: return F.linear(input, weight, bias) |
|
[default3]:[rank3]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU has a total capacity of 79.33 GiB of which 497.94 MiB is free. Including non-PyTorch memory, this process has 78.83 GiB memory in use. Of the allocated memory 67.60 GiB is allocated by PyTorch, and 301.87 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
|
[default2]:[rank2]: Traceback (most recent call last): |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default2]:[rank2]: trainer.train(dataloader) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default2]:[rank2]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default2]:[rank2]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 278, in train_batch_iter |
|
[default2]:[rank2]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default2]:[rank2]: output = model(**micro_batch) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default2]:[rank2]: sharded_logits = self.model( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default2]:[rank2]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default2]:[rank2]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default2]:[rank2]: output = self.pp_block(**new_kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default2]:[rank2]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 171, in forward |
|
[default2]:[rank2]: merged_states = self.gate_up_proj(hidden_states) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default2]:[rank2]: return self._call_impl(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default2]:[rank2]: return forward_call(*args, **kwargs) |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward |
|
[default2]:[rank2]: return column_linear( |
|
[default2]:[rank2]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear |
|
[default2]:[rank2]: return F.linear(input, weight, bias) |
|
[default2]:[rank2]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU has a total capacity of 79.33 GiB of which 497.94 MiB is free. Including non-PyTorch memory, this process has 78.83 GiB memory in use. Of the allocated memory 67.60 GiB is allocated by PyTorch, and 301.87 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
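
The four ranks on the first pipeline stage (ranks 0-3) all hit CUDA OOM inside the MLP forward. The error text itself suggests one mitigation; a sketch of applying it (the setting must be in place before the first CUDA allocation, e.g. exported by the launcher), though lowering micro_batch_size from 16 would reduce activation memory more directly:

    import os

    # Suggested by the OOM message to reduce fragmentation of reserved-but-unallocated memory.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"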
|
W0704 00:02:26.667000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677683 closing signal SIGTERM |
|
W0704 00:02:26.668000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677684 closing signal SIGTERM |
|
W0704 00:02:26.668000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677685 closing signal SIGTERM |
|
W0704 00:02:26.668000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677686 closing signal SIGTERM |
|
W0704 00:02:26.668000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677687 closing signal SIGTERM |
|
W0704 00:02:26.668000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677688 closing signal SIGTERM |
|
W0704 00:02:26.670000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 677689 closing signal SIGTERM |
|
E0704 00:02:28.584000 139693704775488 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 677682) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10 |
|
Traceback (most recent call last): |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module> |
|
sys.exit(main()) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
|
return f(*args, **kwargs) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
|
run(args) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
|
elastic_launch( |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
|
return launch_agent(self._config, self._entrypoint, list(args)) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
|
raise ChildFailedError( |
|
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
|
============================================================ |
|
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED |
|
------------------------------------------------------------ |
|
Failures: |
|
<NO_OTHER_FAILURES> |
|
------------------------------------------------------------ |
|
Root Cause (first observed failure): |
|
[0]: |
|
time : 2024-07-04_00:02:26 |
|
host : ip-26-0-169-139.ec2.internal |
|
rank : 0 (local_rank: 0) |
|
exitcode : 1 (pid: 677682) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
============================================================ |
|
srun: error: ip-26-0-169-139: task 0: Exited with exit code 1 |
|
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details. |
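
A sketch of acting on that suggestion (assumes the hf_transfer package is installed alongside huggingface_hub):

    import os

    # Opt in to the Rust-based transfer backend used by huggingface_hub for uploads/downloads.
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"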
|
|