|
======================== |
|
START TIME: Wed Jul 3 23:34:39 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` to this function directly, or `--add-to-git-credential` when using `huggingface-cli`, if you want to set the git credential as well.
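
The same thing can be done from Python rather than the CLI; a minimal sketch, assuming the token is exported as HF_TOKEN (the variable name here is an assumption):

import os
from huggingface_hub import login

# Log in and also store the token in the git credential helper,
# addressing the warning above. HF_TOKEN is a hypothetical variable name.
login(token=os.environ["HF_TOKEN"], add_to_git_credential=True)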
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0703 23:34:42.322000 139899475093312 torch/distributed/run.py:757] |
|
W0703 23:34:42.322000 139899475093312 torch/distributed/run.py:757] ***************************************** |
|
W0703 23:34:42.322000 139899475093312 torch/distributed/run.py:757] Setting the OMP_NUM_THREADS environment variable to 1 by default for each process to avoid overloading your system; tune the variable further for optimal performance in your application as needed.
|
W0703 23:34:42.322000 139899475093312 torch/distributed/run.py:757] ***************************************** |
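
torchrun pins OMP_NUM_THREADS to 1 per process by default. If CPU-side work turns out to be a bottleneck, the variable can be raised before torch is imported; a sketch, with 8 threads chosen arbitrarily:

import os

# Must be set before torch initializes its thread pools; 8 is an arbitrary example.
os.environ.setdefault("OMP_NUM_THREADS", "8")

import torch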
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Config: |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: run='%date_%jobid', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: seed=42, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: step=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: consumed_train_samples=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: benchmark_csv_path=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: ignore_sanity_checks=True), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: parallelism=ParallelismArgs(dp=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pp=8, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tp=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f205f67c730>, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tp_linear_async_communication=False, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: expert_parallel_size=1), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: eos_token_id=2, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hidden_act='silu', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hidden_size=2048, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: initializer_range=0.02, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: intermediate_size=4096, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: is_llama_config=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: max_position_embeddings=4096, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_attention_heads=32, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_hidden_layers=24, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_key_value_heads=32, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pad_token_id=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pretraining_tp=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rms_norm_eps=1e-05, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rope_scaling=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rope_theta=10000.0, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tie_word_embeddings=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: use_cache=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: vocab_size=50257), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: init_method=RandomInit(std=0.025), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: dtype=torch.bfloat16, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: make_vocab_size_divisible_by=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: ddp_bucket_cap_mb=25), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tokenizer_revision=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tokenizer_max_length=None), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: checkpoint_interval=100000, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: save_initial_state=False, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: resume_checkpoint_path=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: log_level_replica='info', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: iteration_step_info_interval=1), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: train_steps=20, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: micro_batch_size=8, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: batch_accumulation_per_replica=128, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: val_check_interval=-1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: limit_val_batches=0, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: limit_test_batches=0), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: adam_beta1=0.9, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: adam_beta2=0.95, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: torch_adam_is_fused=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: name='adamW'), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: zero_stage=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: weight_decay=0.01, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: clip_grad=1.0, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: accumulate_grad_in_fp32=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lr_warmup_steps=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lr_warmup_style='linear', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lr_decay_style='linear', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lr_decay_steps=19, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lr_decay_starting_step=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: min_decay_lr=1e-05)), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: start_training_step=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hf_dataset_splits='train', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hf_dataset_config_name=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: dataset_overwrite_cache=False, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: text_column_name='text'), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: seed=42, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_loading_workers=0))], |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/8_GPUS/dp-1_tp-1_pp-8_mbz-8')), |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: lighteval=None) |
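
The config above fixes the job's shape: all 8 GPUs go to pipeline parallelism, and each optimizer step consumes a global batch of 1024 sequences of 4096 tokens. The arithmetic, for reference:

# Topology: world size is the product of the three parallelism degrees.
dp, pp, tp = 1, 8, 1
world_size = dp * pp * tp                               # 8 GPUs, one per pipeline stage

# Global batch = micro-batch size * gradient accumulation * data-parallel degree.
micro_batch_size, grad_accum, seq_len = 8, 128, 4096
global_batch_size = micro_batch_size * grad_accum * dp  # 1024 sequences, as logged below
tokens_per_step = global_batch_size * seq_len           # 4,194,304 tokens per step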
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Model Config: |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: eos_token_id=2, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hidden_act='silu', |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: hidden_size=2048, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: initializer_range=0.02, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: intermediate_size=4096, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: is_llama_config=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: max_position_embeddings=4096, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_attention_heads=32, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_hidden_layers=24, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: num_key_value_heads=32, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pad_token_id=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: pretraining_tp=1, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rms_norm_eps=1e-05, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rope_scaling=None, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: rope_theta=10000.0, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: tie_word_embeddings=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: use_cache=True, |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: vocab_size=50257) |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Building model.. |
|
[default0]:07/03/2024 23:34:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Setting PP block ranks... |
|
[default2]:07/03/2024 23:35:14 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB) |
|
[default7]:07/03/2024 23:35:14 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: Local number of parameters: 103M (196.32MiB) |
|
[default2]:07/03/2024 23:35:14 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB |
|
[default2]:07/03/2024 23:35:14 [INFO|DP=0|PP=2|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default7]:07/03/2024 23:35:14 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.33MiB Peak reserved: 200.00MiB |
|
[default7]:07/03/2024 23:35:14 [INFO|DP=0|PP=7|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default1]:07/03/2024 23:35:14 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB) |
|
[default1]:07/03/2024 23:35:14 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB |
|
[default1]:07/03/2024 23:35:14 [INFO|DP=0|PP=1|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default5]:07/03/2024 23:35:14 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB) |
|
[default5]:07/03/2024 23:35:14 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB |
|
[default5]:07/03/2024 23:35:14 [INFO|DP=0|PP=5|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default4]:07/03/2024 23:35:14 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: Local number of parameters: 126M (240.02MiB) |
|
[default4]:07/03/2024 23:35:14 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 243.03MiB. Peak allocated: 245.06MiB Peak reserved: 262.00MiB |
|
[default4]:07/03/2024 23:35:14 [INFO|DP=0|PP=4|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default6]:07/03/2024 23:35:14 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: Local number of parameters: 168M (320.03MiB) |
|
[default6]:07/03/2024 23:35:14 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 324.04MiB. Peak allocated: 326.07MiB Peak reserved: 336.00MiB |
|
[default6]:07/03/2024 23:35:14 [INFO|DP=0|PP=6|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default3]:07/03/2024 23:35:14 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: Local number of parameters: 168M (320.03MiB) |
|
[default3]:07/03/2024 23:35:14 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 324.04MiB. Peak allocated: 326.07MiB Peak reserved: 336.00MiB |
|
[default3]:07/03/2024 23:35:14 [INFO|DP=0|PP=3|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Total number of parameters: 1.21G (2312.82MiB) |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Local number of parameters: 271M (516.35MiB) |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [After model building] Memory usage: 520.36MiB. Peak allocated: 522.39MiB Peak reserved: 534.00MiB |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: No checkpoint path provided. |
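
These sizes are roughly consistent with a back-of-the-envelope count for this LlamaConfig (norms omitted; with pipeline parallelism the tied embedding and lm_head sit on different stages, so both copies show up in the total):

hidden, inter, layers, vocab = 2048, 4096, 24, 50257

attn = 4 * hidden * hidden          # q, k, v, o projections
mlp = 3 * hidden * inter            # gate, up and down projections
per_layer = attn + mlp              # ~41.9M parameters per decoder layer

total = layers * per_layer + 2 * vocab * hidden  # + embedding and lm_head, ~1.21e9
print(total * 2 / 2**20)            # bf16 bytes, ~2313 MiB, matching the 2312.82MiB above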
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Parametrizing model parameters using StandardParametrizator |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/03/2024 23:35:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [ZeRO sharding] DP Rank 0 has 271M out of 271M (100.00%) params' optimizer states |
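
With dp=1 there is only one shard, so ZeRO-1 cannot spread anything: rank 0 keeps 100% of the optimizer state for its 271M parameters. A rough per-parameter estimate under this config (AdamW, accumulate_grad_in_fp32=True), assuming the usual buffer layout:

local_params = 271e6  # pipeline stage 0, from the log

# Assumed steady-state bytes per parameter: bf16 weight (2) + fp32 gradient
# accumulation (4) + fp32 master weight (4) + fp32 Adam m and v (4 + 4).
bytes_per_param = 2 + 4 + 4 + 8
print(local_params * bytes_per_param / 2**30)  # ~4.5 GiB on rank 0, before activations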
|
[default0]:07/03/2024 23:35:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/03/2024 23:35:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Using `datasets` library |
|
[default0]:07/03/2024 23:35:15 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
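
The data side is plain `datasets` plus `transformers`; a minimal sketch of the equivalent pipeline, independent of nanotron (the arguments mirror the PretrainDatasetsArgs above):

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
ds = load_dataset("roneneldan/TinyStories", split="train")

# Tokenize the configured text column; num_proc mirrors
# dataset_processing_num_proc_per_process=64 in the config.
ds = ds.map(lambda batch: tokenizer(batch["text"]), batched=True, num_proc=64)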
|
[default0]:07/03/2024 23:35:15 [WARNING|DP=0|PP=0|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [Training Plan] There are 1 training stages |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [Stage Training Stage] start from step 1 |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: [Start training] datetime: 2024-07-03 23:35:16.319770 | mbs: 8 | grad_accum: 128 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps |
|
[default0]:07/03/2024 23:35:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-171-88]: Memory usage: 2585.75MiB. Peak allocated 2585.75MiB. Peak reserved: 2602.00MiB |
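
The 2585.75 MiB reported before the first step is close to the bf16 weights plus two fp32 copies; one plausible breakdown, assuming the fp32 gradient-accumulation and master-weight buffers are allocated up front and the Adam moments lazily:

weights_bf16 = 516.35          # MiB, local bf16 weights from the log
fp32_copy = 271e6 * 4 / 2**20  # ~1034 MiB for one fp32 copy of 271M parameters

# bf16 weights + fp32 gradient accumulation + fp32 master ~= 2584 MiB
print(weights_bf16 + 2 * fp32_copy)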
|
[default7]:07/03/2024 23:35:16 [WARNING|DP=0|PP=7|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default2]:07/03/2024 23:35:16 [WARNING|DP=0|PP=2|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default1]:07/03/2024 23:35:16 [WARNING|DP=0|PP=1|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default6]:07/03/2024 23:35:16 [WARNING|DP=0|PP=6|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default3]:07/03/2024 23:35:16 [WARNING|DP=0|PP=3|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default5]:07/03/2024 23:35:16 [WARNING|DP=0|PP=5|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default4]:07/03/2024 23:35:16 [WARNING|DP=0|PP=4|TP=0|ip-26-0-171-88]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:[rank0]: Traceback (most recent call last): |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py", line 237, in <module> |
|
[default0]:[rank0]: trainer.train(dataloader) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 429, in train |
|
[default0]:[rank0]: outputs, loss_avg = self.training_step(dataloader=self.current_dataloader) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/trainer.py", line 462, in training_step |
|
[default0]:[rank0]: outputs = self.pipeline_engine.train_batch_iter( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 252, in train_batch_iter |
|
[default0]:[rank0]: output = self.forward(context=context, state=state, micro_batch=micro_batch, model=model) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/engine.py", line 44, in forward |
|
[default0]:[rank0]: output = model(**micro_batch) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 891, in forward |
|
[default0]:[rank0]: sharded_logits = self.model( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 764, in forward |
|
[default0]:[rank0]: return self.forward_with_hidden_states(input_ids=input_ids, input_mask=input_mask)[0] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 780, in forward_with_hidden_states |
|
[default0]:[rank0]: hidden_encoder_states = encoder_block(**hidden_encoder_states) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/pipeline_parallel/block.py", line 151, in forward |
|
[default0]:[rank0]: output = self.pp_block(**new_kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 637, in forward |
|
[default0]:[rank0]: hidden_states = self.mlp(hidden_states=hidden_states)["hidden_states"] |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/models/llama.py", line 171, in forward |
|
[default0]:[rank0]: merged_states = self.gate_up_proj(hidden_states) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl |
|
[default0]:[rank0]: return self._call_impl(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl |
|
[default0]:[rank0]: return forward_call(*args, **kwargs) |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/nn.py", line 87, in forward |
|
[default0]:[rank0]: return column_linear( |
|
[default0]:[rank0]: File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/src/nanotron/parallel/tensor_parallel/functional.py", line 359, in column_linear |
|
[default0]:[rank0]: return F.linear(input, weight, bias) |
|
[default0]:[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU |
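
The failed 512.00 MiB allocation matches, to the byte, the bf16 output of the fused gate_up_proj for one micro-batch, so the pressure here comes from activations rather than weights; shrinking micro_batch_size, or splitting intermediate_size with tensor parallelism, reduces this tensor directly:

mbs, seq_len, inter = 8, 4096, 4096

# F.linear output of the fused gate+up projection, bf16 = 2 bytes per element.
out_elements = mbs * seq_len * (2 * inter)
print(out_elements * 2 / 2**20)  # 512.0 MiB, exactly the failed allocation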
|
W0703 23:35:37.617000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103425 closing signal SIGTERM |
|
W0703 23:35:37.617000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103426 closing signal SIGTERM |
|
W0703 23:35:37.617000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103427 closing signal SIGTERM |
|
W0703 23:35:37.620000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103428 closing signal SIGTERM |
|
W0703 23:35:37.620000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103429 closing signal SIGTERM |
|
W0703 23:35:37.622000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103430 closing signal SIGTERM |
|
W0703 23:35:37.622000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 1103431 closing signal SIGTERM |
|
E0703 23:35:39.844000 139899475093312 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 1103424) of binary: /fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/python3.10 |
|
Traceback (most recent call last): |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/bin/torchrun", line 8, in <module> |
|
sys.exit(main()) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
|
return f(*args, **kwargs) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
|
run(args) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
|
elastic_launch( |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
|
return launch_agent(self._config, self._entrypoint, list(args)) |
|
File "/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
|
raise ChildFailedError( |
|
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
|
============================================================ |
|
/fsx/ferdinandmom/ferdinand-hf/bench_cluster/nanotron/run_train.py FAILED |
|
------------------------------------------------------------ |
|
Failures: |
|
<NO_OTHER_FAILURES> |
|
------------------------------------------------------------ |
|
Root Cause (first observed failure): |
|
[0]: |
|
time : 2024-07-03_23:35:37 |
|
host : ip-26-0-171-88.ec2.internal |
|
rank : 0 (local_rank: 0) |
|
exitcode : 1 (pid: 1103424) |
|
error_file: <N/A> |
|
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
|
============================================================ |
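
The error_file is <N/A> because the entrypoint is not wrapped for elastic error propagation; per the linked page, decorating the entrypoint with `record` makes the child's traceback land in the error file on the next failure. A sketch:

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # training entrypoint, i.e. what run_train.py does

if __name__ == "__main__":
    main()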
|
srun: error: ip-26-0-171-88: task 0: Exited with exit code 1 |
|
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details. |
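
A sketch of opting in, assuming the extra package is installed (pip install hf_transfer); the flag must be set before huggingface_hub is imported:

import os

# Enable the Rust-based transfer backend for large uploads and downloads.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi  # import only after the flag is set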
|
|