|
======================== |
|
START TIME: Wed Jul 3 04:02:26 UTC 2024 |
|
python3 version = Python 3.10.14 |
|
======================== |
|
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` to this function directly, or `--add-to-git-credential` when using `huggingface-cli`, if you want to set the git credential as well.
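The message above refers to huggingface_hub's login helper. For reference, a minimal equivalent call that also persists the git credential (the token is assumed here to be exported as HF_TOKEN):

    import os
    from huggingface_hub import login

    # add_to_git_credential=True also stores the token in the git
    # credential helper, as the warning above suggests.
    login(token=os.environ["HF_TOKEN"], add_to_git_credential=True)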
|
Token is valid (permission: write). |
|
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token |
|
Login successful |
|
Already on 'bench_cluster' |
|
M examples/config_tiny_llama.py |
|
M examples/config_tiny_llama.yaml |
|
M examples/train_tiny_llama.sh |
|
M src/nanotron/models/llama.py |
|
M src/nanotron/trainer.py |
|
Your branch is up to date with 'origin/bench_cluster'. |
|
Job status: RUNNING |
|
W0703 04:02:31.549000 140123801147200 torch/distributed/run.py:757]

W0703 04:02:31.549000 140123801147200 torch/distributed/run.py:757] *****************************************

W0703 04:02:31.549000 140123801147200 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

W0703 04:02:31.549000 140123801147200 torch/distributed/run.py:757] *****************************************

[the same OMP_NUM_THREADS warning is repeated verbatim by each of the remaining 7 per-node torchrun launchers]
|
[default0]:07/03/2024 04:02:56 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Vocab Size Padding] Padded vocab (size: 50257) with 3 dummy tokens (new size: 50260) |
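This padding makes the embedding matrix evenly shardable across the tensor-parallel ranks. A quick sketch of the arithmetic, assuming the effective divisor here is make_vocab_size_divisible_by x tp = 1 x 4 (both values appear in the config below):

    # Round the vocab up to the next multiple of the TP degree so each of
    # the 4 tensor-parallel ranks gets an equal slice of the embedding.
    vocab_size, divisor = 50257, 4
    padded = ((vocab_size + divisor - 1) // divisor) * divisor
    print(padded, padded - vocab_size)  # 50260, 3 dummy tokens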
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config: |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Config(general=GeneralArgs(project='bench_cluster', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: run='%date_%jobid', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: seed=42, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: step=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: consumed_train_samples=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: benchmark_csv_path=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: ignore_sanity_checks=True), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: parallelism=ParallelismArgs(dp=16, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pp=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp=4, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7fc1616188b0>, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tp_linear_async_communication=False, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: expert_parallel_size=1), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: eos_token_id=2, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_act='silu', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_size=2048, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: initializer_range=0.02, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: intermediate_size=4096, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: is_llama_config=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: max_position_embeddings=4096, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_attention_heads=32, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_hidden_layers=24, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_key_value_heads=32, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pad_token_id=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pretraining_tp=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rms_norm_eps=1e-05, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_scaling=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_theta=10000.0, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tie_word_embeddings=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: use_cache=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: vocab_size=50260), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: init_method=RandomInit(std=0.025), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dtype=torch.bfloat16, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: make_vocab_size_divisible_by=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: ddp_bucket_cap_mb=25), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer_revision=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokenizer_max_length=None), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoint_interval=100000, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: save_initial_state=False, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: resume_checkpoint_path=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: checkpoints_path_is_shared_file_system=False), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: logging=LoggingArgs(log_level='info', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: log_level_replica='info', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration_step_info_interval=1), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tokens=TokensArgs(sequence_length=4096, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: train_steps=20, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: micro_batch_size=8, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: batch_accumulation_per_replica=8, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: val_check_interval=-1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: limit_val_batches=0, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: limit_test_batches=0), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: adam_beta1=0.9, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: adam_beta2=0.95, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: torch_adam_is_fused=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: name='adamW'), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: zero_stage=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: weight_decay=0.01, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: clip_grad=1.0, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: accumulate_grad_in_fp32=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_warmup_steps=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_warmup_style='linear', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_style='linear', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_steps=19, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lr_decay_starting_step=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: min_decay_lr=1e-05)), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: data_stages=[DatasetStageArgs(name='Training Stage', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: start_training_step=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hf_dataset_splits='train', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hf_dataset_config_name=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dataset_processing_num_proc_per_process=64, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: dataset_overwrite_cache=False, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: text_column_name='text'), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: seed=42, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_loading_workers=0))], |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-16_tp-4_pp-1_mbz-8')), |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: lighteval=None) |
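For orientation, the run spans 64 GPUs arranged as dp x tp x pp = 16 x 4 x 1. A minimal sketch of how a flat rank could map onto that grid (the tp-fastest ordering is an assumption for illustration, not taken from nanotron internals):

    # 16 data-parallel replicas x 4 tensor-parallel shards x 1 pipeline stage.
    dp, tp, pp = 16, 4, 1
    world_size = dp * tp * pp  # 64 GPUs

    def coords(rank):
        # Assumed layout: tp varies fastest, then dp, then pp.
        return {"tp": rank % tp, "dp": (rank // tp) % dp, "pp": rank // (tp * dp)}

    print(world_size, coords(0), coords(5))
    # 64 {'tp': 0, 'dp': 0, 'pp': 0} {'tp': 1, 'dp': 1, 'pp': 0}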
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Model Config: |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: LlamaConfig(bos_token_id=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: eos_token_id=2, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_act='silu', |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: hidden_size=2048, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: initializer_range=0.02, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: intermediate_size=4096, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: is_llama_config=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: max_position_embeddings=4096, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_attention_heads=32, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_hidden_layers=24, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: num_key_value_heads=32, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pad_token_id=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: pretraining_tp=1, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rms_norm_eps=1e-05, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_scaling=None, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: rope_theta=10000.0, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: tie_word_embeddings=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: use_cache=True, |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: vocab_size=50260) |
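The total parameter count reported during model building below can be reproduced from this config. A back-of-the-envelope estimate, assuming the standard LLaMA layout (gate/up/down MLP, tied input/output embeddings):

    h, inter, layers, vocab = 2048, 4096, 24, 50260
    embed = vocab * h                        # tied with the LM head
    attn = 4 * h * h                         # q, k, v, o projections
    mlp = 3 * h * inter                      # gate, up, down
    per_layer = attn + mlp + 2 * h           # plus two RMSNorm weights
    total = embed + layers * per_layer + h   # plus the final norm
    print(f"{total / 1e9:.2f}G")             # ~1.11G, matching the log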
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Building model.. |
|
[default0]:07/03/2024 04:02:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Setting PP block ranks... |
|
[default2]:07/03/2024 04:03:10 [INFO|DP=10|PP=0|TP=2|ip-26-0-171-102]: No checkpoint path provided.

[the same "No checkpoint path provided." message is repeated by every one of the 64 ranks]

[default0]:07/03/2024 04:03:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Total number of parameters: 1.11G (2117.09MiB)

[default0]:07/03/2024 04:03:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Local number of parameters: 277M (529.27MiB)

[default0]:07/03/2024 04:03:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [After model building] Memory usage: 554.21MiB. Peak allocated: 606.24MiB Peak reserved: 608.00MiB

[the other DP=0 tensor-parallel ranks (TP=1..3) report the same local parameter count and memory usage]

[default0]:07/03/2024 04:03:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Parametrizing model parameters using StandardParametrizator
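The local figures above check out against bf16 storage (2 bytes per parameter, MiB = 2^20 bytes). A sanity computation, reusing the total from the estimate above and assuming the RMSNorm weights are replicated rather than sharded across TP ranks:

    total, tp, h, layers = 1_109_665_792, 4, 2048, 24
    norms = layers * 2 * h + h                # replicated on every TP rank
    per_rank = (total - norms) // tp + norms
    print(per_rank * 2 / 2**20)       # ~529.27 MiB, the logged local size
    print(4 * per_rank * 2 / 2**20)   # ~2117.09 MiB; the logged total is 4x the local figure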
|
[default0]:07/03/2024 04:03:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Optimizer Building] Using LearningRateForSP as learning rate |
|
[default0]:07/03/2024 04:03:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] Size of optimizer params per rank: |
|
[default0]:07/03/2024 04:03:14 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [ZeRO sharding] DP Rank 0 has 17.3M out of 277M (6.25%) params' optimizer states |
|
[DP Ranks 1 through 15 each report the same 17.3M out of 277M (6.25%) share of params' optimizer states]
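ZeRO stage 1 shards only the optimizer states, splitting them evenly across the 16 data-parallel replicas, which is where the uniform 6.25% share above comes from:

    dp = 16
    params_per_tp_rank = 277_000_000       # "277M" as logged (rounded)
    shard = params_per_tp_rank / dp
    print(f"{shard / 1e6:.1f}M, {100 / dp:.2f}%")  # 17.3M params' optimizer states, 6.25%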
|
[default0]:07/03/2024 04:03:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples |
|
[default0]:07/03/2024 04:03:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Using `datasets` library |
|
[default0]:07/03/2024 04:03:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4') |
|
[default0]:07/03/2024 04:03:16 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: Repo card metadata block was not found. Setting CardData to empty. |
|
[default0]:Repo card metadata block was not found. Setting CardData to empty. |
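The tokenizer load above is the standard transformers call; an equivalent standalone snippet (transformers 4.41.2 per the log):

    from transformers import AutoTokenizer

    # Same tokenizer the trainer reports loading.
    tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
    print(tok.vocab_size)  # 50257 here; padded to 50260 at model-build time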
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Training Plan] There are 1 training stages |
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Stage Training Stage] start from step 1 |
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: |
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: [Start training] datetime: 2024-07-03 04:03:18.220719 | mbs: 8 | grad_accum: 8 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0 |
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps |
|
[default0]:07/03/2024 04:03:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1678.92MiB. Peak allocated 1678.92MiB. Peak reserved: 1736.00MiB |
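The batch figures above are mutually consistent; a one-line check of the global batch size and the tokens processed per optimizer step:

    mbs, grad_accum, dp, seq_len = 8, 8, 16, 4096
    gbs = mbs * grad_accum * dp
    print(gbs, gbs * seq_len)  # 1024 samples/step, 4,194,304 tokens/step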
|
[the same "Repo card metadata block was not found. Setting CardData to empty." warning is then repeated by every remaining rank]
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default1]: warnings.warn( |
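The second warning, again printed identically by every rank, flags torch.distributed.all_reduce_coalesced for deprecation. A straightforward substitute is one plain all_reduce per tensor, using async handles so the reductions can still overlap. The sketch below assumes the process group is already initialized (as torchrun does for this job); all_reduce_list is a hypothetical helper name, not a call nanotron itself makes.

import torch.distributed as dist

def all_reduce_list(tensors, op=dist.ReduceOp.SUM, group=None):
    # Issue one public all_reduce per tensor; async_op=True returns a Work
    # handle per call so the reductions can proceed concurrently.
    handles = [dist.all_reduce(t, op=op, group=group, async_op=True) for t in tensors]
    for h in handles:
        h.wait()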
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default2]: warnings.warn( |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default3]: warnings.warn( |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default0]: warnings.warn( |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default3]: warnings.warn( |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default1]: warnings.warn( |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default2]: warnings.warn( |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default0]: warnings.warn( |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default5]: warnings.warn( |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default4]: warnings.warn( |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default6]: warnings.warn( |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default7]: warnings.warn( |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default3]: warnings.warn( |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default5]: warnings.warn( |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default0]: warnings.warn( |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default2]: warnings.warn( |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default1]: warnings.warn( |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default7]: warnings.warn( |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default6]: warnings.warn( |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default4]: warnings.warn( |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default7]: warnings.warn( |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default4]: warnings.warn( |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default6]: warnings.warn( |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default5]: warnings.warn( |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default0]: warnings.warn( |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default3]: warnings.warn( |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default2]: warnings.warn( |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default1]: warnings.warn( |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.) |
|
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass |
|
[default1]/[default2]/[default3]: (the same c10d::allreduce_ UserWarning and backward-pass traceback line repeat verbatim on the other local ranks; duplicates omitted) |
|
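The warning above appears because autograd is asked to backprop directly through the in-place c10d::allreduce_ collective. A minimal sketch of the usual workaround, assuming a Megatron-style tensor-parallel sum reduction whose gradient is the identity (illustrative only, not nanotron's actual code):

    import torch
    import torch.distributed as dist

    class AllReduceSum(torch.autograd.Function):
        # Wrap the collective so the backward is defined explicitly instead
        # of relying on an autograd kernel registered for c10d::allreduce_.
        @staticmethod
        def forward(ctx, x: torch.Tensor) -> torch.Tensor:
            x = x.clone()  # keep the reduction out-of-place from autograd's view
            dist.all_reduce(x, op=dist.ReduceOp.SUM)
            return x

        @staticmethod
        def backward(ctx, grad_out: torch.Tensor) -> torch.Tensor:
            # The gradient of a sum-all-reduce over partial results is a pass-through.
            return grad_out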
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions |
|
[default5]: warnings.warn( |
|
[default0-7]: (the same all_reduce_coalesced deprecation warning repeats verbatim on every local rank, several times per rank; duplicates omitted) |
|
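The deprecation notice points away from torch.distributed.all_reduce_coalesced. One plausible migration path, sketched here with torch's private flatten helpers (an assumption about the replacement, not an official recipe):

    import torch
    import torch.distributed as dist
    from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors

    def all_reduce_bucketed(tensors: list[torch.Tensor]) -> None:
        # Flatten the bucket into one contiguous buffer, reduce it in a single
        # collective, then copy the reduced values back into the originals.
        flat = _flatten_dense_tensors(tensors)
        dist.all_reduce(flat, op=dist.ReduceOp.SUM)
        for t, reduced in zip(tensors, _unflatten_dense_tensors(flat, tensors)):
            t.copy_(reduced)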
[default0]:07/03/2024 04:03:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1755.48MiB. Peak allocated 24283.99MiB. Peak reserved: 25132.00MiB |
|
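The Memory usage lines interleave current allocation with the peak allocated and peak reserved counters. A sketch of how such a line can be produced from torch.cuda's memory statistics (nanotron's actual logger may differ):

    import torch

    def log_memory(tag: str) -> None:
        # memory_allocated: live tensor bytes right now; the two max_* counters
        # mirror the "Peak allocated" / "Peak reserved" fields printed above.
        mib = 1024 ** 2
        print(
            f"{tag}: Memory usage: {torch.cuda.memory_allocated() / mib:.2f}MiB. "
            f"Peak allocated {torch.cuda.max_memory_allocated() / mib:.2f}MiB. "
            f"Peak reserved: {torch.cuda.max_memory_reserved() / mib:.2f}MiB"
        )
        torch.cuda.reset_peak_memory_stats()  # start a fresh peak window per phase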
[default0]:07/03/2024 04:03:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 19.8K | tokens_per_sec: 212K | tokens_per_sec_per_gpu: 3.31K | global_batch_size: 1.02K | lm_loss: 11.4 | lr: 0.0001 | model_tflops_per_gpu: 30.1 | hardware_tflops_per_gpu: 30.1 | grad_norm: 20.6 | cuda_memory_allocated: 1.98G | cuda_max_memory_reserved: 26.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.5G | hd_free_memory_tb: 246G |
|
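The derived metrics in the iteration line are internally consistent. A quick check, assuming global_batch_size is 1024 sequences at a sequence length of 4096 on 64 GPUs (the sequence length is inferred from the numbers, not stated in the log):

    # Hypothetical sanity check of iteration 1's throughput fields.
    gbs_sequences = 1024
    seq_len = 4096        # assumption consistent with consumed_tokens: 4.19M
    n_gpus = 64
    iter_time_s = 19.8

    tokens_per_iter = gbs_sequences * seq_len           # 4_194_304 ~ 4.19M
    tokens_per_sec = tokens_per_iter / iter_time_s      # ~212K
    tokens_per_sec_per_gpu = tokens_per_sec / n_gpus    # ~3.31K
    print(tokens_per_iter, round(tokens_per_sec), round(tokens_per_sec_per_gpu))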
[default0]:07/03/2024 04:03:38 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 2979.50MiB. Peak reserved: 25226.00MiB |
|
[default0]:07/03/2024 04:03:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:03:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 10.1K | tokens_per_sec: 417K | tokens_per_sec_per_gpu: 6.52K | global_batch_size: 1.02K | lm_loss: 11.4 | lr: 9.53e-05 | model_tflops_per_gpu: 59.2 | hardware_tflops_per_gpu: 59.2 | grad_norm: 20.7 | cuda_memory_allocated: 1.98G | cuda_max_memory_reserved: 26.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.5G | hd_free_memory_tb: 246G |
|
[default0]:07/03/2024 04:03:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 2979.51MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:03:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:03:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 9.91K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.62K | global_batch_size: 1.02K | lm_loss: 11.6 | lr: 9.05e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 194 | cuda_memory_allocated: 1.98G | cuda_max_memory_reserved: 26.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.5G | hd_free_memory_tb: 246G |
|
[default0]:07/03/2024 04:03:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 2979.51MiB. Peak reserved: 25252.00MiB |
|
[default0]:STAGE:2024-07-03 04:03:58 22598:22598 ActivityProfilerController.cpp:314] Completed Stage: Warm Up |
|
[default0]:07/03/2024 04:04:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:04:07 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 13.6 | lr: 8.58e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 28 | cuda_memory_allocated: 1.98G | cuda_max_memory_reserved: 26.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 66.5G | hd_free_memory_tb: 246G |
|
[default0]:07/03/2024 04:04:07 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 2979.51MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:04:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 9.9K | tokens_per_sec: 424K | tokens_per_sec_per_gpu: 6.62K | global_batch_size: 1.02K | lm_loss: 12 | lr: 8.11e-05 | model_tflops_per_gpu: 60.1 | hardware_tflops_per_gpu: 60.1 | grad_norm: 49 |
|
[default0]:07/03/2024 04:04:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:04:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 9.93K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.6K | global_batch_size: 1.02K | lm_loss: 10.9 | lr: 7.63e-05 | model_tflops_per_gpu: 59.9 | hardware_tflops_per_gpu: 59.9 | grad_norm: 19.9 |
|
[default0]:STAGE:2024-07-03 04:04:32 22598:22598 ActivityProfilerController.cpp:320] Completed Stage: Collection |
|
[default0]:STAGE:2024-07-03 04:04:32 22598:22598 ActivityProfilerController.cpp:324] Completed Stage: Post Processing |
|
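The STAGE lines come from torch's ActivityProfilerController as the profiler moves through its warm-up, collection, and post-processing phases. A minimal sketch of a schedule that produces this shape of run (the wait/warmup/active counts here are illustrative assumptions, not the benchmark's actual configuration):

    import torch
    from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

    def train_step() -> None:
        # Stand-in for one training iteration.
        torch.randn(1024, 1024, device="cuda") @ torch.randn(1024, 1024, device="cuda")

    prof = profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3, repeat=1),  # counts are illustrative
        on_trace_ready=tensorboard_trace_handler("./profiler"),   # emits the .pt.trace.json
    )
    prof.start()
    for _ in range(20):
        train_step()
        prof.step()
    prof.stop()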
[default0]:07/03/2024 04:05:08 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:05:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 7 / 20 | consumed_tokens: 29.4M | elapsed_time_per_iteration_ms: 2.5K | tokens_per_sec: 1.68M | tokens_per_sec_per_gpu: 26.2K | global_batch_size: 1.02K | lm_loss: 10.4 | lr: 7.16e-05 | model_tflops_per_gpu: 238 | hardware_tflops_per_gpu: 238 | grad_norm: 8.64 |
|
[default0]:07/03/2024 04:05:11 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:05:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 8 / 20 | consumed_tokens: 33.6M | elapsed_time_per_iteration_ms: 9.91K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.61K | global_batch_size: 1.02K | lm_loss: 9.67 | lr: 6.68e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 6.92 |
|
[default0]:07/03/2024 04:05:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:05:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 9 / 20 | consumed_tokens: 37.7M | elapsed_time_per_iteration_ms: 9.91K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.61K | global_batch_size: 1.02K | lm_loss: 11.3 | lr: 6.21e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 53.3 |
|
[default0]:07/03/2024 04:05:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:05:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 10 / 20 | consumed_tokens: 41.9M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 9.13 | lr: 5.74e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 16.9 |
|
[default0]:07/03/2024 04:05:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:05:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 11 / 20 | consumed_tokens: 46.1M | elapsed_time_per_iteration_ms: 9.92K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.61K | global_batch_size: 1.02K | lm_loss: 8.6 | lr: 5.26e-05 | model_tflops_per_gpu: 59.9 | hardware_tflops_per_gpu: 59.9 | grad_norm: 7.95 |
|
[default0]:07/03/2024 04:05:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 12 / 20 | consumed_tokens: 50.3M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 8.39 | lr: 4.79e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 5.82 |
|
[default0]:07/03/2024 04:06:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 13 / 20 | consumed_tokens: 54.5M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 8.18 | lr: 4.32e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 5.62 |
|
[default0]:07/03/2024 04:06:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 14 / 20 | consumed_tokens: 58.7M | elapsed_time_per_iteration_ms: 9.92K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.61K | global_batch_size: 1.02K | lm_loss: 7.93 | lr: 3.84e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 5.41 |
|
[default0]:07/03/2024 04:06:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 15 / 20 | consumed_tokens: 62.9M | elapsed_time_per_iteration_ms: 9.95K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 7.7 | lr: 3.37e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 5 |
|
[default0]:07/03/2024 04:06:30 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 16 / 20 | consumed_tokens: 67.1M | elapsed_time_per_iteration_ms: 9.95K | tokens_per_sec: 421K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 7.55 | lr: 2.89e-05 | model_tflops_per_gpu: 59.7 | hardware_tflops_per_gpu: 59.7 | grad_norm: 4.9 |
|
[default0]:07/03/2024 04:06:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:06:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 17 / 20 | consumed_tokens: 71.3M | elapsed_time_per_iteration_ms: 9.91K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.62K | global_batch_size: 1.02K | lm_loss: 7.46 | lr: 2.42e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 4.93 |
|
[default0]:07/03/2024 04:06:50 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:07:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 18 / 20 | consumed_tokens: 75.5M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 7.38 | lr: 1.95e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 5.8 |
|
[default0]:07/03/2024 04:07:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:07:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 19 / 20 | consumed_tokens: 79.7M | elapsed_time_per_iteration_ms: 9.94K | tokens_per_sec: 422K | tokens_per_sec_per_gpu: 6.59K | global_batch_size: 1.02K | lm_loss: 7.25 | lr: 1.47e-05 | model_tflops_per_gpu: 59.8 | hardware_tflops_per_gpu: 59.8 | grad_norm: 4.46 |
|
[default0]:07/03/2024 04:07:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: Memory usage: 1887.88MiB. Peak allocated 24416.38MiB. Peak reserved: 25252.00MiB |
|
[default0]:07/03/2024 04:07:20 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 20 / 20 | consumed_tokens: 83.9M | elapsed_time_per_iteration_ms: 9.92K | tokens_per_sec: 423K | tokens_per_sec_per_gpu: 6.61K | global_batch_size: 1.02K | lm_loss: 7.15 | lr: 1e-05 | model_tflops_per_gpu: 60 | hardware_tflops_per_gpu: 60 | grad_norm: 2.95 |
|
W0703 04:07:34.207000 140383846967104 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1203] The node 'ip-26-0-171-88.ec2.internal_911164_0' has failed to shutdown the rendezvous 'none' due to an error of type RendezvousConnectionError. |
|
Saved 1 CSV file for 1 completed log |
|
Processing file: /fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-16_tp-4_pp-1_mbz-8/profiler/ip-26-0-160-225_22598.1719979501348614418.pt.trace.json |
|
Results written to /fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/64_GPUS/dp-16_tp-4_pp-1_mbz-8/profiler.csv |
|
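The profiler.csv step reduces the Chrome-trace JSON into tabular form. A hypothetical version of that reduction (the aggregation and column names are assumptions, not the benchmark script's actual logic):

    import csv
    import json
    from collections import defaultdict

    with open("ip-26-0-160-225_22598.1719979501348614418.pt.trace.json") as f:
        trace = json.load(f)

    totals = defaultdict(float)
    for ev in trace.get("traceEvents", []):
        if ev.get("ph") == "X":                       # complete events carry a duration
            totals[ev["name"]] += ev.get("dur", 0.0)  # microseconds

    with open("profiler.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "total_us"])
        for name, dur in sorted(totals.items(), key=lambda kv: -kv[1]):
            writer.writerow([name, dur])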
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details. |
|
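The hint above refers to the optional Rust-based hf_transfer backend for large uploads. A sketch of enabling it before pushing the 1.16G trace, where repo_id and path_in_repo are hypothetical:

    import os
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # requires `pip install hf_transfer`

    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_file(
        path_or_fileobj="ip-26-0-160-225_22598.1719979501348614418.pt.trace.json",
        path_in_repo="llama-1B/64_GPUS/dp-16_tp-4_pp-1_mbz-8/profiler/trace.json",  # hypothetical layout
        repo_id="user/bench-cluster-results",  # hypothetical repo
    )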
ip-26-0-160-225_22598.1719979501348614418.pt.trace.json: 100%|██████████| 1.16G/1.16G [00:21<00:00, 52.7MB/s] |
|
|