========================
START TIME: Tue Jul 2 20:01:10 UTC 2024
python3 version = Python 3.10.14
========================
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /admin/home/ferdinand_mom/.cache/huggingface/token
Login successful
Already on 'bench_cluster'
M examples/config_tiny_llama.py
M examples/config_tiny_llama.yaml
M examples/train_tiny_llama.sh
M src/nanotron/models/llama.py
M src/nanotron/trainer.py
Your branch is up to date with 'origin/bench_cluster'.
Job status: RUNNING
W0702 20:01:16.443000 140095634925376 torch/distributed/run.py:757]
W0702 20:01:16.443000 140095634925376 torch/distributed/run.py:757] *****************************************
W0702 20:01:16.443000 140095634925376 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 20:01:16.443000 140095634925376 torch/distributed/run.py:757] *****************************************
W0702 20:01:16.934000 140137724114752 torch/distributed/run.py:757]
W0702 20:01:16.934000 140137724114752 torch/distributed/run.py:757] *****************************************
W0702 20:01:16.934000 140137724114752 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0702 20:01:16.934000 140137724114752 torch/distributed/run.py:757] *****************************************
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Config:
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Config(general=GeneralArgs(project='bench_cluster',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: run='%date_%jobid',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: seed=42,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: step=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: consumed_train_samples=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: benchmark_csv_path=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: ignore_sanity_checks=True),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: parallelism=ParallelismArgs(dp=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pp=16,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tp=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pp_engine=<nanotron.parallel.pipeline_parallel.engine.OneForwardOneBackwardPipelineEngine object at 0x7f0d09a38910>,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tp_mode=<TensorParallelLinearMode.REDUCE_SCATTER: 2>,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tp_linear_async_communication=False,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: expert_parallel_size=1),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: model=ModelArgs(model_config=LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: eos_token_id=2,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hidden_act='silu',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hidden_size=2048,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: initializer_range=0.02,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: intermediate_size=4096,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: is_llama_config=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: max_position_embeddings=4096,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_attention_heads=32,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_hidden_layers=24,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_key_value_heads=32,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pad_token_id=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pretraining_tp=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rms_norm_eps=1e-05,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rope_scaling=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rope_theta=10000.0,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tie_word_embeddings=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: use_cache=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: vocab_size=50257),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: init_method=RandomInit(std=0.025),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: dtype=torch.bfloat16,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: make_vocab_size_divisible_by=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: ddp_bucket_cap_mb=25),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tokenizer=TokenizerArgs(tokenizer_name_or_path='openai-community/gpt2',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tokenizer_revision=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tokenizer_max_length=None),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: checkpoints=CheckpointsArgs(checkpoints_path=Path('/dev/null'),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: checkpoint_interval=100000,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: save_initial_state=False,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: resume_checkpoint_path=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: checkpoints_path_is_shared_file_system=False),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: logging=LoggingArgs(log_level='info',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: log_level_replica='info',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: iteration_step_info_interval=1),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tokens=TokensArgs(sequence_length=4096,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: train_steps=20,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: micro_batch_size=4,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: batch_accumulation_per_replica=256,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: val_check_interval=-1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: limit_val_batches=0,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: limit_test_batches=0),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: optimizer=OptimizerArgs(optimizer_factory=AdamWOptimizerArgs(adam_eps=1e-08,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: adam_beta1=0.9,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: adam_beta2=0.95,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: torch_adam_is_fused=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: name='adamW'),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: zero_stage=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: weight_decay=0.01,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: clip_grad=1.0,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: accumulate_grad_in_fp32=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: learning_rate_scheduler=LRSchedulerArgs(learning_rate=0.0001,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lr_warmup_steps=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lr_warmup_style='linear',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lr_decay_style='linear',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lr_decay_steps=19,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lr_decay_starting_step=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: min_decay_lr=1e-05)),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: data_stages=[DatasetStageArgs(name='Training Stage',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: start_training_step=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: data=DataArgs(dataset=PretrainDatasetsArgs(hf_dataset_or_datasets='roneneldan/TinyStories',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hf_dataset_splits='train',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hf_dataset_config_name=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: dataset_processing_num_proc_per_process=64,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: dataset_overwrite_cache=False,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: text_column_name='text'),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: seed=42,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_loading_workers=32))],
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: profiler=ProfilerArgs(profiler_export_path=Path('/fsx/ferdinandmom/ferdinand-hf/bench_cluster/results/llama-1B/16_GPUS/dp-1_tp-1_pp-16_mbz-4')),
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: lighteval=None)
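A quick sanity check of the batch geometry implied by the config dump above (dp=1, micro_batch_size=4, batch_accumulation_per_replica=256, sequence_length=4096); this is plain arithmetic on the logged values, not nanotron's own accounting:

    # Effective batch size per optimizer step, from the values in the config dump above.
    dp = 1
    micro_batch_size = 4
    grad_accum = 256            # batch_accumulation_per_replica
    sequence_length = 4096

    global_batch_size = dp * micro_batch_size * grad_accum     # 1024 samples per step
    tokens_per_step = global_batch_size * sequence_length      # 4_194_304 tokens (~4.19M)
    print(global_batch_size, tokens_per_step)
    # -> 1024 4194304, matching global_batch_size: 1.02K and consumed_tokens: 4.19M per iteration below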
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Model Config:
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: LlamaConfig(bos_token_id=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: eos_token_id=2,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hidden_act='silu',
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: hidden_size=2048,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: initializer_range=0.02,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: intermediate_size=4096,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: is_llama_config=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: max_position_embeddings=4096,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_attention_heads=32,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_hidden_layers=24,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: num_key_value_heads=32,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pad_token_id=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: pretraining_tp=1,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rms_norm_eps=1e-05,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rope_scaling=None,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: rope_theta=10000.0,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: tie_word_embeddings=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: use_cache=True,
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: vocab_size=50257)
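As a rough cross-check of the parameter counts reported below, a back-of-the-envelope tally from this LlamaConfig (ignoring RMSNorm weights, and counting the tied embedding and lm_head separately since they sit on different pipeline stages) lands close to the logged figures; an approximation, not nanotron's exact accounting:

    hidden, inter, layers, vocab = 2048, 4096, 24, 50257

    per_layer = 4 * hidden * hidden + 3 * hidden * inter  # attention q,k,v,o + SwiGLU gate,up,down
    embed = vocab * hidden                                 # token embedding; the lm_head adds the same again

    total = layers * per_layer + 2 * embed                 # ~1.21e9, cf. "Total number of parameters: 1.21G"
    bf16_mib = total * 2 / 2**20                           # ~2313 MiB, cf. 2312.82MiB
    print(per_layer, total, round(bf16_mib))
    # per_layer ~= 41.9M (80MiB in bf16), i.e. one decoder layer per stage; 2-layer stages hold ~83.9M (160MiB)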
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Building model..
[default0]:07/02/2024 20:01:39 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Setting PP block ranks...
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=5|TP=0|ip-26-0-165-24]: Local number of parameters: 41.9M (80.01MiB)
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=5|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=5|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Total number of parameters: 1.21G (2312.82MiB)
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Local number of parameters: 187M (356.33MiB)
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 358.34MiB. Peak allocated: 360.37MiB Peak reserved: 368.00MiB
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Parametrizing model parameters using StandardParametrizator
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=1|TP=0|ip-26-0-165-24]: Local number of parameters: 83.9M (160.02MiB)
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=1|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=1|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=2|TP=0|ip-26-0-165-24]: Local number of parameters: 41.9M (80.01MiB)
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=2|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=2|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=4|TP=0|ip-26-0-165-24]: Local number of parameters: 83.9M (160.02MiB)
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=4|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=4|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=3|TP=0|ip-26-0-165-24]: Local number of parameters: 83.9M (160.02MiB)
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=3|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=3|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=6|TP=0|ip-26-0-165-24]: Local number of parameters: 83.9M (160.02MiB)
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=6|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=6|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=7|TP=0|ip-26-0-165-24]: Local number of parameters: 83.9M (160.02MiB)
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=7|TP=0|ip-26-0-165-24]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=7|TP=0|ip-26-0-165-24]: No checkpoint path provided.
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=8|TP=0|ip-26-0-170-160]: Local number of parameters: 41.9M (80.01MiB)
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=8|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default0]:07/02/2024 20:01:56 [INFO|DP=0|PP=8|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=14|TP=0|ip-26-0-170-160]: Local number of parameters: 103M (196.32MiB)
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=14|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 196.33MiB. Peak allocated: 196.34MiB Peak reserved: 200.00MiB
[default6]:07/02/2024 20:01:56 [INFO|DP=0|PP=14|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=9|TP=0|ip-26-0-170-160]: Local number of parameters: 83.9M (160.02MiB)
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=9|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default1]:07/02/2024 20:01:56 [INFO|DP=0|PP=9|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=10|TP=0|ip-26-0-170-160]: Local number of parameters: 83.9M (160.02MiB)
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=10|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default2]:07/02/2024 20:01:56 [INFO|DP=0|PP=10|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=12|TP=0|ip-26-0-170-160]: Local number of parameters: 83.9M (160.02MiB)
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=12|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default4]:07/02/2024 20:01:56 [INFO|DP=0|PP=12|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=11|TP=0|ip-26-0-170-160]: Local number of parameters: 41.9M (80.01MiB)
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=11|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 81.02MiB. Peak allocated: 83.05MiB Peak reserved: 96.00MiB
[default3]:07/02/2024 20:01:56 [INFO|DP=0|PP=11|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=13|TP=0|ip-26-0-170-160]: Local number of parameters: 83.9M (160.02MiB)
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=13|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 162.03MiB. Peak allocated: 164.06MiB Peak reserved: 170.00MiB
[default5]:07/02/2024 20:01:56 [INFO|DP=0|PP=13|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: Local number of parameters: 0 (0.00MiB)
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: [After model building] Memory usage: 0.01MiB. Peak allocated: 0.02MiB Peak reserved: 2.00MiB
[default7]:07/02/2024 20:01:56 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: No checkpoint path provided.
[default0]:07/02/2024 20:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [Optimizer Building] Using LearningRateForSP as learning rate
[default0]:07/02/2024 20:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [ZeRO sharding] Size of optimizer params per rank:
[default0]:07/02/2024 20:01:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [ZeRO sharding] DP Rank 0 has 187M out of 187M (100.00%) params' optimizer states
[default0]:07/02/2024 20:01:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [Training Plan] Stage Training Stage has 19 remaining training steps and has consumed 0 samples
[default0]:07/02/2024 20:01:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Using `datasets` library
[default0]:07/02/2024 20:01:58 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Loading tokenizer from openai-community/gpt2 and transformers/hf_hub versions ('4.41.2', '0.23.4')
[default0]:07/02/2024 20:01:58 [WARNING|DP=0|PP=0|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [Training Plan] There are 1 training stages
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [Stage Training Stage] start from step 1
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]:
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: [Start training] datetime: 2024-07-02 20:02:00.957769 | mbs: 4 | grad_accum: 256 | global_batch_size: 1024 | sequence_length: 4096 | train_steps: 20 | start_iteration_step: 0 | consumed_train_samples: 0
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Resuming training from stage Training Stage, it has trained for 0 samples and has 19 remaining train steps
[default0]:07/02/2024 20:02:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 1783.67MiB. Peak allocated 1783.67MiB. Peak reserved: 1796.00MiB
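One plausible reading of the 1783.67MiB resident on the PP=0 rank (187M local parameters, zero_stage=1 with dp=1, accumulate_grad_in_fp32=True) is bf16 weights plus an fp32 gradient-accumulation buffer plus fp32 optimizer-side master weights, with the Adam moments only materializing at the first optimizer step; this is an assumption about what the allocation comprises, not something the log states:

    local_params = 187e6   # PP=0 rank, from the log above

    bf16_weights  = local_params * 2 / 2**20   # ~357 MiB  (cf. 356.33MiB after model building)
    fp32_grad_acc = local_params * 4 / 2**20   # ~713 MiB  (accumulate_grad_in_fp32=True, assumed buffer)
    fp32_master   = local_params * 4 / 2**20   # ~713 MiB  (ZeRO-1 fp32 params on the single DP rank, assumed)
    adam_moments  = local_params * 8 / 2**20   # ~1426 MiB (exp_avg + exp_avg_sq, assumed lazily allocated)

    print(round(bf16_weights + fp32_grad_acc + fp32_master))                # ~1783, close to the 1783.67MiB here
    print(round(bf16_weights + fp32_grad_acc + fp32_master + adam_moments)) # ~3210, close to the 3274.15MiB seen after step 1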
[default5]:07/02/2024 20:02:01 [WARNING|DP=0|PP=5|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 20:02:01 [WARNING|DP=0|PP=1|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 20:02:01 [WARNING|DP=0|PP=2|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 20:02:01 [WARNING|DP=0|PP=3|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 20:02:01 [WARNING|DP=0|PP=7|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default1]:07/02/2024 20:02:01 [WARNING|DP=0|PP=9|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:07/02/2024 20:02:01 [WARNING|DP=0|PP=10|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default5]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:07/02/2024 20:02:01 [WARNING|DP=0|PP=11|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 20:02:01 [WARNING|DP=0|PP=12|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default3]:Repo card metadata block was not found. Setting CardData to empty.
[default1]:Repo card metadata block was not found. Setting CardData to empty.
[default5]:07/02/2024 20:02:01 [WARNING|DP=0|PP=13|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:07/02/2024 20:02:01 [WARNING|DP=0|PP=15|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default7]:Repo card metadata block was not found. Setting CardData to empty.
[default2]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 20:02:01 [WARNING|DP=0|PP=6|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:07/02/2024 20:02:01 [WARNING|DP=0|PP=14|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:Repo card metadata block was not found. Setting CardData to empty.
[default0]:07/02/2024 20:02:01 [WARNING|DP=0|PP=8|TP=0|ip-26-0-170-160]: Repo card metadata block was not found. Setting CardData to empty.
[default0]:Repo card metadata block was not found. Setting CardData to empty.
[default4]:07/02/2024 20:02:01 [WARNING|DP=0|PP=4|TP=0|ip-26-0-165-24]: Repo card metadata block was not found. Setting CardData to empty.
[default4]:Repo card metadata block was not found. Setting CardData to empty.
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Attempting to run cuBLAS, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at ../aten/src/ATen/cuda/CublasHandlePool.cpp:135.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default7]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default7]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default5]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default5]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default4]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default4]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default3]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default2]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default1]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Attempting to run cuBLAS, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at ../aten/src/ATen/cuda/CublasHandlePool.cpp:135.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: c10d::allreduce_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
[default0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[default6]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default6]: warnings.warn(
[default0]:07/02/2024 20:03:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 1848.81MiB. Peak allocated 41166.75MiB. Peak reserved: 41406.00MiB
[default0]:/fsx/ferdinandmom/miniforge3/envs/env-bench-cluster/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:2261: UserWarning: torch.distributed.all_reduce_coalesced will be deprecated. If you must use it, please revisit our documentation later at https://pytorch.org/docs/master/distributed.html#collective-functions
[default0]: warnings.warn(
[default0]:07/02/2024 20:03:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 3274.15MiB. Peak reserved: 42194.00MiB
[default7]:07/02/2024 20:03:29 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 1 / 20 | consumed_tokens: 4.19M | elapsed_time_per_iteration_ms: 81.4K | tokens_per_sec: 51.5K | tokens_per_sec_per_gpu: 3.22K | global_batch_size: 1.02K | lm_loss: 11.1 | lr: 0.0001 | model_tflops_per_gpu: 29.2 | hardware_tflops_per_gpu: 29.2 | grad_norm: 25.6 | cuda_memory_allocated: 289K | cuda_max_memory_reserved: 11.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 68.6G | hd_free_memory_tb: 244G
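The throughput fields in these iteration lines are internally consistent; for iteration 1, dividing the tokens per step by the elapsed time reproduces the logged rates (simple arithmetic on the logged values):

    tokens_per_step = 1024 * 4096       # global_batch_size * sequence_length
    elapsed_s = 81.4                    # elapsed_time_per_iteration_ms: 81.4K
    n_gpus = 16

    tokens_per_sec = tokens_per_step / elapsed_s        # ~51.5K, cf. tokens_per_sec: 51.5K
    tokens_per_sec_per_gpu = tokens_per_sec / n_gpus    # ~3.22K, cf. tokens_per_sec_per_gpu: 3.22K
    print(round(tokens_per_sec), round(tokens_per_sec_per_gpu))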
[default0]:07/02/2024 20:04:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:04:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 3274.15MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:04:00 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 2 / 20 | consumed_tokens: 8.39M | elapsed_time_per_iteration_ms: 30.8K | tokens_per_sec: 136K | tokens_per_sec_per_gpu: 8.5K | global_batch_size: 1.02K | lm_loss: 11.1 | lr: 9.53e-05 | model_tflops_per_gpu: 77.1 | hardware_tflops_per_gpu: 77.1 | grad_norm: 25.9 | cuda_memory_allocated: 289K | cuda_max_memory_reserved: 11.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 68.6G | hd_free_memory_tb: 244G
[default0]:07/02/2024 20:04:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:04:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 3274.15MiB. Peak reserved: 43028.00MiB
[default0]:STAGE:2024-07-02 20:04:29 803527:803527 ActivityProfilerController.cpp:314] Completed Stage: Warm Up
[default7]:07/02/2024 20:04:29 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 3 / 20 | consumed_tokens: 12.6M | elapsed_time_per_iteration_ms: 28.8K | tokens_per_sec: 145K | tokens_per_sec_per_gpu: 9.09K | global_batch_size: 1.02K | lm_loss: 9.9 | lr: 9.05e-05 | model_tflops_per_gpu: 82.5 | hardware_tflops_per_gpu: 82.5 | grad_norm: 40.4 | cuda_memory_allocated: 289K | cuda_max_memory_reserved: 11.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 68.6G | hd_free_memory_tb: 244G
[default0]:07/02/2024 20:04:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:04:59 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 3274.15MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:04:59 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 4 / 20 | consumed_tokens: 16.8M | elapsed_time_per_iteration_ms: 30K | tokens_per_sec: 140K | tokens_per_sec_per_gpu: 8.75K | global_batch_size: 1.02K | lm_loss: 11.9 | lr: 8.58e-05 | model_tflops_per_gpu: 79.4 | hardware_tflops_per_gpu: 79.4 | grad_norm: 61.2 | cuda_memory_allocated: 289K | cuda_max_memory_reserved: 11.5G | hd_total_memory_tb: 312G | hd_used_memory_tb: 68.6G | hd_free_memory_tb: 244G
[default0]:07/02/2024 20:05:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:05:31 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 5 / 20 | consumed_tokens: 21M | elapsed_time_per_iteration_ms: 31.8K | tokens_per_sec: 132K | tokens_per_sec_per_gpu: 8.25K | global_batch_size: 1.02K | lm_loss: 9.05 | lr: 8.11e-05 | model_tflops_per_gpu: 74.9 | hardware_tflops_per_gpu: 74.9 | grad_norm: 8.31
[default7]:07/02/2024 20:06:01 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 6 / 20 | consumed_tokens: 25.2M | elapsed_time_per_iteration_ms: 30.6K | tokens_per_sec: 137K | tokens_per_sec_per_gpu: 8.55K | global_batch_size: 1.02K | lm_loss: 8.85 | lr: 7.63e-05 | model_tflops_per_gpu: 77.6 | hardware_tflops_per_gpu: 77.6 | grad_norm: 6.61
[default0]:STAGE:2024-07-02 20:06:15 803527:803527 ActivityProfilerController.cpp:320] Completed Stage: Collection
[default0]:STAGE:2024-07-02 20:06:16 803527:803527 ActivityProfilerController.cpp:324] Completed Stage: Post Processing
[default0]:07/02/2024 20:07:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:08:28 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:08:28 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 7 / 20 | consumed_tokens: 29.4M | elapsed_time_per_iteration_ms: 146K | tokens_per_sec: 28.7K | tokens_per_sec_per_gpu: 1.79K | global_batch_size: 1.02K | lm_loss: 8.37 | lr: 7.16e-05 | model_tflops_per_gpu: 16.3 | hardware_tflops_per_gpu: 16.3 | grad_norm: 4.93
[default7]:07/02/2024 20:08:57 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 8 / 20 | consumed_tokens: 33.6M | elapsed_time_per_iteration_ms: 29.8K | tokens_per_sec: 141K | tokens_per_sec_per_gpu: 8.81K | global_batch_size: 1.02K | lm_loss: 7.97 | lr: 6.68e-05 | model_tflops_per_gpu: 79.9 | hardware_tflops_per_gpu: 79.9 | grad_norm: 3.12
[default0]:07/02/2024 20:08:57 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:09:29 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:09:29 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 9 / 20 | consumed_tokens: 37.7M | elapsed_time_per_iteration_ms: 31.5K | tokens_per_sec: 133K | tokens_per_sec_per_gpu: 8.32K | global_batch_size: 1.02K | lm_loss: 7.83 | lr: 6.21e-05 | model_tflops_per_gpu: 75.5 | hardware_tflops_per_gpu: 75.5 | grad_norm: 9.04
[default7]:07/02/2024 20:10:00 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 10 / 20 | consumed_tokens: 41.9M | elapsed_time_per_iteration_ms: 31.4K | tokens_per_sec: 134K | tokens_per_sec_per_gpu: 8.35K | global_batch_size: 1.02K | lm_loss: 7.62 | lr: 5.74e-05 | model_tflops_per_gpu: 75.8 | hardware_tflops_per_gpu: 75.8 | grad_norm: 5.09
[default0]:07/02/2024 20:10:00 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default0]:07/02/2024 20:10:31 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:10:31 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 11 / 20 | consumed_tokens: 46.1M | elapsed_time_per_iteration_ms: 30.3K | tokens_per_sec: 138K | tokens_per_sec_per_gpu: 8.65K | global_batch_size: 1.02K | lm_loss: 7.47 | lr: 5.26e-05 | model_tflops_per_gpu: 78.5 | hardware_tflops_per_gpu: 78.5 | grad_norm: 4.06
[default0]:07/02/2024 20:11:03 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:11:03 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 12 / 20 | consumed_tokens: 50.3M | elapsed_time_per_iteration_ms: 32K | tokens_per_sec: 131K | tokens_per_sec_per_gpu: 8.18K | global_batch_size: 1.02K | lm_loss: 7.34 | lr: 4.79e-05 | model_tflops_per_gpu: 74.2 | hardware_tflops_per_gpu: 74.2 | grad_norm: 3.13
[default7]:07/02/2024 20:11:35 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 13 / 20 | consumed_tokens: 54.5M | elapsed_time_per_iteration_ms: 32.4K | tokens_per_sec: 129K | tokens_per_sec_per_gpu: 8.08K | global_batch_size: 1.02K | lm_loss: 7.23 | lr: 4.32e-05 | model_tflops_per_gpu: 73.4 | hardware_tflops_per_gpu: 73.4 | grad_norm: 2.73
[default0]:07/02/2024 20:11:35 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:12:06 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 14 / 20 | consumed_tokens: 58.7M | elapsed_time_per_iteration_ms: 30.9K | tokens_per_sec: 136K | tokens_per_sec_per_gpu: 8.48K | global_batch_size: 1.02K | lm_loss: 7.14 | lr: 3.84e-05 | model_tflops_per_gpu: 77 | hardware_tflops_per_gpu: 77 | grad_norm: 2.33
[default0]:07/02/2024 20:12:06 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:12:37 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 15 / 20 | consumed_tokens: 62.9M | elapsed_time_per_iteration_ms: 30.6K | tokens_per_sec: 137K | tokens_per_sec_per_gpu: 8.56K | global_batch_size: 1.02K | lm_loss: 7.06 | lr: 3.37e-05 | model_tflops_per_gpu: 77.7 | hardware_tflops_per_gpu: 77.7 | grad_norm: 2.47
[default0]:07/02/2024 20:12:37 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:13:09 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 16 / 20 | consumed_tokens: 67.1M | elapsed_time_per_iteration_ms: 32.1K | tokens_per_sec: 131K | tokens_per_sec_per_gpu: 8.17K | global_batch_size: 1.02K | lm_loss: 6.98 | lr: 2.89e-05 | model_tflops_per_gpu: 74.1 | hardware_tflops_per_gpu: 74.1 | grad_norm: 2.69
[default0]:07/02/2024 20:13:09 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:13:40 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 17 / 20 | consumed_tokens: 71.3M | elapsed_time_per_iteration_ms: 31.3K | tokens_per_sec: 134K | tokens_per_sec_per_gpu: 8.37K | global_batch_size: 1.02K | lm_loss: 6.9 | lr: 2.42e-05 | model_tflops_per_gpu: 75.9 | hardware_tflops_per_gpu: 75.9 | grad_norm: 1.91
[default0]:07/02/2024 20:13:40 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:14:10 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 18 / 20 | consumed_tokens: 75.5M | elapsed_time_per_iteration_ms: 29.9K | tokens_per_sec: 140K | tokens_per_sec_per_gpu: 8.77K | global_batch_size: 1.02K | lm_loss: 6.84 | lr: 1.95e-05 | model_tflops_per_gpu: 79.5 | hardware_tflops_per_gpu: 79.5 | grad_norm: 1.62
[default0]:07/02/2024 20:14:10 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:14:42 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 19 / 20 | consumed_tokens: 79.7M | elapsed_time_per_iteration_ms: 31.7K | tokens_per_sec: 132K | tokens_per_sec_per_gpu: 8.27K | global_batch_size: 1.02K | lm_loss: 6.8 | lr: 1.47e-05 | model_tflops_per_gpu: 75 | hardware_tflops_per_gpu: 75 | grad_norm: 1.85
[default0]:07/02/2024 20:14:42 [INFO|DP=0|PP=0|TP=0|ip-26-0-165-24]: Memory usage: 3274.15MiB. Peak allocated 42592.08MiB. Peak reserved: 43028.00MiB
[default7]:07/02/2024 20:15:12 [INFO|DP=0|PP=15|TP=0|ip-26-0-170-160]: iteration: 20 / 20 | consumed_tokens: 83.9M | elapsed_time_per_iteration_ms: 30.8K | tokens_per_sec: 136K | tokens_per_sec_per_gpu: 8.5K | global_batch_size: 1.02K | lm_loss: 6.77 | lr: 1e-05 | model_tflops_per_gpu: 77.1 | hardware_tflops_per_gpu: 77.1 | grad_norm: 1.82
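The learning rates logged above are consistent with a 1-step linear warmup to learning_rate=1e-4 followed by a linear decay to min_decay_lr=1e-5 over the remaining 19 steps; the sketch below reproduces the logged values under that assumption:

    lr_max, lr_min = 1e-4, 1e-5
    warmup_steps, decay_steps = 1, 19

    def lr_at(step):                     # step is 1-indexed, as in the iteration lines
        if step <= warmup_steps:
            return lr_max * step / warmup_steps
        return lr_max - (lr_max - lr_min) * (step - warmup_steps) / decay_steps

    print([round(lr_at(s), 7) for s in (1, 2, 3, 20)])
    # -> [0.0001, 9.53e-05, 9.05e-05, 1e-05], matching the lr values logged for iterations 1, 2, 3 and 20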
W0702 20:15:33.535000 140132057294592 torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1252] The node 'ip-26-0-170-160.ec2.internal_826913_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousTimeoutError.
Traceback (most recent call last):
File "/fsx/ferdinandmom/ferdinand-hf/bench_cluster/main.py", line 4, in <module>
from bench_cluster.submit_jobs import submit_jobs, check_status
ImportError: cannot import name 'check_status' from 'bench_cluster.submit_jobs' (/fsx/ferdinandmom/ferdinand-hf/bench_cluster/bench_cluster/submit_jobs.py)
Consider using `hf_transfer` for faster uploads. This solution comes with some limitations. See https://huggingface.co/docs/huggingface_hub/hf_transfer for more details.
ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 0%| | 0.00/3.35G [00:00<?, ?B/s] ... ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 61%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.03G/3.35G [00:39<00:23, 57.1MB/s]
ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 61%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.05G/3.35G [00:39<00:21, 60.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 62%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.06G/3.35G [00:39<00:25, 50.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 62%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.08G/3.35G [00:40<00:22, 56.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 63%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.10G/3.35G [00:40<00:21, 59.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 63%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.11G/3.35G [00:40<00:23, 53.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 64%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.13G/3.35G [00:40<00:23, 52.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 64%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.14G/3.35G [00:41<00:22, 54.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 65%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.16G/3.35G [00:41<00:20, 57.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 65%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.18G/3.35G [00:41<00:19, 61.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 65%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.19G/3.35G [00:42<00:20, 57.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 66%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.21G/3.35G [00:42<00:20, 54.8MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 66%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.22G/3.35G [00:42<00:19, 57.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.24G/3.35G [00:42<00:20, 53.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.26G/3.35G [00:43<00:19, 57.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 68%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.27G/3.35G [00:43<00:19, 55.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 68%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.29G/3.35G [00:43<00:20, 52.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 69%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.30G/3.35G [00:44<00:19, 54.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 69%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.32G/3.35G [00:44<00:18, 56.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.34G/3.35G [00:44<00:18, 54.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.35G/3.35G [00:44<00:16, 61.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.37G/3.35G [00:45<00:17, 54.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.38G/3.35G [00:45<00:16, 58.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 72%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.40G/3.35G [00:45<00:16, 58.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 72%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.42G/3.35G [00:45<00:15, 59.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 73%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.43G/3.35G [00:46<00:15, 57.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 73%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.45G/3.35G [00:46<00:14, 60.8MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 74%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.46G/3.35G [00:46<00:15, 55.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 74%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.48G/3.35G [00:47<00:14, 60.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.50G/3.35G [00:47<00:13, 62.8MB/s] 
ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.51G/3.35G [00:47<00:16, 50.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 76%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.53G/3.35G [00:48<00:15, 52.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 76%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.54G/3.35G [00:48<00:16, 48.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 76%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.56G/3.35G [00:48<00:15, 51.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 77%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.58G/3.35G [00:48<00:14, 53.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 77%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.59G/3.35G [00:49<00:13, 56.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 78%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.61G/3.35G [00:49<00:11, 61.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 78%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.62G/3.35G [00:49<00:11, 60.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 79%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.64G/3.35G [00:49<00:12, 58.0MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 79%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.66G/3.35G [00:50<00:11, 57.8MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.67G/3.35G [00:50<00:10, 63.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.69G/3.35G [00:50<00:09, 68.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 81%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2.70G/3.35G [00:50<00:09, 66.8MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 81%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.72G/3.35G [00:51<00:09, 65.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 82%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.74G/3.35G [00:51<00:09, 66.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 82%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.75G/3.35G [00:52<00:21, 27.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 83%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.77G/3.35G [00:53<00:17, 32.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 83%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.78G/3.35G [00:53<00:14, 38.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 84%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2.80G/3.35G [00:53<00:12, 42.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 84%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.82G/3.35G [00:53<00:10, 48.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 85%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2.83G/3.35G [00:54<00:09, 51.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 85%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.85G/3.35G [00:54<00:10, 49.6MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.86G/3.35G [00:54<00:08, 54.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2.88G/3.35G [00:55<00:13, 34.6MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 87%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.90G/3.35G [00:55<00:12, 37.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 87%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.91G/3.35G [00:56<00:10, 43.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 87%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2.93G/3.35G [00:56<00:08, 49.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 88%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.94G/3.35G [00:56<00:07, 54.0MB/s] 
ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 88%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2.96G/3.35G [00:56<00:07, 52.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 89%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.98G/3.35G [00:57<00:06, 55.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 89%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 2.99G/3.35G [00:57<00:05, 61.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 3.01G/3.35G [00:57<00:05, 66.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 3.02G/3.35G [00:57<00:05, 61.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 91%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 3.04G/3.35G [00:58<00:05, 56.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 91%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 3.06G/3.35G [00:58<00:05, 57.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 3.07G/3.35G [00:58<00:04, 55.4MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 3.09G/3.35G [00:58<00:04, 58.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 3.10G/3.35G [00:59<00:04, 50.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 3.12G/3.35G [00:59<00:04, 55.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 94%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 3.14G/3.35G [00:59<00:03, 59.6MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 94%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 3.15G/3.35G [01:00<00:03, 53.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 95%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 3.17G/3.35G [01:00<00:03, 57.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 95%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ| 3.18G/3.35G [01:00<00:02, 57.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ| 3.20G/3.35G [01:00<00:02, 59.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ| 3.22G/3.35G [01:01<00:02, 57.7MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 3.23G/3.35G [01:01<00:01, 58.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 3.25G/3.35G [01:01<00:01, 54.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 98%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 3.26G/3.35G [01:02<00:01, 51.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 98%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 3.28G/3.35G [01:02<00:01, 57.1MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 98%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 3.30G/3.35G [01:02<00:00, 55.2MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 3.31G/3.35G [01:02<00:00, 55.9MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 3.33G/3.35G [01:03<00:00, 52.5MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 3.34G/3.35G [01:03<00:00, 46.3MB/s] ip-26-0-165-24_803527.1719950856056307941.pt.trace.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.35G/3.35G [01:03<00:00, 52.3MB/s]