Submitting job: /common/home/users/d/dh.huang.2023/code/rapget-translation/scripts/tune-mac-4gpu.sh
Current Directory: /common/home/users/d/dh.huang.2023/code/rapget-translation
Fri Aug  9 23:18:55 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40                     On  |   00000000:01:00.0 Off |                    0 |
| N/A   39C    P0             79W /  300W |   25036MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L40                     On  |   00000000:41:00.0 Off |                    0 |
| N/A   32C    P8             36W /  300W |       1MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA L40                     On  |   00000000:81:00.0 Off |                    0 |
| N/A   31C    P8             34W /  300W |       1MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA L40                     On  |   00000000:C1:00.0 Off |                    0 |
| N/A   31C    P8             33W /  300W |       1MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

Linux lithium 4.18.0-553.5.1.el8_10.x86_64 #1 SMP Thu Jun 6 09:41:19 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

NAME="Rocky Linux"
VERSION="8.10 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.10 (Green Obsidian)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2029-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-8"
ROCKY_SUPPORT_PRODUCT_VERSION="8.10"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.10"

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Thread(s) per core:  2
Core(s) per socket:  64
Socket(s):           1
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          25
Model:               1
Model name:          AMD EPYC 7763 64-Core Processor
Stepping:            1
CPU MHz:             2450.000
CPU max MHz:         3529.0520
CPU min MHz:         1500.0000
BogoMIPS:            4890.62
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            32768K
NUMA node0 CPU(s):   0-127
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

MemTotal:       527670712 kB

Current Directory: /common/home/users/d/dh.huang.2023/code/rapget-translation/llama-factory
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--bleu/9e0985c1200e367cce45605ce0ecb5ede079894e0f24f54613fca08eeb8aff76 (last modified on Wed May 8 07:48:30 2024) since it couldn't be found locally at evaluate-metric--bleu, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Wed May 8 07:48:31 2024) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--accuracy/f887c0aab52c2d38e1f8a215681126379eca617f96c447638f751434e8e65b14 (last modified on Wed Jun 19 06:03:39 2024) since it couldn't be found locally at evaluate-metric--accuracy, or remotely on the Hugging Face Hub.
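Editor's note: the metric loads above fall back to the shared scratch cache because the compute node appears to have no outbound network access (hence also the empty nltk_data download errors). A minimal sketch of how these metrics are presumably loaded via the Hugging Face evaluate library, assuming the cache location is supplied through HF_HOME (paths illustrative):

import os

import evaluate  # Hugging Face `evaluate` library

# Point the HF caches at the shared scratch directory before loading, so
# offline nodes can resolve metrics that were downloaded on a login node.
os.environ.setdefault("HF_HOME", "/common/scratch/users/d/dh.huang.2023/transformers")

# evaluate.load() falls back to the latest cached copy when the metric cannot
# be found locally by name or fetched from the Hub (the warnings logged above).
meteor = evaluate.load("meteor")
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

print(bleu.compute(predictions=["hello there"], references=[["hello there"]]))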
loading env vars from: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/.env
Adding /common/home/users/d/dh.huang.2023/common2/code/rapget-translation to sys.path
loading: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/eval_modules/calc_repetitions.py
loading /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/llm_toolkit/translation_utils.py
Qwen Qwen2-72B-Instruct qwen config/mac_template_4gpu.yaml ../datasets/mac/mac.tsv
Writing to config/models/Qwen2-72B-Instruct.yaml
config/models/Qwen2-72B-Instruct.yaml:
{
  "model_name_or_path": "Qwen/Qwen2-72B-Instruct",
  "quantization_bit": 4,
  "stage": "sft",
  "do_train": true,
  "finetuning_type": "lora",
  "lora_target": "all",
  "dataset": "alpaca_mac",
  "template": "qwen",
  "cutoff_len": 1024,
  "max_samples": 4528,
  "overwrite_cache": true,
  "preprocessing_num_workers": 16,
  "output_dir": "saves/Qwen2-72B-Instruct",
  "logging_steps": 5,
  "save_steps": 70,
  "plot_loss": true,
  "per_device_train_batch_size": 2,
  "gradient_accumulation_steps": 8,
  "learning_rate": 0.0001,
  "num_train_epochs": 6.0,
  "lr_scheduler_type": "cosine",
  "warmup_ratio": 0.1,
  "bf16": true,
  "ddp_timeout": 180000000,
  "val_size": 0.01,
  "per_device_eval_batch_size": 1,
  "eval_strategy": "steps",
  "eval_steps": 70,
  "report_to": "wandb",
  "run_name": "Qwen2-72B-Instruct_lora_sft"
}
loading existing data from: data/alpaca_mac.json
--------------------------------------------------
system: You are a helpful assistant that translates Chinese to English.
--------------------------------------------------
instruction: You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 全仗着狐仙搭救。
English:
--------------------------------------------------
input:
--------------------------------------------------
output: Because I was protected by a fox fairy.
--------------------------------------------------
system: You are a helpful assistant that translates Chinese to English.
--------------------------------------------------
instruction: You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 上面说,这样写缺少细节。
English:
--------------------------------------------------
output: This time the opinions from above said it needed more detail.
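Editor's note: the script evidently renders a per-model training config from config/mac_template_4gpu.yaml before handing it to LLaMA-Factory. Note that with 4,528 samples and an effective batch of 2 (per device) x 8 (grad accumulation) x 4 (GPUs) = 64, one epoch is roughly 70 optimizer steps, which is why save_steps and eval_steps are 70 and checkpoints land at 70, 140, ..., 420 over 6 epochs. A minimal sketch of the config-generation step, assuming simple string placeholders in the template (the placeholder names and helper below are hypothetical):

import yaml  # PyYAML

def write_model_config(org: str, model: str, template: str,
                       template_path: str = "config/mac_template_4gpu.yaml") -> str:
    # Hypothetical placeholders; the real template keys may differ.
    with open(template_path) as f:
        text = f.read()
    text = (text.replace("ORG_NAME", org)
                .replace("MODEL_NAME", model)
                .replace("CHAT_TEMPLATE", template))
    config = yaml.safe_load(text)
    out_path = f"config/models/{model}.yaml"
    with open(out_path, "w") as f:
        yaml.safe_dump(config, f, sort_keys=False)
    return out_path

# Mirrors the log line: "Qwen Qwen2-72B-Instruct qwen config/mac_template_4gpu.yaml"
path = write_model_config("Qwen", "Qwen2-72B-Instruct", "qwen")
print("Writing to", path)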
08/09/2024 23:29:48 - INFO - llamafactory.cli - Initializing distributed tasks at: 127.0.0.1:26045
W0809 23:29:49.400000 140581888873472 torch/distributed/run.py:779]
W0809 23:29:49.400000 140581888873472 torch/distributed/run.py:779] *****************************************
W0809 23:29:49.400000 140581888873472 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0809 23:29:49.400000 140581888873472 torch/distributed/run.py:779] *****************************************
[W809 23:29:57.066308894 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
[W809 23:29:57.070599007 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
[W809 23:29:57.071350192 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
[W809 23:29:57.073219621 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - We recommend enable `upcast_layernorm` in quantized training.
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - `ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training.
08/09/2024 23:29:57 - INFO - llamafactory.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - We recommend enable `upcast_layernorm` in quantized training.
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - `ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training.
08/09/2024 23:29:57 - INFO - llamafactory.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - We recommend enable `upcast_layernorm` in quantized training.
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - `ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training.
08/09/2024 23:29:57 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - We recommend enable `upcast_layernorm` in quantized training.
08/09/2024 23:29:57 - WARNING - llamafactory.hparams.parser - `ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training.
08/09/2024 23:29:57 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
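Editor's note: the `upcast_layernorm` recommendation above refers to keeping normalization weights in float32 during quantized training for numerical stability. A minimal sketch of the general technique such an option enables (this is the common approach, not necessarily LLaMA-Factory's exact matching rule):

import torch

def upcast_layernorm(model: torch.nn.Module) -> None:
    # Keep 1-D norm weights in float32 while the rest of the model stays
    # quantized / bf16; assumption: matching parameter names on "norm"/"ln"
    # is how such options are usually implemented.
    for name, param in model.named_parameters():
        if param.ndim == 1 and any(key in name for key in ("norm", "ln")):
            param.data = param.data.to(torch.float32)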
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,901 >> loading file vocab.json from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/vocab.json
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,902 >> loading file merges.txt from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/merges.txt
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,902 >> loading file tokenizer.json from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/tokenizer.json
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,902 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,902 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2289] 2024-08-09 23:30:08,902 >> loading file tokenizer_config.json from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/tokenizer_config.json
[INFO|tokenization_utils_base.py:2533] 2024-08-09 23:30:09,074 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
08/09/2024 23:30:09 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
08/09/2024 23:30:09 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
08/09/2024 23:30:09 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
08/09/2024 23:30:09 - INFO - llamafactory.data.loader - Loading dataset alpaca_mac.json...
08/09/2024 23:30:09 - INFO - llamafactory.data.template - Replace eos token: <|im_end|>
Converting format of dataset (num_proc=16):   0%|          | 0/4528 [00:00<?, ?it/s]
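Editor's note: the training example printed below is the qwen chat template rendered over one alpaca_mac record, with every prompt token masked out of the loss via the -100 label id, so cross-entropy only scores the assistant response. A minimal sketch of how such input/label pairs are typically built with the tokenizer's chat-template API (illustrative, not LLaMA-Factory's exact code; token boundaries at the prompt/response seam are assumed to be stable):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant that translates Chinese to English."},
    {"role": "user", "content": "You will be given a Chinese sentence to translate. ...\nChinese: 全仗着狐仙搭救。\nEnglish:"},
]
response = "Because I was protected by a fox fairy."

# Token ids for the prompt alone, then for prompt + response.
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
full_ids = tokenizer.apply_chat_template(
    messages + [{"role": "assistant", "content": response}]
)

# Mask prompt positions with -100 so the loss only covers the response,
# matching the long run of -100 in `label_ids` below.
labels = [-100] * len(prompt_ids) + full_ids[len(prompt_ids):]
assert len(labels) == len(full_ids)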
training example:
inputs:
<|im_start|>system
You are a helpful assistant that translates Chinese to English.<|im_end|>
<|im_start|>user
You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 全仗着狐仙搭救。
English:<|im_end|>
<|im_start|>assistant
Because I was protected by a fox fairy.<|im_end|>
label_ids:
[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 17949, 358, 572, 2617, 553, 264, 38835, 44486, 13, 151645]
labels:
Because I was protected by a fox fairy.<|im_end|>
[INFO|configuration_utils.py:733] 2024-08-09 23:30:47,107 >> loading configuration file config.json from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/config.json
[INFO|configuration_utils.py:800] 2024-08-09 23:30:47,109 >> Model config Qwen2Config {
  "_name_or_path": "Qwen/Qwen2-72B-Instruct",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 29568,
  "max_position_embeddings": 32768,
  "max_window_layers": 80,
  "model_type": "qwen2",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.43.3",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 152064
}

08/09/2024 23:30:47 - INFO - llamafactory.model.model_utils.quantization - Quantizing model to 4 bit with bitsandbytes.
08/09/2024 23:30:47 - INFO - llamafactory.model.model_utils.quantization - Quantizing model to 4 bit with bitsandbytes.
08/09/2024 23:30:47 - INFO - llamafactory.model.model_utils.quantization - Quantizing model to 4 bit with bitsandbytes.
08/09/2024 23:30:47 - INFO - llamafactory.model.model_utils.quantization - Quantizing model to 4 bit with bitsandbytes.
[INFO|modeling_utils.py:3634] 2024-08-09 23:30:47,139 >> loading weights file model.safetensors from cache at /common/scratch/users/d/dh.huang.2023/transformers/hub/models--Qwen--Qwen2-72B-Instruct/snapshots/1af63c698f59c4235668ec9c1395468cb7cd7e79/model.safetensors.index.json
[INFO|modeling_utils.py:1572] 2024-08-09 23:30:47,143 >> Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16.
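Editor's note: the "Quantizing model to 4 bit with bitsandbytes" lines correspond to on-the-fly 4-bit quantization at load time, which `quantization_bit: 4` in the config presumably maps to. A hedged sketch of the equivalent direct transformers call (the quant_type and double-quant settings are assumptions about the defaults in play):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches "compute dtype: torch.bfloat16" above
    bnb_4bit_quant_type="nf4",              # assumption
    bnb_4bit_use_double_quant=True,         # assumption
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)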
[INFO|configuration_utils.py:1038] 2024-08-09 23:30:47,144 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151645
}

Loading checkpoint shards:   0%|          | 0/37 [00:00<?, ?it/s]
[rank0]: Traceback (most recent call last):
[rank0]:   File "/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/launcher.py", line 23, in <module>
[rank0]:     launch()
[rank0]:   File "/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
[rank0]:     run_exp()
[rank0]:   File "/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
[rank0]:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank0]:   File "/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 47, in run_sft
[rank0]:     model = load_model(tokenizer, model_args, finetuning_args, training_args.do_train)
[rank0]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/model/loader.py", line 153, in load_model
[rank0]:     model = AutoModelForCausalLM.from_pretrained(**init_kwargs)
[rank0]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
[rank0]:     return model_class.from_pretrained(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3916, in from_pretrained
[rank0]:     ) = cls._load_pretrained_model(
[rank0]:         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4390, in _load_pretrained_model
[rank0]:     new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
[rank0]:                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/transformers/modeling_utils.py", line 938, in _load_state_dict_into_meta_model
[rank0]:     hf_quantizer.create_quantized_param(model, param, param_name, param_device, state_dict, unexpected_keys)
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 217, in create_quantized_param
[rank0]:     new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(target_device)
[rank0]:                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 332, in to
[rank0]:     return self._quantize(device)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 297, in _quantize
[rank0]:     w_4bit, quant_state = bnb.functional.quantize_4bit(
[rank0]:                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/bitsandbytes/functional.py", line 1173, in quantize_4bit
[rank0]:     out = torch.zeros(((n + 1) // mod, 1), dtype=quant_storage, device=A.device)
[rank0]:           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 116.00 MiB.
GPU 0 has a total capacity of 44.31 GiB of which 71.00 MiB is free. Process 1465015 has 24.44 GiB memory in use. Including non-PyTorch memory, this process has 19.78 GiB memory in use. Of the allocated memory 18.93 GiB is allocated by PyTorch, and 136.43 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Loading checkpoint shards:  49%|████▊     | 18/37 [00:58<01:03, 3.33s/it]
Loading checkpoint shards:  49%|████▊     | 18/37 [00:58<01:03, 3.35s/it]
W0809 23:31:50.550000 140581888873472 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1621573 closing signal SIGTERM
W0809 23:31:50.551000 140581888873472 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1621574 closing signal SIGTERM
W0809 23:31:50.551000 140581888873472 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1621575 closing signal SIGTERM
E0809 23:31:51.266000 140581888873472 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 1621572) of binary: /common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/bin/python
Traceback (most recent call last):
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/bin/torchrun", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/torch/distributed/run.py", line 901, in main
    run(args)
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/common/home/users/d/dh.huang.2023/.conda/envs/llm-perf-bench/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/common/home/users/d/dh.huang.2023/common2/code/LLaMA-Factory/src/llamafactory/launcher.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-08-09_23:31:50
  host      : lithium.smu.edu.sg
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 1621572)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
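Editor's note: the root cause is visible in the OOM message itself: another process (PID 1465015) already held 24.44 GiB on GPU 0, matching the 25036 MiB shown by nvidia-smi at job start, so rank 0 could not fit its share of the 4-bit 72B weights (roughly 72e9 params x 0.5 bytes, about 36 GiB, plus overhead). A small, hedged sketch of a pre-flight check a launcher script could run before training (illustrative helper, not part of the repo):

import torch

def assert_gpus_are_free(min_free_gib: float = 40.0) -> None:
    """Fail fast if any visible GPU already has significant memory in use."""
    for i in range(torch.cuda.device_count()):
        free_b, total_b = torch.cuda.mem_get_info(i)  # bytes
        free_gib = free_b / 2**30
        used_gib = (total_b - free_b) / 2**30
        print(f"GPU {i}: {free_gib:.1f} GiB free, {used_gib:.1f} GiB in use")
        if free_gib < min_free_gib:
            raise RuntimeError(
                f"GPU {i} has only {free_gib:.1f} GiB free; "
                "another job may still be running (cf. PID 1465015 above)."
            )

assert_gpus_are_free()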
Current Directory: /common/home/users/d/dh.huang.2023/code/rapget-translation

Evaluating Qwen/Qwen2-72B-Instruct
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--bleu/9e0985c1200e367cce45605ce0ecb5ede079894e0f24f54613fca08eeb8aff76 (last modified on Wed May 8 07:48:30 2024) since it couldn't be found locally at evaluate-metric--bleu, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Wed May 8 07:48:31 2024) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--accuracy/f887c0aab52c2d38e1f8a215681126379eca617f96c447638f751434e8e65b14 (last modified on Wed Jun 19 06:03:39 2024) since it couldn't be found locally at evaluate-metric--accuracy, or remotely on the Hugging Face Hub.
loading env vars from: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/.env
working dir: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation
adding /common/home/users/d/dh.huang.2023/common2/code/rapget-translation to sys.path
loading: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/eval_modules/calc_repetitions.py
loading /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/llm_toolkit/translation_utils.py
Qwen/Qwen2-72B-Instruct llama-factory/saves/Qwen2-72B-Instruct True 1 results/mac-results_fine_tuned.csv
CUDA is available, we have found 4 GPU(s)
NVIDIA L40
CUDA version: 12.1
Evaluating model: Qwen/Qwen2-72B-Instruct on cuda
(0) GPU = NVIDIA L40. Max memory = 44.309 GB.
0.0 GB of memory reserved.
loading model: Qwen/Qwen2-72B-Instruct with adapter: None
Loading checkpoint shards:   0%|          | 0/37 [00:00<?, ?it/s]
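Editor's note: the "Max memory = 44.309 GB" and "GB of memory reserved" lines (here and after the prompt below) are straightforward torch.cuda bookkeeping. A minimal sketch of how those figures are presumably computed:

import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = round(gpu_stats.total_memory / 1024**3, 3)          # 44.309 GB on an L40
reserved = round(torch.cuda.max_memory_reserved() / 1024**3, 3)  # 0.0 GB before loading

print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{reserved} GB of memory reserved.")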
--------------------------------------------------
<|im_start|>system
You are a helpful assistant that translates Chinese to English.<|im_end|>
<|im_start|>user
You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 老耿端起枪,眯缝起一只三角眼,一搂扳机响了枪,冰雹般的金麻雀劈哩啪啦往下落,铁砂子在柳枝间飞迸着,嚓嚓有声。
English:<|im_end|>
<|im_start|>assistant
Old Geng picked up his shotgun, squinted, and pulled the trigger. Two sparrows crashed to the ground like hailstones as shotgun pellets tore noisily through the branches.<|im_end|>
--------------------------------------------------
prompt: <|im_start|>system
You are a helpful assistant that translates Chinese to English.<|im_end|>
<|im_start|>user
You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 老耿端起枪,眯缝起一只三角眼,一搂扳机响了枪,冰雹般的金麻雀劈哩啪啦往下落,铁砂子在柳枝间飞迸着,嚓嚓有声。
English:<|im_end|>
<|im_start|>assistant

(1) GPU = NVIDIA L40. Max memory = 44.309 GB.
15.434 GB of memory reserved.
Traceback (most recent call last):
  File "/common/home/users/d/dh.huang.2023/common2/code/rapget-translation/llm_toolkit/eval_epochs.py", line 135, in <module>
    evaluate_model_all_epochs(
  File "/common/home/users/d/dh.huang.2023/common2/code/rapget-translation/llm_toolkit/eval_epochs.py", line 45, in evaluate_model_all_epochs
    for d in os.listdir(adapter_path_base)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'llama-factory/saves/Qwen2-72B-Instruct'
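Editor's note: the training run for Qwen2-72B-Instruct crashed before saving any checkpoint, so its saves/ directory never existed and the epoch evaluator dies on os.listdir. A small defensive sketch for eval_epochs.py-style code (the helper and its sorting rule are illustrative, not the repo's actual implementation):

import os
import re

def list_checkpoints(adapter_path_base: str) -> list[str]:
    # Skip models whose fine-tuning produced no output instead of crashing.
    if not os.path.isdir(adapter_path_base):
        print(f"no adapter checkpoints found at: {adapter_path_base}, skipping")
        return []
    # Sort checkpoint-70, checkpoint-140, ... numerically, not lexically.
    dirs = [d for d in os.listdir(adapter_path_base) if d.startswith("checkpoint-")]
    return sorted(dirs, key=lambda d: int(re.sub(r"\D", "", d)))

print(list_checkpoints("llama-factory/saves/Qwen2-72B-Instruct"))  # -> []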
Current Directory: /common/home/users/d/dh.huang.2023/code/rapget-translation

Evaluating shenzhi-wang/Llama3.1-70B-Chinese-Chat
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--bleu/9e0985c1200e367cce45605ce0ecb5ede079894e0f24f54613fca08eeb8aff76 (last modified on Wed May 8 07:48:30 2024) since it couldn't be found locally at evaluate-metric--bleu, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Wed May 8 07:48:31 2024) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--meteor/ea1b63f27faab173b022be2d3cc3df2fc44a894247f833943ea98c8a7caeb1e8 (last modified on Wed Jun 19 06:03:37 2024) since it couldn't be found locally at evaluate-metric--meteor, or remotely on the Hugging Face Hub.
[nltk_data] Error loading wordnet:
[nltk_data] Error loading punkt:
[nltk_data] Error loading omw-1.4:
Using the latest cached version of the module from /common/scratch/users/d/dh.huang.2023/transformers/modules/evaluate_modules/metrics/evaluate-metric--accuracy/f887c0aab52c2d38e1f8a215681126379eca617f96c447638f751434e8e65b14 (last modified on Wed Jun 19 06:03:39 2024) since it couldn't be found locally at evaluate-metric--accuracy, or remotely on the Hugging Face Hub.
loading env vars from: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/.env
working dir: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation
adding /common/home/users/d/dh.huang.2023/common2/code/rapget-translation to sys.path
loading: /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/eval_modules/calc_repetitions.py
loading /common/home/users/d/dh.huang.2023/common2/code/rapget-translation/llm_toolkit/translation_utils.py
shenzhi-wang/Llama3.1-70B-Chinese-Chat llama-factory/saves/Llama3.1-70B-Chinese-Chat True 1 results/mac-results_fine_tuned.csv
CUDA is available, we have found 4 GPU(s)
NVIDIA L40
CUDA version: 12.1
Evaluating model: shenzhi-wang/Llama3.1-70B-Chinese-Chat on cuda
(0) GPU = NVIDIA L40. Max memory = 44.309 GB.
0.0 GB of memory reserved.
loading model: shenzhi-wang/Llama3.1-70B-Chinese-Chat with adapter: None
Loading checkpoint shards:   0%|          | 0/30 [00:00<?, ?it/s]
--------------------------------------------------
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that translates Chinese to English.<|eot_id|><|start_header_id|>user<|end_header_id|>

You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 老耿端起枪,眯缝起一只三角眼,一搂扳机响了枪,冰雹般的金麻雀劈哩啪啦往下落,铁砂子在柳枝间飞迸着,嚓嚓有声。
English:<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Old Geng picked up his shotgun, squinted, and pulled the trigger. Two sparrows crashed to the ground like hailstones as shotgun pellets tore noisily through the branches.<|eot_id|>
--------------------------------------------------
prompt: <|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that translates Chinese to English.<|eot_id|><|start_header_id|>user<|end_header_id|>

You will be given a Chinese sentence to translate. If it is an incomplete sentence, or if you are unsure about the meaning, simply copy the input text as your output. Do not output any additional sentence such as explanation or reasoning.
Chinese: 老耿端起枪,眯缝起一只三角眼,一搂扳机响了枪,冰雹般的金麻雀劈哩啪啦往下落,铁砂子在柳枝间飞迸着,嚓嚓有声。
English:<|eot_id|><|start_header_id|>assistant<|end_header_id|>

(1) GPU = NVIDIA L40. Max memory = 44.309 GB.
13.654 GB of memory reserved.
found 6 checkpoints: ['checkpoint-70', 'checkpoint-140', 'checkpoint-210', 'checkpoint-280', 'checkpoint-350', 'checkpoint-420']
Running from epoch 1 to 6
Epoch 1
loading adapter: llama-factory/saves/Llama3.1-70B-Chinese-Chat/checkpoint-70
  0%|          | 0/1133 [00:00<?, ?it/s]
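Editor's note: the epoch loop above loads one LoRA checkpoint per epoch onto the frozen base model and re-runs the 1,133-sentence eval set. A hedged sketch of that loop using the peft API (the eval call is a placeholder; only the peft functions are real APIs):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "shenzhi-wang/Llama3.1-70B-Chinese-Chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

adapter_base = "llama-factory/saves/Llama3.1-70B-Chinese-Chat"
checkpoints = ["checkpoint-70", "checkpoint-140", "checkpoint-210",
               "checkpoint-280", "checkpoint-350", "checkpoint-420"]

for epoch, ckpt in enumerate(checkpoints, start=1):
    print(f"Epoch {epoch}")
    print(f"loading adapter: {adapter_base}/{ckpt}")
    # Wrap the already-loaded base model with this epoch's LoRA weights.
    model = PeftModel.from_pretrained(base, f"{adapter_base}/{ckpt}")
    # ... run the 1,133-sentence translation eval with `model` here ...
    base = model.unload()  # assumption: detach the adapter before the next epoch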