qwen1.5-7b-chat-sa-v0.1 / running_log.txt
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file vocab.json
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file merges.txt
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json
05/29/2024 23:00:05 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json
05/29/2024 23:00:05 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/29/2024 23:00:05 - INFO - llmtuner.data.template - Replace eos token: <|im_end|>
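
The six tokenizer files above are what the standard transformers loader reads, and the template step then swaps the end-of-sequence token for Qwen's ChatML terminator. A minimal sketch of the equivalent calls (standard transformers API, not llmtuner-specific code):

from transformers import AutoTokenizer

# Reads vocab.json, merges.txt, tokenizer.json, added_tokens.json,
# special_tokens_map.json and tokenizer_config.json from the model directory.
tokenizer = AutoTokenizer.from_pretrained("/datas/huggingface/Qwen1.5-7B-Chat")

# llmtuner's chat template replaces the eos token with the ChatML terminator.
tokenizer.eos_token = "<|im_end|>"
print(tokenizer.eos_token_id)  # 151645, matching eos_token_id in the config below
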
05/29/2024 23:00:05 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl...
05/29/2024 23:00:05 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/29/2024 23:00:06 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl...
05/29/2024 23:00:06 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/29/2024 23:00:08 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl...
05/29/2024 23:00:08 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
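
The three "Checksum failed" warnings are benign: llmtuner compares each dataset file against an optional SHA-1 digest recorded in dataset_info.json, and none was provided here. A sketch of computing such a digest (the "file_sha1" key name follows LLaMA-Factory's dataset_info.json convention; treat it as an assumption for other versions):

import hashlib

def sha1_of(path: str) -> str:
    """Return the SHA-1 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest under the dataset's "file_sha1" key in dataset_info.json
# to silence the warning on the next run.
print(sha1_of("/datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl"))
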
05/29/2024 23:00:27 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Qwen1.5-7B-Chat/config.json
05/29/2024 23:00:27 - INFO - transformers.configuration_utils - Model config Qwen2Config {
  "_name_or_path": "/datas/huggingface/Qwen1.5-7B-Chat",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 32768,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
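
The total of 7,725,518,848 parameters reported at the LoRA line below follows from these fields; with tie_word_embeddings false, the input embedding and the output head are counted separately. A back-of-the-envelope check (the breakdown of the small remainder is inferred from the Qwen2 architecture):

# Dense parameter estimate from the config above (biases and norms omitted).
hidden, inter, layers, vocab = 4096, 11008, 32, 151936

embed   = vocab * hidden          # input embeddings
lm_head = vocab * hidden          # untied output head
attn    = 4 * hidden * hidden     # q, k, v, o projections per layer
mlp     = 3 * hidden * inter      # gate, up and down projections per layer

total = embed + lm_head + layers * (attn + mlp)
print(f"{total:,}")  # 7,720,665,088

# The logged 7,725,518,848 additionally counts the per-layer q/k/v biases
# (393,216), the RMSNorm weights (266,240) and the 4,194,304 LoRA parameters.
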
05/29/2024 23:00:27 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/Qwen1.5-7B-Chat/model.safetensors.index.json
05/29/2024 23:00:27 - INFO - transformers.modeling_utils - Instantiating Qwen2ForCausalLM model under default dtype torch.float16.
05/29/2024 23:00:27 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "use_cache": false
}
05/29/2024 23:02:53 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing Qwen2ForCausalLM.
05/29/2024 23:02:53 - INFO - transformers.modeling_utils - All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /datas/huggingface/Qwen1.5-7B-Chat.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
05/29/2024 23:02:53 - INFO - transformers.generation.configuration_utils - loading configuration file /datas/huggingface/Qwen1.5-7B-Chat/generation_config.json
05/29/2024 23:02:53 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8
}
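
These are the sampling defaults shipped with the checkpoint, not values chosen for this run. Passed explicitly to transformers' generate they look as follows (a sketch; device placement and chat templating elided):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/datas/huggingface/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,               # values mirror the generation config above
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
    max_new_tokens=64,            # illustrative; not part of the logged config
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
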
05/29/2024 23:02:53 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/29/2024 23:02:53 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/29/2024 23:02:53 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/29/2024 23:02:54 - INFO - llmtuner.model.loader - trainable params: 4194304 || all params: 7725518848 || trainable%: 0.0543
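
The trainable count is exactly what rank-8 LoRA on the attention q_proj and v_proj matrices yields; the log does not print lora_rank or lora_target, so that configuration (LLaMA-Factory's long-standing default) is an inference, checked below:

# Assumed LoRA shape: rank 8 on q_proj and v_proj (not printed in the log).
r, hidden, layers = 8, 4096, 32
per_matrix = r * (hidden + hidden)   # LoRA A: hidden -> r, LoRA B: r -> hidden
trainable = layers * 2 * per_matrix  # two target matrices per layer
print(trainable)                               # 4194304, matching the log
print(round(100 * trainable / 7725518848, 4))  # 0.0543, matching the log
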
05/29/2024 23:02:54 - INFO - transformers.trainer - Using auto half precision backend
05/29/2024 23:02:54 - INFO - transformers.trainer - ***** Running training *****
05/29/2024 23:02:54 - INFO - transformers.trainer - Num examples = 8,531
05/29/2024 23:02:54 - INFO - transformers.trainer - Num Epochs = 5
05/29/2024 23:02:54 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/29/2024 23:02:54 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/29/2024 23:02:54 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/29/2024 23:02:54 - INFO - transformers.trainer - Total optimization steps = 2,665
05/29/2024 23:02:54 - INFO - transformers.trainer - Number of trainable parameters = 4,194,304
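
The step count follows from the numbers above, assuming a single device (consistent with the effective batch 16 = 2 x 8):

import math

examples, micro_bs, accum, epochs = 8531, 2, 8, 5
micro_batches = math.ceil(examples / micro_bs)  # 4266 micro-batches per epoch
updates_per_epoch = micro_batches // accum      # 533 updates (remainder dropped)
print(updates_per_epoch * epochs)               # 2665, matching the log
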
05/29/2024 23:04:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2495, 'learning_rate': 5.0000e-05, 'epoch': 0.01}
05/29/2024 23:05:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.0913, 'learning_rate': 4.9998e-05, 'epoch': 0.02}
05/29/2024 23:06:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.9163, 'learning_rate': 4.9996e-05, 'epoch': 0.03}
05/29/2024 23:07:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9189, 'learning_rate': 4.9993e-05, 'epoch': 0.04}
05/29/2024 23:08:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.8406, 'learning_rate': 4.9989e-05, 'epoch': 0.05}
05/29/2024 23:09:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7963, 'learning_rate': 4.9984e-05, 'epoch': 0.06}
05/29/2024 23:10:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.8229, 'learning_rate': 4.9979e-05, 'epoch': 0.07}
05/29/2024 23:11:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8240, 'learning_rate': 4.9972e-05, 'epoch': 0.08}
05/29/2024 23:13:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.8236, 'learning_rate': 4.9965e-05, 'epoch': 0.08}
05/29/2024 23:14:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7696, 'learning_rate': 4.9957e-05, 'epoch': 0.09}
05/29/2024 23:15:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7585, 'learning_rate': 4.9947e-05, 'epoch': 0.10}
05/29/2024 23:16:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7549, 'learning_rate': 4.9937e-05, 'epoch': 0.11}
05/29/2024 23:17:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7345, 'learning_rate': 4.9927e-05, 'epoch': 0.12}
05/29/2024 23:18:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.7273, 'learning_rate': 4.9915e-05, 'epoch': 0.13}
05/29/2024 23:19:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.7803, 'learning_rate': 4.9902e-05, 'epoch': 0.14}
05/29/2024 23:20:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6957, 'learning_rate': 4.9889e-05, 'epoch': 0.15}
05/29/2024 23:22:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7076, 'learning_rate': 4.9875e-05, 'epoch': 0.16}
05/29/2024 23:23:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.7231, 'learning_rate': 4.9859e-05, 'epoch': 0.17}
05/29/2024 23:24:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6879, 'learning_rate': 4.9843e-05, 'epoch': 0.18}
05/29/2024 23:25:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6650, 'learning_rate': 4.9826e-05, 'epoch': 0.19}
05/29/2024 23:25:23 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-100
05/29/2024 23:25:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-100/tokenizer_config.json
05/29/2024 23:25:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-100/special_tokens_map.json
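
Each checkpoint-N directory saved this way contains the LoRA adapter and tokenizer files, not full model weights; for inference the adapter is reattached to the base model. A sketch using peft (which llmtuner builds on; the exact file layout may vary by version):

from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("/datas/huggingface/Qwen1.5-7B-Chat")
model = PeftModel.from_pretrained(
    base, "/datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-100"
)
model = model.merge_and_unload()  # optional: fold the LoRA weights into the base
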
05/29/2024 23:26:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7560, 'learning_rate': 4.9809e-05, 'epoch': 0.20}
05/29/2024 23:27:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6963, 'learning_rate': 4.9790e-05, 'epoch': 0.21}
05/29/2024 23:28:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7310, 'learning_rate': 4.9771e-05, 'epoch': 0.22}
05/29/2024 23:29:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.7397, 'learning_rate': 4.9750e-05, 'epoch': 0.23}
05/29/2024 23:31:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7006, 'learning_rate': 4.9729e-05, 'epoch': 0.23}
05/29/2024 23:32:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7231, 'learning_rate': 4.9707e-05, 'epoch': 0.24}
05/29/2024 23:33:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7323, 'learning_rate': 4.9684e-05, 'epoch': 0.25}
05/29/2024 23:34:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6721, 'learning_rate': 4.9660e-05, 'epoch': 0.26}
05/29/2024 23:35:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6722, 'learning_rate': 4.9636e-05, 'epoch': 0.27}
05/29/2024 23:36:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7208, 'learning_rate': 4.9610e-05, 'epoch': 0.28}
05/29/2024 23:37:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6621, 'learning_rate': 4.9584e-05, 'epoch': 0.29}
05/29/2024 23:38:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.7114, 'learning_rate': 4.9557e-05, 'epoch': 0.30}
05/29/2024 23:39:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7235, 'learning_rate': 4.9529e-05, 'epoch': 0.31}
05/29/2024 23:41:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6993, 'learning_rate': 4.9500e-05, 'epoch': 0.32}
05/29/2024 23:42:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.7186, 'learning_rate': 4.9470e-05, 'epoch': 0.33}
05/29/2024 23:43:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6867, 'learning_rate': 4.9439e-05, 'epoch': 0.34}
05/29/2024 23:44:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6907, 'learning_rate': 4.9408e-05, 'epoch': 0.35}
05/29/2024 23:45:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6693, 'learning_rate': 4.9376e-05, 'epoch': 0.36}
05/29/2024 23:46:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7322, 'learning_rate': 4.9342e-05, 'epoch': 0.37}
05/29/2024 23:48:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7351, 'learning_rate': 4.9308e-05, 'epoch': 0.38}
05/29/2024 23:48:06 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-200
05/29/2024 23:48:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-200/tokenizer_config.json
05/29/2024 23:48:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-200/special_tokens_map.json
05/29/2024 23:49:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6996, 'learning_rate': 4.9274e-05, 'epoch': 0.38}
05/29/2024 23:50:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.7124, 'learning_rate': 4.9238e-05, 'epoch': 0.39}
05/29/2024 23:51:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6734, 'learning_rate': 4.9201e-05, 'epoch': 0.40}
05/29/2024 23:52:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6914, 'learning_rate': 4.9164e-05, 'epoch': 0.41}
05/29/2024 23:53:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6584, 'learning_rate': 4.9126e-05, 'epoch': 0.42}
05/29/2024 23:55:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6326, 'learning_rate': 4.9087e-05, 'epoch': 0.43}
05/29/2024 23:56:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6931, 'learning_rate': 4.9047e-05, 'epoch': 0.44}
05/29/2024 23:57:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6769, 'learning_rate': 4.9006e-05, 'epoch': 0.45}
05/29/2024 23:58:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6957, 'learning_rate': 4.8965e-05, 'epoch': 0.46}
05/29/2024 23:59:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6949, 'learning_rate': 4.8922e-05, 'epoch': 0.47}
05/30/2024 00:00:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7084, 'learning_rate': 4.8879e-05, 'epoch': 0.48}
05/30/2024 00:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6439, 'learning_rate': 4.8835e-05, 'epoch': 0.49}
05/30/2024 00:02:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6540, 'learning_rate': 4.8790e-05, 'epoch': 0.50}
05/30/2024 00:04:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6513, 'learning_rate': 4.8744e-05, 'epoch': 0.51}
05/30/2024 00:05:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6406, 'learning_rate': 4.8698e-05, 'epoch': 0.52}
05/30/2024 00:06:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6761, 'learning_rate': 4.8650e-05, 'epoch': 0.53}
05/30/2024 00:07:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6440, 'learning_rate': 4.8602e-05, 'epoch': 0.53}
05/30/2024 00:08:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6619, 'learning_rate': 4.8553e-05, 'epoch': 0.54}
05/30/2024 00:09:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6855, 'learning_rate': 4.8503e-05, 'epoch': 0.55}
05/30/2024 00:10:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6716, 'learning_rate': 4.8453e-05, 'epoch': 0.56}
05/30/2024 00:10:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-300
05/30/2024 00:10:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-300/tokenizer_config.json
05/30/2024 00:10:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-300/special_tokens_map.json
05/30/2024 00:11:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6784, 'learning_rate': 4.8401e-05, 'epoch': 0.57}
05/30/2024 00:12:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6587, 'learning_rate': 4.8349e-05, 'epoch': 0.58}
05/30/2024 00:14:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6411, 'learning_rate': 4.8296e-05, 'epoch': 0.59}
05/30/2024 00:15:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7191, 'learning_rate': 4.8242e-05, 'epoch': 0.60}
05/30/2024 00:16:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6679, 'learning_rate': 4.8188e-05, 'epoch': 0.61}
05/30/2024 00:17:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6270, 'learning_rate': 4.8132e-05, 'epoch': 0.62}
05/30/2024 00:18:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7084, 'learning_rate': 4.8076e-05, 'epoch': 0.63}
05/30/2024 00:19:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6555, 'learning_rate': 4.8019e-05, 'epoch': 0.64}
05/30/2024 00:20:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6865, 'learning_rate': 4.7961e-05, 'epoch': 0.65}
05/30/2024 00:22:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6968, 'learning_rate': 4.7902e-05, 'epoch': 0.66}
05/30/2024 00:23:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6370, 'learning_rate': 4.7843e-05, 'epoch': 0.67}
05/30/2024 00:24:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6898, 'learning_rate': 4.7782e-05, 'epoch': 0.68}
05/30/2024 00:25:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6258, 'learning_rate': 4.7721e-05, 'epoch': 0.68}
05/30/2024 00:26:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6418, 'learning_rate': 4.7659e-05, 'epoch': 0.69}
05/30/2024 00:27:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6957, 'learning_rate': 4.7597e-05, 'epoch': 0.70}
05/30/2024 00:28:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6813, 'learning_rate': 4.7533e-05, 'epoch': 0.71}
05/30/2024 00:29:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6565, 'learning_rate': 4.7469e-05, 'epoch': 0.72}
05/30/2024 00:30:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6557, 'learning_rate': 4.7404e-05, 'epoch': 0.73}
05/30/2024 00:32:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6715, 'learning_rate': 4.7338e-05, 'epoch': 0.74}
05/30/2024 00:33:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6929, 'learning_rate': 4.7272e-05, 'epoch': 0.75}
05/30/2024 00:33:08 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-400
05/30/2024 00:33:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-400/tokenizer_config.json
05/30/2024 00:33:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-400/special_tokens_map.json
05/30/2024 00:34:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6474, 'learning_rate': 4.7204e-05, 'epoch': 0.76}
05/30/2024 00:35:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6519, 'learning_rate': 4.7136e-05, 'epoch': 0.77}
05/30/2024 00:36:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6600, 'learning_rate': 4.7068e-05, 'epoch': 0.78}
05/30/2024 00:37:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7041, 'learning_rate': 4.6998e-05, 'epoch': 0.79}
05/30/2024 00:38:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6676, 'learning_rate': 4.6928e-05, 'epoch': 0.80}
05/30/2024 00:39:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6382, 'learning_rate': 4.6856e-05, 'epoch': 0.81}
05/30/2024 00:41:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5761, 'learning_rate': 4.6784e-05, 'epoch': 0.82}
05/30/2024 00:42:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6149, 'learning_rate': 4.6712e-05, 'epoch': 0.83}
05/30/2024 00:43:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6610, 'learning_rate': 4.6638e-05, 'epoch': 0.83}
05/30/2024 00:44:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6477, 'learning_rate': 4.6564e-05, 'epoch': 0.84}
05/30/2024 00:45:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6267, 'learning_rate': 4.6489e-05, 'epoch': 0.85}
05/30/2024 00:46:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6776, 'learning_rate': 4.6414e-05, 'epoch': 0.86}
05/30/2024 00:47:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6450, 'learning_rate': 4.6337e-05, 'epoch': 0.87}
05/30/2024 00:48:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6486, 'learning_rate': 4.6260e-05, 'epoch': 0.88}
05/30/2024 00:50:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6784, 'learning_rate': 4.6182e-05, 'epoch': 0.89}
05/30/2024 00:51:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6595, 'learning_rate': 4.6103e-05, 'epoch': 0.90}
05/30/2024 00:52:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6403, 'learning_rate': 4.6024e-05, 'epoch': 0.91}
05/30/2024 00:53:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6334, 'learning_rate': 4.5944e-05, 'epoch': 0.92}
05/30/2024 00:54:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.7385, 'learning_rate': 4.5863e-05, 'epoch': 0.93}
05/30/2024 00:55:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6463, 'learning_rate': 4.5782e-05, 'epoch': 0.94}
05/30/2024 00:55:38 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-500
05/30/2024 00:55:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-500/tokenizer_config.json
05/30/2024 00:55:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-500/special_tokens_map.json
05/30/2024 00:56:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6693, 'learning_rate': 4.5699e-05, 'epoch': 0.95}
05/30/2024 00:57:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7034, 'learning_rate': 4.5616e-05, 'epoch': 0.96}
05/30/2024 00:59:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6520, 'learning_rate': 4.5533e-05, 'epoch': 0.97}
05/30/2024 01:00:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6399, 'learning_rate': 4.5448e-05, 'epoch': 0.98}
05/30/2024 01:01:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6426, 'learning_rate': 4.5363e-05, 'epoch': 0.98}
05/30/2024 01:02:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6600, 'learning_rate': 4.5277e-05, 'epoch': 0.99}
05/30/2024 01:03:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6243, 'learning_rate': 4.5191e-05, 'epoch': 1.00}
05/30/2024 01:04:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6001, 'learning_rate': 4.5103e-05, 'epoch': 1.01}
05/30/2024 01:05:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6071, 'learning_rate': 4.5016e-05, 'epoch': 1.02}
05/30/2024 01:07:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6617, 'learning_rate': 4.4927e-05, 'epoch': 1.03}
05/30/2024 01:08:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6291, 'learning_rate': 4.4838e-05, 'epoch': 1.04}
05/30/2024 01:09:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6011, 'learning_rate': 4.4748e-05, 'epoch': 1.05}
05/30/2024 01:10:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6631, 'learning_rate': 4.4657e-05, 'epoch': 1.06}
05/30/2024 01:11:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6544, 'learning_rate': 4.4565e-05, 'epoch': 1.07}
05/30/2024 01:12:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6332, 'learning_rate': 4.4473e-05, 'epoch': 1.08}
05/30/2024 01:14:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6426, 'learning_rate': 4.4381e-05, 'epoch': 1.09}
05/30/2024 01:15:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6362, 'learning_rate': 4.4287e-05, 'epoch': 1.10}
05/30/2024 01:16:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6136, 'learning_rate': 4.4193e-05, 'epoch': 1.11}
05/30/2024 01:17:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6360, 'learning_rate': 4.4098e-05, 'epoch': 1.12}
05/30/2024 01:18:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6253, 'learning_rate': 4.4003e-05, 'epoch': 1.13}
05/30/2024 01:18:38 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-600
05/30/2024 01:18:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-600/tokenizer_config.json
05/30/2024 01:18:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-600/special_tokens_map.json
05/30/2024 01:19:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6351, 'learning_rate': 4.3907e-05, 'epoch': 1.13}
05/30/2024 01:20:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5974, 'learning_rate': 4.3810e-05, 'epoch': 1.14}
05/30/2024 01:21:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6023, 'learning_rate': 4.3713e-05, 'epoch': 1.15}
05/30/2024 01:23:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6096, 'learning_rate': 4.3615e-05, 'epoch': 1.16}
05/30/2024 01:24:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6156, 'learning_rate': 4.3516e-05, 'epoch': 1.17}
05/30/2024 01:25:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6406, 'learning_rate': 4.3417e-05, 'epoch': 1.18}
05/30/2024 01:26:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6224, 'learning_rate': 4.3317e-05, 'epoch': 1.19}
05/30/2024 01:27:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6188, 'learning_rate': 4.3216e-05, 'epoch': 1.20}
05/30/2024 01:28:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6088, 'learning_rate': 4.3115e-05, 'epoch': 1.21}
05/30/2024 01:29:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6570, 'learning_rate': 4.3013e-05, 'epoch': 1.22}
05/30/2024 01:30:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6417, 'learning_rate': 4.2911e-05, 'epoch': 1.23}
05/30/2024 01:32:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6402, 'learning_rate': 4.2807e-05, 'epoch': 1.24}
05/30/2024 01:33:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6393, 'learning_rate': 4.2704e-05, 'epoch': 1.25}
05/30/2024 01:34:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7175, 'learning_rate': 4.2599e-05, 'epoch': 1.26}
05/30/2024 01:35:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6800, 'learning_rate': 4.2494e-05, 'epoch': 1.27}
05/30/2024 01:36:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5966, 'learning_rate': 4.2389e-05, 'epoch': 1.28}
05/30/2024 01:37:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6432, 'learning_rate': 4.2283e-05, 'epoch': 1.28}
05/30/2024 01:38:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6688, 'learning_rate': 4.2176e-05, 'epoch': 1.29}
05/30/2024 01:40:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6407, 'learning_rate': 4.2069e-05, 'epoch': 1.30}
05/30/2024 01:41:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6084, 'learning_rate': 4.1961e-05, 'epoch': 1.31}
05/30/2024 01:41:05 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-700
05/30/2024 01:41:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-700/tokenizer_config.json
05/30/2024 01:41:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-700/special_tokens_map.json
05/30/2024 01:42:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6407, 'learning_rate': 4.1852e-05, 'epoch': 1.32}
05/30/2024 01:43:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6430, 'learning_rate': 4.1743e-05, 'epoch': 1.33}
05/30/2024 01:44:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5899, 'learning_rate': 4.1633e-05, 'epoch': 1.34}
05/30/2024 01:45:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6299, 'learning_rate': 4.1523e-05, 'epoch': 1.35}
05/30/2024 01:46:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6462, 'learning_rate': 4.1412e-05, 'epoch': 1.36}
05/30/2024 01:47:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6558, 'learning_rate': 4.1301e-05, 'epoch': 1.37}
05/30/2024 01:48:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6316, 'learning_rate': 4.1189e-05, 'epoch': 1.38}
05/30/2024 01:49:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6143, 'learning_rate': 4.1076e-05, 'epoch': 1.39}
05/30/2024 01:51:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6358, 'learning_rate': 4.0963e-05, 'epoch': 1.40}
05/30/2024 01:52:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6276, 'learning_rate': 4.0849e-05, 'epoch': 1.41}
05/30/2024 01:53:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6238, 'learning_rate': 4.0735e-05, 'epoch': 1.42}
05/30/2024 01:54:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6775, 'learning_rate': 4.0620e-05, 'epoch': 1.43}
05/30/2024 01:55:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6330, 'learning_rate': 4.0505e-05, 'epoch': 1.43}
05/30/2024 01:56:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6307, 'learning_rate': 4.0389e-05, 'epoch': 1.44}
05/30/2024 01:57:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5959, 'learning_rate': 4.0273e-05, 'epoch': 1.45}
05/30/2024 01:58:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6334, 'learning_rate': 4.0156e-05, 'epoch': 1.46}
05/30/2024 02:00:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6440, 'learning_rate': 4.0038e-05, 'epoch': 1.47}
05/30/2024 02:01:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7045, 'learning_rate': 3.9920e-05, 'epoch': 1.48}
05/30/2024 02:02:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6335, 'learning_rate': 3.9802e-05, 'epoch': 1.49}
05/30/2024 02:03:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6705, 'learning_rate': 3.9683e-05, 'epoch': 1.50}
05/30/2024 02:03:25 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-800
05/30/2024 02:03:25 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-800/tokenizer_config.json
05/30/2024 02:03:25 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-800/special_tokens_map.json
05/30/2024 02:04:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6219, 'learning_rate': 3.9563e-05, 'epoch': 1.51}
05/30/2024 02:05:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6101, 'learning_rate': 3.9443e-05, 'epoch': 1.52}
05/30/2024 02:06:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5930, 'learning_rate': 3.9323e-05, 'epoch': 1.53}
05/30/2024 02:07:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6577, 'learning_rate': 3.9202e-05, 'epoch': 1.54}
05/30/2024 02:09:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6369, 'learning_rate': 3.9080e-05, 'epoch': 1.55}
05/30/2024 02:10:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6459, 'learning_rate': 3.8958e-05, 'epoch': 1.56}
05/30/2024 02:11:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5747, 'learning_rate': 3.8836e-05, 'epoch': 1.57}
05/30/2024 02:12:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6334, 'learning_rate': 3.8713e-05, 'epoch': 1.58}
05/30/2024 02:13:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5922, 'learning_rate': 3.8589e-05, 'epoch': 1.58}
05/30/2024 02:14:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6324, 'learning_rate': 3.8465e-05, 'epoch': 1.59}
05/30/2024 02:15:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6397, 'learning_rate': 3.8341e-05, 'epoch': 1.60}
05/30/2024 02:16:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6344, 'learning_rate': 3.8216e-05, 'epoch': 1.61}
05/30/2024 02:17:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5903, 'learning_rate': 3.8091e-05, 'epoch': 1.62}
05/30/2024 02:19:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6362, 'learning_rate': 3.7965e-05, 'epoch': 1.63}
05/30/2024 02:20:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6037, 'learning_rate': 3.7839e-05, 'epoch': 1.64}
05/30/2024 02:21:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6131, 'learning_rate': 3.7712e-05, 'epoch': 1.65}
05/30/2024 02:22:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6504, 'learning_rate': 3.7585e-05, 'epoch': 1.66}
05/30/2024 02:23:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5784, 'learning_rate': 3.7457e-05, 'epoch': 1.67}
05/30/2024 02:24:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6192, 'learning_rate': 3.7329e-05, 'epoch': 1.68}
05/30/2024 02:25:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6849, 'learning_rate': 3.7201e-05, 'epoch': 1.69}
05/30/2024 02:25:52 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-900
05/30/2024 02:25:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-900/tokenizer_config.json
05/30/2024 02:25:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-900/special_tokens_map.json
05/30/2024 02:27:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6645, 'learning_rate': 3.7072e-05, 'epoch': 1.70}
05/30/2024 02:28:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6728, 'learning_rate': 3.6943e-05, 'epoch': 1.71}
05/30/2024 02:29:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6476, 'learning_rate': 3.6813e-05, 'epoch': 1.72}
05/30/2024 02:30:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7412, 'learning_rate': 3.6683e-05, 'epoch': 1.73}
05/30/2024 02:31:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6201, 'learning_rate': 3.6553e-05, 'epoch': 1.73}
05/30/2024 02:32:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6296, 'learning_rate': 3.6422e-05, 'epoch': 1.74}
05/30/2024 02:33:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6199, 'learning_rate': 3.6291e-05, 'epoch': 1.75}
05/30/2024 02:35:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6726, 'learning_rate': 3.6159e-05, 'epoch': 1.76}
05/30/2024 02:36:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6077, 'learning_rate': 3.6027e-05, 'epoch': 1.77}
05/30/2024 02:37:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6546, 'learning_rate': 3.5894e-05, 'epoch': 1.78}
05/30/2024 02:38:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6134, 'learning_rate': 3.5762e-05, 'epoch': 1.79}
05/30/2024 02:39:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6833, 'learning_rate': 3.5628e-05, 'epoch': 1.80}
05/30/2024 02:40:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6008, 'learning_rate': 3.5495e-05, 'epoch': 1.81}
05/30/2024 02:41:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6444, 'learning_rate': 3.5361e-05, 'epoch': 1.82}
05/30/2024 02:43:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5960, 'learning_rate': 3.5227e-05, 'epoch': 1.83}
05/30/2024 02:44:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5834, 'learning_rate': 3.5092e-05, 'epoch': 1.84}
05/30/2024 02:45:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6094, 'learning_rate': 3.4957e-05, 'epoch': 1.85}
05/30/2024 02:46:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5964, 'learning_rate': 3.4822e-05, 'epoch': 1.86}
05/30/2024 02:47:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6180, 'learning_rate': 3.4686e-05, 'epoch': 1.87}
05/30/2024 02:48:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5960, 'learning_rate': 3.4550e-05, 'epoch': 1.88}
05/30/2024 02:48:50 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1000
05/30/2024 02:48:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1000/tokenizer_config.json
05/30/2024 02:48:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1000/special_tokens_map.json
05/30/2024 02:49:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5774, 'learning_rate': 3.4414e-05, 'epoch': 1.88}
05/30/2024 02:51:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5982, 'learning_rate': 3.4277e-05, 'epoch': 1.89}
05/30/2024 02:52:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6422, 'learning_rate': 3.4140e-05, 'epoch': 1.90}
05/30/2024 02:53:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6312, 'learning_rate': 3.4003e-05, 'epoch': 1.91}
05/30/2024 02:54:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5998, 'learning_rate': 3.3865e-05, 'epoch': 1.92}
05/30/2024 02:55:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6103, 'learning_rate': 3.3727e-05, 'epoch': 1.93}
05/30/2024 02:56:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6074, 'learning_rate': 3.3589e-05, 'epoch': 1.94}
05/30/2024 02:57:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6142, 'learning_rate': 3.3450e-05, 'epoch': 1.95}
05/30/2024 02:58:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6366, 'learning_rate': 3.3312e-05, 'epoch': 1.96}
05/30/2024 02:59:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5871, 'learning_rate': 3.3172e-05, 'epoch': 1.97}
05/30/2024 03:01:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6010, 'learning_rate': 3.3033e-05, 'epoch': 1.98}
05/30/2024 03:02:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5808, 'learning_rate': 3.2893e-05, 'epoch': 1.99}
05/30/2024 03:03:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6152, 'learning_rate': 3.2753e-05, 'epoch': 2.00}
05/30/2024 03:04:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5649, 'learning_rate': 3.2613e-05, 'epoch': 2.01}
05/30/2024 03:05:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5623, 'learning_rate': 3.2473e-05, 'epoch': 2.02}
05/30/2024 03:06:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6362, 'learning_rate': 3.2332e-05, 'epoch': 2.03}
05/30/2024 03:08:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6060, 'learning_rate': 3.2191e-05, 'epoch': 2.03}
05/30/2024 03:09:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6176, 'learning_rate': 3.2050e-05, 'epoch': 2.04}
05/30/2024 03:10:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6362, 'learning_rate': 3.1908e-05, 'epoch': 2.05}
05/30/2024 03:11:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5969, 'learning_rate': 3.1767e-05, 'epoch': 2.06}
05/30/2024 03:11:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1100
05/30/2024 03:11:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1100/tokenizer_config.json
05/30/2024 03:11:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1100/special_tokens_map.json
05/30/2024 03:12:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5587, 'learning_rate': 3.1625e-05, 'epoch': 2.07}
05/30/2024 03:13:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5828, 'learning_rate': 3.1482e-05, 'epoch': 2.08}
05/30/2024 03:14:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6010, 'learning_rate': 3.1340e-05, 'epoch': 2.09}
05/30/2024 03:15:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6003, 'learning_rate': 3.1197e-05, 'epoch': 2.10}
05/30/2024 03:17:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6198, 'learning_rate': 3.1054e-05, 'epoch': 2.11}
05/30/2024 03:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6133, 'learning_rate': 3.0911e-05, 'epoch': 2.12}
05/30/2024 03:19:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6002, 'learning_rate': 3.0768e-05, 'epoch': 2.13}
05/30/2024 03:20:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6979, 'learning_rate': 3.0625e-05, 'epoch': 2.14}
05/30/2024 03:21:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5819, 'learning_rate': 3.0481e-05, 'epoch': 2.15}
05/30/2024 03:22:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5804, 'learning_rate': 3.0337e-05, 'epoch': 2.16}
05/30/2024 03:23:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5913, 'learning_rate': 3.0193e-05, 'epoch': 2.17}
05/30/2024 03:25:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5993, 'learning_rate': 3.0049e-05, 'epoch': 2.18}
05/30/2024 03:26:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6061, 'learning_rate': 2.9904e-05, 'epoch': 2.18}
05/30/2024 03:27:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6630, 'learning_rate': 2.9760e-05, 'epoch': 2.19}
05/30/2024 03:28:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6158, 'learning_rate': 2.9615e-05, 'epoch': 2.20}
05/30/2024 03:29:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6348, 'learning_rate': 2.9470e-05, 'epoch': 2.21}
05/30/2024 03:30:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6263, 'learning_rate': 2.9325e-05, 'epoch': 2.22}
05/30/2024 03:31:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5918, 'learning_rate': 2.9180e-05, 'epoch': 2.23}
05/30/2024 03:33:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5724, 'learning_rate': 2.9035e-05, 'epoch': 2.24}
05/30/2024 03:34:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6142, 'learning_rate': 2.8889e-05, 'epoch': 2.25}
05/30/2024 03:34:08 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1200
05/30/2024 03:34:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1200/tokenizer_config.json
05/30/2024 03:34:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1200/special_tokens_map.json
05/30/2024 03:35:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6048, 'learning_rate': 2.8743e-05, 'epoch': 2.26}
05/30/2024 03:36:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6531, 'learning_rate': 2.8598e-05, 'epoch': 2.27}
05/30/2024 03:37:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5875, 'learning_rate': 2.8452e-05, 'epoch': 2.28}
05/30/2024 03:38:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5762, 'learning_rate': 2.8306e-05, 'epoch': 2.29}
05/30/2024 03:39:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6074, 'learning_rate': 2.8160e-05, 'epoch': 2.30}
05/30/2024 03:40:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.5865, 'learning_rate': 2.8013e-05, 'epoch': 2.31}
05/30/2024 03:42:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6015, 'learning_rate': 2.7867e-05, 'epoch': 2.32}
05/30/2024 03:43:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5973, 'learning_rate': 2.7721e-05, 'epoch': 2.33}
05/30/2024 03:44:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6309, 'learning_rate': 2.7574e-05, 'epoch': 2.33}
05/30/2024 03:45:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6291, 'learning_rate': 2.7428e-05, 'epoch': 2.34}
05/30/2024 03:46:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6297, 'learning_rate': 2.7281e-05, 'epoch': 2.35}
05/30/2024 03:47:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5846, 'learning_rate': 2.7134e-05, 'epoch': 2.36}
05/30/2024 03:48:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6009, 'learning_rate': 2.6987e-05, 'epoch': 2.37}
05/30/2024 03:49:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6807, 'learning_rate': 2.6840e-05, 'epoch': 2.38}
05/30/2024 03:51:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5976, 'learning_rate': 2.6693e-05, 'epoch': 2.39}
05/30/2024 03:52:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6617, 'learning_rate': 2.6546e-05, 'epoch': 2.40}
05/30/2024 03:53:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6327, 'learning_rate': 2.6399e-05, 'epoch': 2.41}
05/30/2024 03:54:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6101, 'learning_rate': 2.6252e-05, 'epoch': 2.42}
05/30/2024 03:55:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5672, 'learning_rate': 2.6105e-05, 'epoch': 2.43}
05/30/2024 03:56:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6172, 'learning_rate': 2.5958e-05, 'epoch': 2.44}
05/30/2024 03:56:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1300
05/30/2024 03:56:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1300/tokenizer_config.json
05/30/2024 03:56:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1300/special_tokens_map.json
05/30/2024 03:57:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6848, 'learning_rate': 2.5810e-05, 'epoch': 2.45}
05/30/2024 03:59:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6313, 'learning_rate': 2.5663e-05, 'epoch': 2.46}
05/30/2024 04:00:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5976, 'learning_rate': 2.5516e-05, 'epoch': 2.47}
05/30/2024 04:01:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5623, 'learning_rate': 2.5368e-05, 'epoch': 2.48}
05/30/2024 04:02:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6274, 'learning_rate': 2.5221e-05, 'epoch': 2.48}
05/30/2024 04:03:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5823, 'learning_rate': 2.5074e-05, 'epoch': 2.49}
05/30/2024 04:04:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5900, 'learning_rate': 2.4926e-05, 'epoch': 2.50}
05/30/2024 04:05:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6277, 'learning_rate': 2.4779e-05, 'epoch': 2.51}
05/30/2024 04:06:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5809, 'learning_rate': 2.4632e-05, 'epoch': 2.52}
05/30/2024 04:07:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5756, 'learning_rate': 2.4484e-05, 'epoch': 2.53}
05/30/2024 04:09:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6230, 'learning_rate': 2.4337e-05, 'epoch': 2.54}
05/30/2024 04:10:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6328, 'learning_rate': 2.4190e-05, 'epoch': 2.55}
05/30/2024 04:11:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5745, 'learning_rate': 2.4042e-05, 'epoch': 2.56}
05/30/2024 04:12:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5681, 'learning_rate': 2.3895e-05, 'epoch': 2.57}
05/30/2024 04:13:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6243, 'learning_rate': 2.3748e-05, 'epoch': 2.58}
05/30/2024 04:14:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6017, 'learning_rate': 2.3601e-05, 'epoch': 2.59}
05/30/2024 04:16:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5900, 'learning_rate': 2.3454e-05, 'epoch': 2.60}
05/30/2024 04:17:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5715, 'learning_rate': 2.3307e-05, 'epoch': 2.61}
05/30/2024 04:18:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6479, 'learning_rate': 2.3160e-05, 'epoch': 2.62}
05/30/2024 04:19:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6044, 'learning_rate': 2.3013e-05, 'epoch': 2.63}
05/30/2024 04:19:23 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1400
05/30/2024 04:19:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1400/tokenizer_config.json
05/30/2024 04:19:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1400/special_tokens_map.json
05/30/2024 04:20:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5893, 'learning_rate': 2.2866e-05, 'epoch': 2.63}
05/30/2024 04:21:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5733, 'learning_rate': 2.2719e-05, 'epoch': 2.64}
05/30/2024 04:22:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5649, 'learning_rate': 2.2572e-05, 'epoch': 2.65}
05/30/2024 04:23:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5842, 'learning_rate': 2.2426e-05, 'epoch': 2.66}
05/30/2024 04:24:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5905, 'learning_rate': 2.2279e-05, 'epoch': 2.67}
05/30/2024 04:25:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6128, 'learning_rate': 2.2133e-05, 'epoch': 2.68}
05/30/2024 04:27:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6135, 'learning_rate': 2.1987e-05, 'epoch': 2.69}
05/30/2024 04:28:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5995, 'learning_rate': 2.1840e-05, 'epoch': 2.70}
05/30/2024 04:29:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6354, 'learning_rate': 2.1694e-05, 'epoch': 2.71}
05/30/2024 04:30:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6152, 'learning_rate': 2.1548e-05, 'epoch': 2.72}
05/30/2024 04:31:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5884, 'learning_rate': 2.1402e-05, 'epoch': 2.73}
05/30/2024 04:32:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5956, 'learning_rate': 2.1257e-05, 'epoch': 2.74}
05/30/2024 04:33:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5954, 'learning_rate': 2.1111e-05, 'epoch': 2.75}
05/30/2024 04:34:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6026, 'learning_rate': 2.0965e-05, 'epoch': 2.76}
05/30/2024 04:35:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5849, 'learning_rate': 2.0820e-05, 'epoch': 2.77}
05/30/2024 04:37:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5873, 'learning_rate': 2.0675e-05, 'epoch': 2.78}
05/30/2024 04:38:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6195, 'learning_rate': 2.0530e-05, 'epoch': 2.78}
05/30/2024 04:39:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6176, 'learning_rate': 2.0385e-05, 'epoch': 2.79}
05/30/2024 04:40:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6364, 'learning_rate': 2.0240e-05, 'epoch': 2.80}
05/30/2024 04:41:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5921, 'learning_rate': 2.0096e-05, 'epoch': 2.81}
05/30/2024 04:41:35 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1500
05/30/2024 04:41:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1500/tokenizer_config.json
05/30/2024 04:41:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1500/special_tokens_map.json
05/30/2024 04:42:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5787, 'learning_rate': 1.9951e-05, 'epoch': 2.82}
05/30/2024 04:43:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5955, 'learning_rate': 1.9807e-05, 'epoch': 2.83}
05/30/2024 04:44:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6349, 'learning_rate': 1.9663e-05, 'epoch': 2.84}
05/30/2024 04:46:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5990, 'learning_rate': 1.9519e-05, 'epoch': 2.85}
05/30/2024 04:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6072, 'learning_rate': 1.9375e-05, 'epoch': 2.86}
05/30/2024 04:48:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5714, 'learning_rate': 1.9232e-05, 'epoch': 2.87}
05/30/2024 04:49:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6001, 'learning_rate': 1.9089e-05, 'epoch': 2.88}
05/30/2024 04:50:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5621, 'learning_rate': 1.8946e-05, 'epoch': 2.89}
05/30/2024 04:51:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6401, 'learning_rate': 1.8803e-05, 'epoch': 2.90}
05/30/2024 04:52:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5987, 'learning_rate': 1.8660e-05, 'epoch': 2.91}
05/30/2024 04:53:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 1.8518e-05, 'epoch': 2.92}
05/30/2024 04:55:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.6210, 'learning_rate': 1.8375e-05, 'epoch': 2.93}
05/30/2024 04:56:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6379, 'learning_rate': 1.8233e-05, 'epoch': 2.93}
05/30/2024 04:57:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5990, 'learning_rate': 1.8092e-05, 'epoch': 2.94}
05/30/2024 04:58:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6333, 'learning_rate': 1.7950e-05, 'epoch': 2.95}
05/30/2024 04:59:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6245, 'learning_rate': 1.7809e-05, 'epoch': 2.96}
05/30/2024 05:00:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5750, 'learning_rate': 1.7668e-05, 'epoch': 2.97}
05/30/2024 05:01:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6453, 'learning_rate': 1.7527e-05, 'epoch': 2.98}
05/30/2024 05:02:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6524, 'learning_rate': 1.7387e-05, 'epoch': 2.99}
05/30/2024 05:03:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5959, 'learning_rate': 1.7247e-05, 'epoch': 3.00}
05/30/2024 05:03:55 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1600
05/30/2024 05:03:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1600/tokenizer_config.json
05/30/2024 05:03:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1600/special_tokens_map.json
05/30/2024 05:05:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6342, 'learning_rate': 1.7107e-05, 'epoch': 3.01}
05/30/2024 05:06:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5901, 'learning_rate': 1.6967e-05, 'epoch': 3.02}
05/30/2024 05:07:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6125, 'learning_rate': 1.6828e-05, 'epoch': 3.03}
05/30/2024 05:08:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6208, 'learning_rate': 1.6688e-05, 'epoch': 3.04}
05/30/2024 05:09:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6135, 'learning_rate': 1.6550e-05, 'epoch': 3.05}
05/30/2024 05:10:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5635, 'learning_rate': 1.6411e-05, 'epoch': 3.06}
05/30/2024 05:11:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5519, 'learning_rate': 1.6273e-05, 'epoch': 3.07}
05/30/2024 05:12:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6453, 'learning_rate': 1.6135e-05, 'epoch': 3.08}
05/30/2024 05:14:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5444, 'learning_rate': 1.5997e-05, 'epoch': 3.08}
05/30/2024 05:15:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6103, 'learning_rate': 1.5860e-05, 'epoch': 3.09}
05/30/2024 05:16:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6560, 'learning_rate': 1.5723e-05, 'epoch': 3.10}
05/30/2024 05:17:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5887, 'learning_rate': 1.5586e-05, 'epoch': 3.11}
05/30/2024 05:18:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5990, 'learning_rate': 1.5450e-05, 'epoch': 3.12}
05/30/2024 05:19:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5677, 'learning_rate': 1.5314e-05, 'epoch': 3.13}
05/30/2024 05:20:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5458, 'learning_rate': 1.5178e-05, 'epoch': 3.14}
05/30/2024 05:21:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6356, 'learning_rate': 1.5043e-05, 'epoch': 3.15}
05/30/2024 05:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.5678, 'learning_rate': 1.4908e-05, 'epoch': 3.16}
05/30/2024 05:23:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5883, 'learning_rate': 1.4773e-05, 'epoch': 3.17}
05/30/2024 05:25:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5643, 'learning_rate': 1.4639e-05, 'epoch': 3.18}
05/30/2024 05:26:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5889, 'learning_rate': 1.4505e-05, 'epoch': 3.19}
05/30/2024 05:26:13 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1700
05/30/2024 05:26:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1700/tokenizer_config.json
05/30/2024 05:26:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1700/special_tokens_map.json
05/30/2024 05:27:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5550, 'learning_rate': 1.4372e-05, 'epoch': 3.20}
05/30/2024 05:28:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5856, 'learning_rate': 1.4238e-05, 'epoch': 3.21}
05/30/2024 05:29:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6232, 'learning_rate': 1.4106e-05, 'epoch': 3.22}
05/30/2024 05:30:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6432, 'learning_rate': 1.3973e-05, 'epoch': 3.23}
05/30/2024 05:31:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5808, 'learning_rate': 1.3841e-05, 'epoch': 3.23}
05/30/2024 05:32:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5563, 'learning_rate': 1.3709e-05, 'epoch': 3.24}
05/30/2024 05:34:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.5474, 'learning_rate': 1.3578e-05, 'epoch': 3.25}
05/30/2024 05:35:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6451, 'learning_rate': 1.3447e-05, 'epoch': 3.26}
05/30/2024 05:36:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5905, 'learning_rate': 1.3317e-05, 'epoch': 3.27}
05/30/2024 05:37:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5763, 'learning_rate': 1.3187e-05, 'epoch': 3.28}
05/30/2024 05:38:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5873, 'learning_rate': 1.3057e-05, 'epoch': 3.29}
05/30/2024 05:39:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5518, 'learning_rate': 1.2928e-05, 'epoch': 3.30}
05/30/2024 05:40:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5584, 'learning_rate': 1.2799e-05, 'epoch': 3.31}
05/30/2024 05:41:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5702, 'learning_rate': 1.2671e-05, 'epoch': 3.32}
05/30/2024 05:43:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5774, 'learning_rate': 1.2543e-05, 'epoch': 3.33}
05/30/2024 05:44:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6325, 'learning_rate': 1.2415e-05, 'epoch': 3.34}
05/30/2024 05:45:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5764, 'learning_rate': 1.2288e-05, 'epoch': 3.35}
05/30/2024 05:46:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6093, 'learning_rate': 1.2161e-05, 'epoch': 3.36}
05/30/2024 05:47:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6254, 'learning_rate': 1.2035e-05, 'epoch': 3.37}
05/30/2024 05:48:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5884, 'learning_rate': 1.1909e-05, 'epoch': 3.38}
05/30/2024 05:48:34 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1800
05/30/2024 05:48:34 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1800/tokenizer_config.json
05/30/2024 05:48:34 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1800/special_tokens_map.json
05/30/2024 05:49:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.5836, 'learning_rate': 1.1784e-05, 'epoch': 3.38}
05/30/2024 05:50:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6103, 'learning_rate': 1.1659e-05, 'epoch': 3.39}
05/30/2024 05:51:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5888, 'learning_rate': 1.1535e-05, 'epoch': 3.40}
05/30/2024 05:52:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6512, 'learning_rate': 1.1411e-05, 'epoch': 3.41}
05/30/2024 05:54:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5910, 'learning_rate': 1.1287e-05, 'epoch': 3.42}
05/30/2024 05:55:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5623, 'learning_rate': 1.1164e-05, 'epoch': 3.43}
05/30/2024 05:56:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6128, 'learning_rate': 1.1042e-05, 'epoch': 3.44}
05/30/2024 05:57:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5260, 'learning_rate': 1.0920e-05, 'epoch': 3.45}
05/30/2024 05:58:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.5670, 'learning_rate': 1.0798e-05, 'epoch': 3.46}
05/30/2024 06:00:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6084, 'learning_rate': 1.0677e-05, 'epoch': 3.47}
05/30/2024 06:01:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5740, 'learning_rate': 1.0557e-05, 'epoch': 3.48}
05/30/2024 06:02:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5817, 'learning_rate': 1.0437e-05, 'epoch': 3.49}
05/30/2024 06:03:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5398, 'learning_rate': 1.0317e-05, 'epoch': 3.50}
05/30/2024 06:04:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5761, 'learning_rate': 1.0198e-05, 'epoch': 3.51}
05/30/2024 06:05:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5762, 'learning_rate': 1.0080e-05, 'epoch': 3.52}
05/30/2024 06:06:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5656, 'learning_rate': 9.9618e-06, 'epoch': 3.53}
05/30/2024 06:07:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6402, 'learning_rate': 9.8444e-06, 'epoch': 3.53}
05/30/2024 06:09:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5627, 'learning_rate': 9.7274e-06, 'epoch': 3.54}
05/30/2024 06:10:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6265, 'learning_rate': 9.6110e-06, 'epoch': 3.55}
05/30/2024 06:11:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.5915, 'learning_rate': 9.4952e-06, 'epoch': 3.56}
05/30/2024 06:11:18 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1900
05/30/2024 06:11:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1900/tokenizer_config.json
05/30/2024 06:11:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-1900/special_tokens_map.json
05/30/2024 06:12:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6192, 'learning_rate': 9.3799e-06, 'epoch': 3.57}
05/30/2024 06:13:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6139, 'learning_rate': 9.2651e-06, 'epoch': 3.58}
05/30/2024 06:14:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.5788, 'learning_rate': 9.1508e-06, 'epoch': 3.59}
05/30/2024 06:15:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5967, 'learning_rate': 9.0372e-06, 'epoch': 3.60}
05/30/2024 06:16:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5525, 'learning_rate': 8.9240e-06, 'epoch': 3.61}
05/30/2024 06:18:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6459, 'learning_rate': 8.8115e-06, 'epoch': 3.62}
05/30/2024 06:19:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6542, 'learning_rate': 8.6995e-06, 'epoch': 3.63}
05/30/2024 06:20:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6144, 'learning_rate': 8.5880e-06, 'epoch': 3.64}
05/30/2024 06:21:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6243, 'learning_rate': 8.4772e-06, 'epoch': 3.65}
05/30/2024 06:22:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6097, 'learning_rate': 8.3669e-06, 'epoch': 3.66}
05/30/2024 06:23:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5890, 'learning_rate': 8.2571e-06, 'epoch': 3.67}
05/30/2024 06:24:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6196, 'learning_rate': 8.1480e-06, 'epoch': 3.68}
05/30/2024 06:25:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5785, 'learning_rate': 8.0395e-06, 'epoch': 3.68}
05/30/2024 06:27:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6941, 'learning_rate': 7.9315e-06, 'epoch': 3.69}
05/30/2024 06:28:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5711, 'learning_rate': 7.8241e-06, 'epoch': 3.70}
05/30/2024 06:29:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5851, 'learning_rate': 7.7173e-06, 'epoch': 3.71}
05/30/2024 06:30:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5877, 'learning_rate': 7.6112e-06, 'epoch': 3.72}
05/30/2024 06:31:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5841, 'learning_rate': 7.5056e-06, 'epoch': 3.73}
05/30/2024 06:33:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6185, 'learning_rate': 7.4006e-06, 'epoch': 3.74}
05/30/2024 06:34:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6278, 'learning_rate': 7.2963e-06, 'epoch': 3.75}
05/30/2024 06:34:15 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2000
05/30/2024 06:34:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2000/tokenizer_config.json
05/30/2024 06:34:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2000/special_tokens_map.json
05/30/2024 06:35:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6518, 'learning_rate': 7.1926e-06, 'epoch': 3.76}
05/30/2024 06:36:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5290, 'learning_rate': 7.0895e-06, 'epoch': 3.77}
05/30/2024 06:37:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5540, 'learning_rate': 6.9870e-06, 'epoch': 3.78}
05/30/2024 06:38:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.5859, 'learning_rate': 6.8851e-06, 'epoch': 3.79}
05/30/2024 06:40:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5853, 'learning_rate': 6.7839e-06, 'epoch': 3.80}
05/30/2024 06:41:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.5972, 'learning_rate': 6.6833e-06, 'epoch': 3.81}
05/30/2024 06:42:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6114, 'learning_rate': 6.5833e-06, 'epoch': 3.82}
05/30/2024 06:43:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5638, 'learning_rate': 6.4840e-06, 'epoch': 3.83}
05/30/2024 06:44:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6113, 'learning_rate': 6.3853e-06, 'epoch': 3.83}
05/30/2024 06:45:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5559, 'learning_rate': 6.2872e-06, 'epoch': 3.84}
05/30/2024 06:46:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6219, 'learning_rate': 6.1898e-06, 'epoch': 3.85}
05/30/2024 06:47:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.5907, 'learning_rate': 6.0931e-06, 'epoch': 3.86}
05/30/2024 06:48:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5563, 'learning_rate': 5.9970e-06, 'epoch': 3.87}
05/30/2024 06:50:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6009, 'learning_rate': 5.9016e-06, 'epoch': 3.88}
05/30/2024 06:51:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6581, 'learning_rate': 5.8069e-06, 'epoch': 3.89}
05/30/2024 06:52:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5945, 'learning_rate': 5.7128e-06, 'epoch': 3.90}
05/30/2024 06:53:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5586, 'learning_rate': 5.6194e-06, 'epoch': 3.91}
05/30/2024 06:54:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.5624, 'learning_rate': 5.5266e-06, 'epoch': 3.92}
05/30/2024 06:55:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6261, 'learning_rate': 5.4345e-06, 'epoch': 3.93}
05/30/2024 06:56:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6223, 'learning_rate': 5.3432e-06, 'epoch': 3.94}
05/30/2024 06:56:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2100
05/30/2024 06:56:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2100/tokenizer_config.json
05/30/2024 06:56:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2100/special_tokens_map.json
05/30/2024 06:57:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.5724, 'learning_rate': 5.2524e-06, 'epoch': 3.95}
05/30/2024 06:59:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5976, 'learning_rate': 5.1624e-06, 'epoch': 3.96}
05/30/2024 07:00:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.5715, 'learning_rate': 5.0731e-06, 'epoch': 3.97}
05/30/2024 07:01:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6233, 'learning_rate': 4.9845e-06, 'epoch': 3.98}
05/30/2024 07:02:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.5740, 'learning_rate': 4.8965e-06, 'epoch': 3.98}
05/30/2024 07:03:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5929, 'learning_rate': 4.8093e-06, 'epoch': 3.99}
05/30/2024 07:04:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5920, 'learning_rate': 4.7227e-06, 'epoch': 4.00}
05/30/2024 07:05:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6445, 'learning_rate': 4.6369e-06, 'epoch': 4.01}
05/30/2024 07:06:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.5722, 'learning_rate': 4.5518e-06, 'epoch': 4.02}
05/30/2024 07:08:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6088, 'learning_rate': 4.4673e-06, 'epoch': 4.03}
05/30/2024 07:09:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5924, 'learning_rate': 4.3836e-06, 'epoch': 4.04}
05/30/2024 07:10:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.5758, 'learning_rate': 4.3006e-06, 'epoch': 4.05}
05/30/2024 07:11:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.5952, 'learning_rate': 4.2184e-06, 'epoch': 4.06}
05/30/2024 07:12:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6040, 'learning_rate': 4.1368e-06, 'epoch': 4.07}
05/30/2024 07:13:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5906, 'learning_rate': 4.0560e-06, 'epoch': 4.08}
05/30/2024 07:14:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5407, 'learning_rate': 3.9759e-06, 'epoch': 4.09}
05/30/2024 07:15:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.5633, 'learning_rate': 3.8965e-06, 'epoch': 4.10}
05/30/2024 07:16:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5990, 'learning_rate': 3.8179e-06, 'epoch': 4.11}
05/30/2024 07:18:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5879, 'learning_rate': 3.7400e-06, 'epoch': 4.12}
05/30/2024 07:19:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5620, 'learning_rate': 3.6629e-06, 'epoch': 4.13}
05/30/2024 07:19:15 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2200
05/30/2024 07:19:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2200/tokenizer_config.json
05/30/2024 07:19:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2200/special_tokens_map.json
05/30/2024 07:20:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5675, 'learning_rate': 3.5864e-06, 'epoch': 4.14}
05/30/2024 07:21:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5573, 'learning_rate': 3.5108e-06, 'epoch': 4.14}
05/30/2024 07:22:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.5788, 'learning_rate': 3.4358e-06, 'epoch': 4.15}
05/30/2024 07:23:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5877, 'learning_rate': 3.3617e-06, 'epoch': 4.16}
05/30/2024 07:24:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5661, 'learning_rate': 3.2882e-06, 'epoch': 4.17}
05/30/2024 07:25:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6241, 'learning_rate': 3.2156e-06, 'epoch': 4.18}
05/30/2024 07:27:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.5932, 'learning_rate': 3.1436e-06, 'epoch': 4.19}
05/30/2024 07:28:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6129, 'learning_rate': 3.0725e-06, 'epoch': 4.20}
05/30/2024 07:29:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5467, 'learning_rate': 3.0021e-06, 'epoch': 4.21}
05/30/2024 07:30:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6347, 'learning_rate': 2.9325e-06, 'epoch': 4.22}
05/30/2024 07:31:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5657, 'learning_rate': 2.8636e-06, 'epoch': 4.23}
05/30/2024 07:32:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.5562, 'learning_rate': 2.7955e-06, 'epoch': 4.24}
05/30/2024 07:33:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5954, 'learning_rate': 2.7282e-06, 'epoch': 4.25}
05/30/2024 07:34:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6499, 'learning_rate': 2.6616e-06, 'epoch': 4.26}
05/30/2024 07:36:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5835, 'learning_rate': 2.5959e-06, 'epoch': 4.27}
05/30/2024 07:37:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5804, 'learning_rate': 2.5309e-06, 'epoch': 4.28}
05/30/2024 07:38:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5838, 'learning_rate': 2.4667e-06, 'epoch': 4.29}
05/30/2024 07:39:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6163, 'learning_rate': 2.4032e-06, 'epoch': 4.29}
05/30/2024 07:40:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5561, 'learning_rate': 2.3406e-06, 'epoch': 4.30}
05/30/2024 07:41:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5709, 'learning_rate': 2.2787e-06, 'epoch': 4.31}
05/30/2024 07:41:37 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2300
05/30/2024 07:41:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2300/tokenizer_config.json
05/30/2024 07:41:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2300/special_tokens_map.json
05/30/2024 07:42:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5357, 'learning_rate': 2.2176e-06, 'epoch': 4.32}
05/30/2024 07:43:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5840, 'learning_rate': 2.1574e-06, 'epoch': 4.33}
05/30/2024 07:44:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5967, 'learning_rate': 2.0979e-06, 'epoch': 4.34}
05/30/2024 07:46:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.5506, 'learning_rate': 2.0392e-06, 'epoch': 4.35}
05/30/2024 07:47:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5686, 'learning_rate': 1.9813e-06, 'epoch': 4.36}
05/30/2024 07:48:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.5544, 'learning_rate': 1.9242e-06, 'epoch': 4.37}
05/30/2024 07:49:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5756, 'learning_rate': 1.8679e-06, 'epoch': 4.38}
05/30/2024 07:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5522, 'learning_rate': 1.8124e-06, 'epoch': 4.39}
05/30/2024 07:51:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5979, 'learning_rate': 1.7578e-06, 'epoch': 4.40}
05/30/2024 07:52:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.5762, 'learning_rate': 1.7039e-06, 'epoch': 4.41}
05/30/2024 07:53:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6322, 'learning_rate': 1.6508e-06, 'epoch': 4.42}
05/30/2024 07:55:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 1.5986e-06, 'epoch': 4.43}
05/30/2024 07:56:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6210, 'learning_rate': 1.5471e-06, 'epoch': 4.44}
05/30/2024 07:57:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.5462, 'learning_rate': 1.4965e-06, 'epoch': 4.44}
05/30/2024 07:58:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5868, 'learning_rate': 1.4467e-06, 'epoch': 4.45}
05/30/2024 07:59:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5820, 'learning_rate': 1.3977e-06, 'epoch': 4.46}
05/30/2024 08:00:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5764, 'learning_rate': 1.3495e-06, 'epoch': 4.47}
05/30/2024 08:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.5745, 'learning_rate': 1.3022e-06, 'epoch': 4.48}
05/30/2024 08:03:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6206, 'learning_rate': 1.2557e-06, 'epoch': 4.49}
05/30/2024 08:04:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5657, 'learning_rate': 1.2100e-06, 'epoch': 4.50}
05/30/2024 08:04:12 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2400
05/30/2024 08:04:12 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2400/tokenizer_config.json
05/30/2024 08:04:12 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2400/special_tokens_map.json
05/30/2024 08:05:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.5487, 'learning_rate': 1.1651e-06, 'epoch': 4.51}
05/30/2024 08:06:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6923, 'learning_rate': 1.1210e-06, 'epoch': 4.52}
05/30/2024 08:07:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.5588, 'learning_rate': 1.0778e-06, 'epoch': 4.53}
05/30/2024 08:08:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5588, 'learning_rate': 1.0354e-06, 'epoch': 4.54}
05/30/2024 08:09:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.5864, 'learning_rate': 9.9389e-07, 'epoch': 4.55}
05/30/2024 08:10:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.5896, 'learning_rate': 9.5317e-07, 'epoch': 4.56}
05/30/2024 08:11:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6555, 'learning_rate': 9.1329e-07, 'epoch': 4.57}
05/30/2024 08:13:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.5814, 'learning_rate': 8.7424e-07, 'epoch': 4.58}
05/30/2024 08:14:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.5837, 'learning_rate': 8.3604e-07, 'epoch': 4.59}
05/30/2024 08:15:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6351, 'learning_rate': 7.9867e-07, 'epoch': 4.59}
05/30/2024 08:16:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6131, 'learning_rate': 7.6214e-07, 'epoch': 4.60}
05/30/2024 08:17:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5549, 'learning_rate': 7.2645e-07, 'epoch': 4.61}
05/30/2024 08:18:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.5807, 'learning_rate': 6.9161e-07, 'epoch': 4.62}
05/30/2024 08:19:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6375, 'learning_rate': 6.5761e-07, 'epoch': 4.63}
05/30/2024 08:21:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.5800, 'learning_rate': 6.2446e-07, 'epoch': 4.64}
05/30/2024 08:22:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6019, 'learning_rate': 5.9216e-07, 'epoch': 4.65}
05/30/2024 08:23:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.5911, 'learning_rate': 5.6070e-07, 'epoch': 4.66}
05/30/2024 08:24:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.5959, 'learning_rate': 5.3009e-07, 'epoch': 4.67}
05/30/2024 08:25:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.5758, 'learning_rate': 5.0033e-07, 'epoch': 4.68}
05/30/2024 08:26:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6281, 'learning_rate': 4.7143e-07, 'epoch': 4.69}
05/30/2024 08:26:40 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2500
05/30/2024 08:26:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2500/tokenizer_config.json
05/30/2024 08:26:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2500/special_tokens_map.json
05/30/2024 08:27:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.5875, 'learning_rate': 4.4337e-07, 'epoch': 4.70}
05/30/2024 08:28:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6487, 'learning_rate': 4.1617e-07, 'epoch': 4.71}
05/30/2024 08:30:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5580, 'learning_rate': 3.8982e-07, 'epoch': 4.72}
05/30/2024 08:31:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6207, 'learning_rate': 3.6433e-07, 'epoch': 4.73}
05/30/2024 08:32:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.5592, 'learning_rate': 3.3969e-07, 'epoch': 4.74}
05/30/2024 08:33:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.5857, 'learning_rate': 3.1591e-07, 'epoch': 4.74}
05/30/2024 08:34:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.5805, 'learning_rate': 2.9299e-07, 'epoch': 4.75}
05/30/2024 08:35:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.5519, 'learning_rate': 2.7093e-07, 'epoch': 4.76}
05/30/2024 08:36:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6442, 'learning_rate': 2.4972e-07, 'epoch': 4.77}
05/30/2024 08:38:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.5744, 'learning_rate': 2.2937e-07, 'epoch': 4.78}
05/30/2024 08:39:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6113, 'learning_rate': 2.0989e-07, 'epoch': 4.79}
05/30/2024 08:40:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6187, 'learning_rate': 1.9127e-07, 'epoch': 4.80}
05/30/2024 08:41:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.5540, 'learning_rate': 1.7351e-07, 'epoch': 4.81}
05/30/2024 08:42:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.5647, 'learning_rate': 1.5661e-07, 'epoch': 4.82}
05/30/2024 08:43:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6173, 'learning_rate': 1.4057e-07, 'epoch': 4.83}
05/30/2024 08:44:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.5928, 'learning_rate': 1.2540e-07, 'epoch': 4.84}
05/30/2024 08:46:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5262, 'learning_rate': 1.1109e-07, 'epoch': 4.85}
05/30/2024 08:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.5449, 'learning_rate': 9.7646e-08, 'epoch': 4.86}
05/30/2024 08:48:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.5811, 'learning_rate': 8.5068e-08, 'epoch': 4.87}
05/30/2024 08:49:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.5818, 'learning_rate': 7.3355e-08, 'epoch': 4.88}
05/30/2024 08:49:17 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2600
05/30/2024 08:49:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2600/tokenizer_config.json
05/30/2024 08:49:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/checkpoint-2600/special_tokens_map.json
05/30/2024 08:50:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6438, 'learning_rate': 6.2508e-08, 'epoch': 4.89}
05/30/2024 08:51:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.5808, 'learning_rate': 5.2528e-08, 'epoch': 4.89}
05/30/2024 08:52:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.5931, 'learning_rate': 4.3414e-08, 'epoch': 4.90}
05/30/2024 08:53:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.5510, 'learning_rate': 3.5167e-08, 'epoch': 4.91}
05/30/2024 08:54:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.5414, 'learning_rate': 2.7788e-08, 'epoch': 4.92}
05/30/2024 08:56:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.6081, 'learning_rate': 2.1276e-08, 'epoch': 4.93}
05/30/2024 08:57:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6069, 'learning_rate': 1.5632e-08, 'epoch': 4.94}
05/30/2024 08:58:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.5897, 'learning_rate': 1.0856e-08, 'epoch': 4.95}
05/30/2024 08:59:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.5580, 'learning_rate': 6.9479e-09, 'epoch': 4.96}
05/30/2024 09:00:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.5617, 'learning_rate': 3.9083e-09, 'epoch': 4.97}
05/30/2024 09:01:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6366, 'learning_rate': 1.7370e-09, 'epoch': 4.98}
05/30/2024 09:03:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.5921, 'learning_rate': 4.3426e-10, 'epoch': 4.99}
05/30/2024 09:04:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.5838, 'learning_rate': 0.0000e+00, 'epoch': 5.00}
05/30/2024 09:04:11 - INFO - transformers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
05/30/2024 09:04:11 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat
05/30/2024 09:04:11 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/tokenizer_config.json
05/30/2024 09:04:11 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-7B-Chat/special_tokens_map.json
05/30/2024 09:04:11 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}