[INFO|2024-12-28 20:32:56] parser.py:355 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.bfloat16
[INFO|2024-12-28 20:32:56] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 20:32:56] configuration_utils.py:746 >> Model config LlamaConfig {
  "_name_or_path": "meta-llama/Llama-3.2-1B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 32.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.46.1",
  "use_cache": true,
  "vocab_size": 128256
}

[INFO|2024-12-28 20:32:56] tokenization_utils_base.py:2211 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/tokenizer.json
[INFO|2024-12-28 20:32:56] tokenization_utils_base.py:2211 >> loading file tokenizer.model from cache at None
[INFO|2024-12-28 20:32:56] tokenization_utils_base.py:2211 >> loading file added_tokens.json from cache at None
[INFO|2024-12-28 20:32:56] tokenization_utils_base.py:2211 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/special_tokens_map.json
[INFO|2024-12-28 20:32:56] tokenization_utils_base.py:2211 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/tokenizer_config.json
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2475 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
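For reference, the config and tokenizer load reported above corresponds to the standard Hugging Face transformers calls sketched below. This is an illustrative sketch only, not code from the original run; it assumes transformers 4.46.x (per the log) and that the gated meta-llama/Llama-3.2-1B repository is accessible (e.g. after huggingface-cli login).

    # Sketch: reproduce the config/tokenizer load reported in the log above.
    from transformers import AutoConfig, AutoTokenizer

    model_id = "meta-llama/Llama-3.2-1B"
    config = AutoConfig.from_pretrained(model_id)        # reads config.json from the HF cache
    tokenizer = AutoTokenizer.from_pretrained(model_id)  # reads tokenizer.json / tokenizer_config.json

    # A few of the values echoed in the LlamaConfig dump above:
    print(config.hidden_size, config.num_hidden_layers, config.num_key_value_heads)
    # -> 2048 16 8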
[INFO|2024-12-28 20:32:57] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2211 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/tokenizer.json
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2211 >> loading file tokenizer.model from cache at None
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2211 >> loading file added_tokens.json from cache at None
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2211 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/special_tokens_map.json
[INFO|2024-12-28 20:32:57] tokenization_utils_base.py:2211 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/tokenizer_config.json
[INFO|2024-12-28 20:32:58] tokenization_utils_base.py:2475 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2024-12-28 20:32:58] logging.py:157 >> Add pad token: <|end_of_text|>
[INFO|2024-12-28 20:32:58] logging.py:157 >> Loading dataset TIGER-Lab/MathInstruct...
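The last two entries above (reusing <|end_of_text|> as the pad token and loading TIGER-Lab/MathInstruct) are handled internally by the training framework; a rough stand-alone equivalent with the datasets and transformers libraries would look like the sketch below. Illustrative only, under the same access assumptions as the previous sketch.

    # Sketch: pad-token setup and dataset download, mirroring the log above.
    from datasets import load_dataset
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
    # Llama-3.2-1B ships without a pad token; the log reuses <|end_of_text|> (the EOS token).
    tokenizer.pad_token = "<|end_of_text|>"

    dataset = load_dataset("TIGER-Lab/MathInstruct", split="train")
    print(len(dataset), dataset.column_names)  # instruction/output-style fields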
[INFO|2024-12-28 20:33:03] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[WARNING|2024-12-28 20:33:03] logging.py:162 >> Input length is smaller than max length. Consider increase input length.
[INFO|2024-12-28 20:33:03] logging.py:157 >> Using linear scaling strategy and setting scaling factor to 1.0
[INFO|2024-12-28 20:33:03] modeling_utils.py:3937 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/model.safetensors
[INFO|2024-12-28 20:33:03] modeling_utils.py:1670 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|2024-12-28 20:33:03] configuration_utils.py:1096 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128001
}
[INFO|2024-12-28 20:33:04] modeling_utils.py:4800 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|2024-12-28 20:33:04] modeling_utils.py:4808 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Llama-3.2-1B. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|2024-12-28 20:33:04] configuration_utils.py:1051 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/generation_config.json
[INFO|2024-12-28 20:33:04] configuration_utils.py:1096 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "temperature": 0.6,
  "top_p": 0.9
}
[INFO|2024-12-28 20:33:04] logging.py:157 >> Gradient checkpointing enabled.
[INFO|2024-12-28 20:33:04] logging.py:157 >> Using torch SDPA for faster training and inference.
[INFO|2024-12-28 20:33:04] logging.py:157 >> Upcasting trainable params to float32.
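The weight-loading and runtime options recorded above (bfloat16 weights, torch SDPA attention, gradient checkpointing) map onto the transformers calls sketched below. Again a sketch under the assumption of transformers 4.46, not the framework's actual code.

    # Sketch: load the base model the way the log describes.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.2-1B",
        torch_dtype=torch.bfloat16,   # "Instantiating ... under default dtype torch.bfloat16"
        attn_implementation="sdpa",   # "Using torch SDPA for faster training and inference"
    )
    model.to("cuda:0")                # single-GPU run per the first log line (device: cuda:0)
    model.gradient_checkpointing_enable()  # "Gradient checkpointing enabled."
    model.config.use_cache = False         # the KV cache is normally disabled when checkpointing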
[INFO|2024-12-28 20:33:04] logging.py:157 >> Fine-tuning method: LoRA
[INFO|2024-12-28 20:33:04] logging.py:157 >> Found linear modules: o_proj,down_proj,k_proj,gate_proj,up_proj,v_proj,q_proj
[INFO|2024-12-28 20:33:05] logging.py:157 >> trainable params: 5,636,096 || all params: 1,241,450,496 || trainable%: 0.4540
[INFO|2024-12-28 20:33:05] trainer.py:698 >> Using auto half precision backend
[INFO|2024-12-28 20:33:05] trainer.py:2313 >> ***** Running training *****
[INFO|2024-12-28 20:33:05] trainer.py:2314 >> Num examples = 10,000
[INFO|2024-12-28 20:33:05] trainer.py:2315 >> Num Epochs = 3
[INFO|2024-12-28 20:33:05] trainer.py:2316 >> Instantaneous batch size per device = 16
[INFO|2024-12-28 20:33:05] trainer.py:2319 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|2024-12-28 20:33:05] trainer.py:2320 >> Gradient Accumulation steps = 1
[INFO|2024-12-28 20:33:05] trainer.py:2321 >> Total optimization steps = 1,875
[INFO|2024-12-28 20:33:05] trainer.py:2322 >> Number of trainable parameters = 5,636,096
[INFO|2024-12-28 20:33:11] logging.py:157 >> {'loss': 1.2049, 'learning_rate': 2.0000e-04, 'epoch': 0.01}
[INFO|2024-12-28 20:33:18] logging.py:157 >> {'loss': 0.9333, 'learning_rate': 1.9999e-04, 'epoch': 0.02}
[INFO|2024-12-28 20:33:25] logging.py:157 >> {'loss': 0.8671, 'learning_rate': 1.9997e-04, 'epoch': 0.02}
[INFO|2024-12-28 20:33:31] logging.py:157 >> {'loss': 0.7979, 'learning_rate': 1.9994e-04, 'epoch': 0.03}
[INFO|2024-12-28 20:33:36] logging.py:157 >> {'loss': 0.7662, 'learning_rate': 1.9991e-04, 'epoch': 0.04}
[INFO|2024-12-28 20:33:43] logging.py:157 >> {'loss': 0.7929, 'learning_rate': 1.9987e-04, 'epoch': 0.05}
[INFO|2024-12-28 20:33:49] logging.py:157 >> {'loss': 0.7683, 'learning_rate': 1.9983e-04, 'epoch': 0.06}
[INFO|2024-12-28 20:33:56] logging.py:157 >> {'loss': 0.8667, 'learning_rate': 1.9978e-04, 'epoch': 0.06}
[INFO|2024-12-28 20:34:04] logging.py:157 >> {'loss': 0.8446, 'learning_rate': 1.9972e-04, 'epoch': 0.07}
[INFO|2024-12-28 20:34:09] logging.py:157 >> {'loss': 0.9051, 'learning_rate': 1.9965e-04, 'epoch': 0.08}
[INFO|2024-12-28 20:34:14] logging.py:157 >> {'loss': 0.7235, 'learning_rate': 1.9958e-04, 'epoch': 0.09}
[INFO|2024-12-28 20:34:18] logging.py:157 >> {'loss': 0.8169, 'learning_rate': 1.9950e-04, 'epoch': 0.10}
[INFO|2024-12-28 20:34:24] logging.py:157 >> {'loss': 0.8266, 'learning_rate': 1.9941e-04, 'epoch': 0.10}
[INFO|2024-12-28 20:34:29] logging.py:157 >> {'loss': 0.7580, 'learning_rate': 1.9931e-04, 'epoch': 0.11}
[INFO|2024-12-28 20:34:36] logging.py:157 >> {'loss': 0.7759, 'learning_rate': 1.9921e-04, 'epoch': 0.12}
[INFO|2024-12-28 20:34:42] logging.py:157 >> {'loss': 0.7797, 'learning_rate': 1.9910e-04, 'epoch': 0.13}
[INFO|2024-12-28 20:34:49] logging.py:157 >> {'loss': 0.7437, 'learning_rate': 1.9899e-04, 'epoch': 0.14}
[INFO|2024-12-28 20:34:56] logging.py:157 >> {'loss': 0.8043, 'learning_rate': 1.9887e-04, 'epoch': 0.14}
[INFO|2024-12-28 20:35:02] logging.py:157 >> {'loss': 0.7701, 'learning_rate': 1.9874e-04, 'epoch': 0.15}
[INFO|2024-12-28 20:35:10] logging.py:157 >> {'loss': 0.7090, 'learning_rate': 1.9860e-04, 'epoch': 0.16}
[INFO|2024-12-28 20:35:10] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-100
[INFO|2024-12-28 20:35:10] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 20:35:10]
configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:35:10] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-100/tokenizer_config.json [INFO|2024-12-28 20:35:10] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-100/special_tokens_map.json [INFO|2024-12-28 20:35:17] logging.py:157 >> {'loss': 0.7377, 'learning_rate': 1.9846e-04, 'epoch': 0.17} [INFO|2024-12-28 20:35:23] logging.py:157 >> {'loss': 0.8352, 'learning_rate': 1.9831e-04, 'epoch': 0.18} [INFO|2024-12-28 20:35:30] logging.py:157 >> {'loss': 0.7738, 'learning_rate': 1.9815e-04, 'epoch': 0.18} [INFO|2024-12-28 20:35:35] logging.py:157 >> {'loss': 0.8067, 'learning_rate': 1.9799e-04, 'epoch': 0.19} [INFO|2024-12-28 20:35:41] logging.py:157 >> {'loss': 0.7456, 'learning_rate': 1.9781e-04, 'epoch': 0.20} [INFO|2024-12-28 20:35:48] logging.py:157 >> {'loss': 0.7580, 'learning_rate': 1.9764e-04, 'epoch': 0.21} [INFO|2024-12-28 20:35:54] logging.py:157 >> {'loss': 0.7895, 'learning_rate': 1.9745e-04, 'epoch': 0.22} [INFO|2024-12-28 20:36:02] logging.py:157 >> {'loss': 0.7302, 'learning_rate': 1.9726e-04, 'epoch': 0.22} [INFO|2024-12-28 20:36:07] logging.py:157 >> {'loss': 0.8152, 'learning_rate': 1.9706e-04, 'epoch': 0.23} [INFO|2024-12-28 20:36:12] logging.py:157 >> {'loss': 0.8461, 'learning_rate': 1.9686e-04, 'epoch': 0.24} [INFO|2024-12-28 20:36:18] logging.py:157 >> {'loss': 0.7787, 'learning_rate': 1.9665e-04, 'epoch': 0.25} [INFO|2024-12-28 20:36:24] logging.py:157 >> {'loss': 0.7574, 'learning_rate': 1.9643e-04, 'epoch': 0.26} [INFO|2024-12-28 20:36:28] logging.py:157 >> {'loss': 0.8487, 'learning_rate': 1.9620e-04, 'epoch': 0.26} [INFO|2024-12-28 20:36:34] logging.py:157 >> {'loss': 0.6611, 'learning_rate': 1.9597e-04, 'epoch': 0.27} [INFO|2024-12-28 20:36:41] logging.py:157 >> {'loss': 0.7802, 'learning_rate': 1.9573e-04, 'epoch': 0.28} [INFO|2024-12-28 20:36:48] logging.py:157 >> {'loss': 0.6727, 'learning_rate': 1.9549e-04, 'epoch': 0.29} [INFO|2024-12-28 20:36:53] logging.py:157 >> {'loss': 0.7502, 'learning_rate': 1.9523e-04, 'epoch': 0.30} [INFO|2024-12-28 20:36:59] logging.py:157 >> {'loss': 0.8401, 'learning_rate': 1.9498e-04, 'epoch': 0.30} [INFO|2024-12-28 20:37:04] logging.py:157 >> {'loss': 0.7494, 'learning_rate': 1.9471e-04, 'epoch': 0.31} [INFO|2024-12-28 20:37:10] logging.py:157 >> {'loss': 0.7842, 'learning_rate': 1.9444e-04, 'epoch': 0.32} [INFO|2024-12-28 20:37:10] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-200 [INFO|2024-12-28 20:37:11] configuration_utils.py:679 >> loading configuration file config.json from cache at 
/root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:37:11] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:37:11] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-200/tokenizer_config.json [INFO|2024-12-28 20:37:11] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-200/special_tokens_map.json [INFO|2024-12-28 20:37:18] logging.py:157 >> {'loss': 0.8082, 'learning_rate': 1.9416e-04, 'epoch': 0.33} [INFO|2024-12-28 20:37:23] logging.py:157 >> {'loss': 0.7883, 'learning_rate': 1.9387e-04, 'epoch': 0.34} [INFO|2024-12-28 20:37:31] logging.py:157 >> {'loss': 0.7356, 'learning_rate': 1.9358e-04, 'epoch': 0.34} [INFO|2024-12-28 20:37:37] logging.py:157 >> {'loss': 0.7891, 'learning_rate': 1.9328e-04, 'epoch': 0.35} [INFO|2024-12-28 20:37:44] logging.py:157 >> {'loss': 0.7671, 'learning_rate': 1.9298e-04, 'epoch': 0.36} [INFO|2024-12-28 20:37:51] logging.py:157 >> {'loss': 0.6608, 'learning_rate': 1.9267e-04, 'epoch': 0.37} [INFO|2024-12-28 20:38:00] logging.py:157 >> {'loss': 0.6470, 'learning_rate': 1.9235e-04, 'epoch': 0.38} [INFO|2024-12-28 20:38:08] logging.py:157 >> {'loss': 0.7290, 'learning_rate': 1.9202e-04, 'epoch': 0.38} [INFO|2024-12-28 20:38:13] logging.py:157 >> {'loss': 0.6713, 'learning_rate': 1.9169e-04, 'epoch': 0.39} [INFO|2024-12-28 20:38:22] logging.py:157 >> {'loss': 0.7049, 'learning_rate': 1.9135e-04, 'epoch': 0.40} [INFO|2024-12-28 20:38:30] logging.py:157 >> {'loss': 0.7419, 'learning_rate': 1.9101e-04, 'epoch': 0.41} [INFO|2024-12-28 20:38:35] logging.py:157 >> {'loss': 0.7148, 'learning_rate': 1.9066e-04, 'epoch': 0.42} [INFO|2024-12-28 20:38:43] logging.py:157 >> {'loss': 0.7493, 'learning_rate': 1.9030e-04, 'epoch': 0.42} [INFO|2024-12-28 20:38:49] logging.py:157 >> {'loss': 0.7652, 'learning_rate': 1.8994e-04, 'epoch': 0.43} [INFO|2024-12-28 20:38:56] logging.py:157 >> {'loss': 0.7438, 'learning_rate': 1.8957e-04, 'epoch': 0.44} [INFO|2024-12-28 20:39:01] logging.py:157 >> {'loss': 0.7683, 'learning_rate': 1.8920e-04, 'epoch': 0.45} [INFO|2024-12-28 20:39:06] logging.py:157 >> {'loss': 0.8115, 'learning_rate': 1.8881e-04, 'epoch': 0.46} [INFO|2024-12-28 20:39:12] logging.py:157 >> {'loss': 0.8335, 'learning_rate': 1.8843e-04, 'epoch': 0.46} [INFO|2024-12-28 20:39:19] logging.py:157 >> {'loss': 0.6933, 'learning_rate': 1.8803e-04, 'epoch': 0.47} [INFO|2024-12-28 20:39:23] logging.py:157 >> {'loss': 0.7515, 'learning_rate': 1.8763e-04, 'epoch': 0.48} [INFO|2024-12-28 20:39:23] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-300 
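The LoRA summary and training header earlier in the log (seven target projection modules, 5,636,096 trainable parameters, per-device batch size 16, 3 epochs over 10,000 examples, learning rate 2.0e-4, 1,875 optimization steps, checkpoints every 100 steps under saves/Llama-3.2-1B/lora/llama3.2-1b) can be approximated with peft and the Hugging Face Trainer as sketched below. The adapter rank is not printed in the log: r=8 with alpha 16 and zero dropout is an assumption that happens to reproduce the reported trainable-parameter count, and the cosine schedule and 5-step logging interval are inferred from the learning-rate and epoch traces.

    # Sketch: LoRA + Trainer settings consistent with this log (assumed values noted).
    from peft import LoraConfig, get_peft_model
    from transformers import TrainingArguments

    lora_config = LoraConfig(
        r=8,                # assumed; with these targets, 16 layers * 8 * 44,032 = 5,636,096 params
        lora_alpha=16,      # assumed
        lora_dropout=0.0,   # assumed
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],  # "Found linear modules" above
    )
    model = get_peft_model(model, lora_config)  # `model` from the earlier loading sketch
    model.print_trainable_parameters()          # trainable params: 5,636,096 || ...

    training_args = TrainingArguments(
        output_dir="saves/Llama-3.2-1B/lora/llama3.2-1b",
        per_device_train_batch_size=16,
        gradient_accumulation_steps=1,
        num_train_epochs=3,
        learning_rate=2.0e-4,
        lr_scheduler_type="cosine",  # consistent with the logged learning-rate decay
        logging_steps=5,             # one log entry per 5 of the 1,875 optimization steps
        save_steps=100,              # checkpoints at step 100, 200, 300, ...
        bf16=True,
    )

Passing these arguments to transformers.Trainer together with the tokenized dataset and calling trainer.train() would emit step logs and checkpoint-* directories of the shape seen in the remainder of this log.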
[INFO|2024-12-28 20:39:23] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:39:23] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:39:24] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-300/tokenizer_config.json [INFO|2024-12-28 20:39:24] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-300/special_tokens_map.json [INFO|2024-12-28 20:39:31] logging.py:157 >> {'loss': 0.6931, 'learning_rate': 1.8722e-04, 'epoch': 0.49} [INFO|2024-12-28 20:39:37] logging.py:157 >> {'loss': 0.7820, 'learning_rate': 1.8681e-04, 'epoch': 0.50} [INFO|2024-12-28 20:39:44] logging.py:157 >> {'loss': 0.7361, 'learning_rate': 1.8639e-04, 'epoch': 0.50} [INFO|2024-12-28 20:39:49] logging.py:157 >> {'loss': 0.7443, 'learning_rate': 1.8597e-04, 'epoch': 0.51} [INFO|2024-12-28 20:39:57] logging.py:157 >> {'loss': 0.7221, 'learning_rate': 1.8554e-04, 'epoch': 0.52} [INFO|2024-12-28 20:40:02] logging.py:157 >> {'loss': 0.7622, 'learning_rate': 1.8510e-04, 'epoch': 0.53} [INFO|2024-12-28 20:40:08] logging.py:157 >> {'loss': 0.8556, 'learning_rate': 1.8466e-04, 'epoch': 0.54} [INFO|2024-12-28 20:40:15] logging.py:157 >> {'loss': 0.7814, 'learning_rate': 1.8421e-04, 'epoch': 0.54} [INFO|2024-12-28 20:40:24] logging.py:157 >> {'loss': 0.7220, 'learning_rate': 1.8375e-04, 'epoch': 0.55} [INFO|2024-12-28 20:40:28] logging.py:157 >> {'loss': 0.7903, 'learning_rate': 1.8329e-04, 'epoch': 0.56} [INFO|2024-12-28 20:40:33] logging.py:157 >> {'loss': 0.6996, 'learning_rate': 1.8283e-04, 'epoch': 0.57} [INFO|2024-12-28 20:40:39] logging.py:157 >> {'loss': 0.7730, 'learning_rate': 1.8235e-04, 'epoch': 0.58} [INFO|2024-12-28 20:40:45] logging.py:157 >> {'loss': 0.7280, 'learning_rate': 1.8188e-04, 'epoch': 0.58} [INFO|2024-12-28 20:40:51] logging.py:157 >> {'loss': 0.7659, 'learning_rate': 1.8139e-04, 'epoch': 0.59} [INFO|2024-12-28 20:40:56] logging.py:157 >> {'loss': 0.7039, 'learning_rate': 1.8090e-04, 'epoch': 0.60} [INFO|2024-12-28 20:41:02] logging.py:157 >> {'loss': 0.7125, 'learning_rate': 1.8041e-04, 'epoch': 0.61} [INFO|2024-12-28 20:41:09] logging.py:157 >> {'loss': 0.6980, 'learning_rate': 1.7991e-04, 'epoch': 0.62} [INFO|2024-12-28 20:41:14] logging.py:157 >> {'loss': 0.8255, 'learning_rate': 1.7940e-04, 'epoch': 0.62} [INFO|2024-12-28 20:41:19] logging.py:157 >> {'loss': 0.6616, 'learning_rate': 1.7889e-04, 'epoch': 0.63} [INFO|2024-12-28 20:41:24] logging.py:157 >> {'loss': 0.7452, 'learning_rate': 1.7837e-04, 'epoch': 0.64} [INFO|2024-12-28 
20:41:24] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-400 [INFO|2024-12-28 20:41:24] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:41:24] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:41:25] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-400/tokenizer_config.json [INFO|2024-12-28 20:41:25] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-400/special_tokens_map.json [INFO|2024-12-28 20:41:32] logging.py:157 >> {'loss': 0.7652, 'learning_rate': 1.7785e-04, 'epoch': 0.65} [INFO|2024-12-28 20:41:40] logging.py:157 >> {'loss': 0.7793, 'learning_rate': 1.7732e-04, 'epoch': 0.66} [INFO|2024-12-28 20:41:45] logging.py:157 >> {'loss': 0.6875, 'learning_rate': 1.7678e-04, 'epoch': 0.66} [INFO|2024-12-28 20:41:54] logging.py:157 >> {'loss': 0.7465, 'learning_rate': 1.7624e-04, 'epoch': 0.67} [INFO|2024-12-28 20:42:01] logging.py:157 >> {'loss': 0.7205, 'learning_rate': 1.7570e-04, 'epoch': 0.68} [INFO|2024-12-28 20:42:08] logging.py:157 >> {'loss': 0.6589, 'learning_rate': 1.7515e-04, 'epoch': 0.69} [INFO|2024-12-28 20:42:13] logging.py:157 >> {'loss': 0.7035, 'learning_rate': 1.7459e-04, 'epoch': 0.70} [INFO|2024-12-28 20:42:20] logging.py:157 >> {'loss': 0.7870, 'learning_rate': 1.7403e-04, 'epoch': 0.70} [INFO|2024-12-28 20:42:29] logging.py:157 >> {'loss': 0.7515, 'learning_rate': 1.7347e-04, 'epoch': 0.71} [INFO|2024-12-28 20:42:34] logging.py:157 >> {'loss': 0.7199, 'learning_rate': 1.7290e-04, 'epoch': 0.72} [INFO|2024-12-28 20:42:41] logging.py:157 >> {'loss': 0.8037, 'learning_rate': 1.7232e-04, 'epoch': 0.73} [INFO|2024-12-28 20:42:46] logging.py:157 >> {'loss': 0.7502, 'learning_rate': 1.7174e-04, 'epoch': 0.74} [INFO|2024-12-28 20:42:53] logging.py:157 >> {'loss': 0.7446, 'learning_rate': 1.7115e-04, 'epoch': 0.74} [INFO|2024-12-28 20:43:00] logging.py:157 >> {'loss': 0.6507, 'learning_rate': 1.7056e-04, 'epoch': 0.75} [INFO|2024-12-28 20:43:06] logging.py:157 >> {'loss': 0.7164, 'learning_rate': 1.6997e-04, 'epoch': 0.76} [INFO|2024-12-28 20:43:12] logging.py:157 >> {'loss': 0.7621, 'learning_rate': 1.6937e-04, 'epoch': 0.77} [INFO|2024-12-28 20:43:18] logging.py:157 >> {'loss': 0.7623, 'learning_rate': 1.6876e-04, 'epoch': 0.78} [INFO|2024-12-28 20:43:26] logging.py:157 >> {'loss': 0.6606, 'learning_rate': 1.6815e-04, 'epoch': 0.78} [INFO|2024-12-28 20:43:34] logging.py:157 >> {'loss': 0.6941, 'learning_rate': 1.6753e-04, 'epoch': 0.79} [INFO|2024-12-28 
20:43:39] logging.py:157 >> {'loss': 0.6841, 'learning_rate': 1.6691e-04, 'epoch': 0.80} [INFO|2024-12-28 20:43:39] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-500 [INFO|2024-12-28 20:43:39] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:43:39] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:43:40] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-500/tokenizer_config.json [INFO|2024-12-28 20:43:40] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-500/special_tokens_map.json [INFO|2024-12-28 20:43:47] logging.py:157 >> {'loss': 0.6996, 'learning_rate': 1.6629e-04, 'epoch': 0.81} [INFO|2024-12-28 20:43:54] logging.py:157 >> {'loss': 0.7542, 'learning_rate': 1.6566e-04, 'epoch': 0.82} [INFO|2024-12-28 20:44:00] logging.py:157 >> {'loss': 0.7175, 'learning_rate': 1.6502e-04, 'epoch': 0.82} [INFO|2024-12-28 20:44:05] logging.py:157 >> {'loss': 0.7565, 'learning_rate': 1.6439e-04, 'epoch': 0.83} [INFO|2024-12-28 20:44:10] logging.py:157 >> {'loss': 0.7339, 'learning_rate': 1.6374e-04, 'epoch': 0.84} [INFO|2024-12-28 20:44:16] logging.py:157 >> {'loss': 0.5690, 'learning_rate': 1.6309e-04, 'epoch': 0.85} [INFO|2024-12-28 20:44:20] logging.py:157 >> {'loss': 0.7556, 'learning_rate': 1.6244e-04, 'epoch': 0.86} [INFO|2024-12-28 20:44:25] logging.py:157 >> {'loss': 0.7084, 'learning_rate': 1.6179e-04, 'epoch': 0.86} [INFO|2024-12-28 20:44:31] logging.py:157 >> {'loss': 0.6935, 'learning_rate': 1.6113e-04, 'epoch': 0.87} [INFO|2024-12-28 20:44:36] logging.py:157 >> {'loss': 0.7076, 'learning_rate': 1.6046e-04, 'epoch': 0.88} [INFO|2024-12-28 20:44:42] logging.py:157 >> {'loss': 0.7151, 'learning_rate': 1.5979e-04, 'epoch': 0.89} [INFO|2024-12-28 20:44:48] logging.py:157 >> {'loss': 0.7001, 'learning_rate': 1.5912e-04, 'epoch': 0.90} [INFO|2024-12-28 20:44:53] logging.py:157 >> {'loss': 0.7285, 'learning_rate': 1.5844e-04, 'epoch': 0.90} [INFO|2024-12-28 20:44:59] logging.py:157 >> {'loss': 0.8041, 'learning_rate': 1.5776e-04, 'epoch': 0.91} [INFO|2024-12-28 20:45:05] logging.py:157 >> {'loss': 0.7353, 'learning_rate': 1.5707e-04, 'epoch': 0.92} [INFO|2024-12-28 20:45:11] logging.py:157 >> {'loss': 0.7792, 'learning_rate': 1.5638e-04, 'epoch': 0.93} [INFO|2024-12-28 20:45:18] logging.py:157 >> {'loss': 1.0121, 'learning_rate': 1.5569e-04, 'epoch': 0.94} [INFO|2024-12-28 20:45:24] logging.py:157 >> {'loss': 0.7727, 'learning_rate': 1.5499e-04, 'epoch': 0.94} [INFO|2024-12-28 
20:45:30] logging.py:157 >> {'loss': 0.7410, 'learning_rate': 1.5429e-04, 'epoch': 0.95} [INFO|2024-12-28 20:45:36] logging.py:157 >> {'loss': 0.6919, 'learning_rate': 1.5358e-04, 'epoch': 0.96} [INFO|2024-12-28 20:45:36] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-600 [INFO|2024-12-28 20:45:36] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:45:36] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:45:36] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-600/tokenizer_config.json [INFO|2024-12-28 20:45:36] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-600/special_tokens_map.json [INFO|2024-12-28 20:45:44] logging.py:157 >> {'loss': 0.7163, 'learning_rate': 1.5287e-04, 'epoch': 0.97} [INFO|2024-12-28 20:45:48] logging.py:157 >> {'loss': 0.8152, 'learning_rate': 1.5216e-04, 'epoch': 0.98} [INFO|2024-12-28 20:45:54] logging.py:157 >> {'loss': 0.6709, 'learning_rate': 1.5144e-04, 'epoch': 0.98} [INFO|2024-12-28 20:46:00] logging.py:157 >> {'loss': 0.6527, 'learning_rate': 1.5072e-04, 'epoch': 0.99} [INFO|2024-12-28 20:46:05] logging.py:157 >> {'loss': 0.8194, 'learning_rate': 1.5000e-04, 'epoch': 1.00} [INFO|2024-12-28 20:46:10] logging.py:157 >> {'loss': 0.6627, 'learning_rate': 1.4927e-04, 'epoch': 1.01} [INFO|2024-12-28 20:46:16] logging.py:157 >> {'loss': 0.6366, 'learning_rate': 1.4854e-04, 'epoch': 1.02} [INFO|2024-12-28 20:46:21] logging.py:157 >> {'loss': 0.6717, 'learning_rate': 1.4781e-04, 'epoch': 1.02} [INFO|2024-12-28 20:46:26] logging.py:157 >> {'loss': 0.6483, 'learning_rate': 1.4707e-04, 'epoch': 1.03} [INFO|2024-12-28 20:46:31] logging.py:157 >> {'loss': 0.6151, 'learning_rate': 1.4633e-04, 'epoch': 1.04} [INFO|2024-12-28 20:46:38] logging.py:157 >> {'loss': 0.6707, 'learning_rate': 1.4559e-04, 'epoch': 1.05} [INFO|2024-12-28 20:46:42] logging.py:157 >> {'loss': 0.6125, 'learning_rate': 1.4484e-04, 'epoch': 1.06} [INFO|2024-12-28 20:46:48] logging.py:157 >> {'loss': 0.6206, 'learning_rate': 1.4409e-04, 'epoch': 1.06} [INFO|2024-12-28 20:46:54] logging.py:157 >> {'loss': 0.6161, 'learning_rate': 1.4333e-04, 'epoch': 1.07} [INFO|2024-12-28 20:47:00] logging.py:157 >> {'loss': 0.6583, 'learning_rate': 1.4258e-04, 'epoch': 1.08} [INFO|2024-12-28 20:47:05] logging.py:157 >> {'loss': 0.6222, 'learning_rate': 1.4182e-04, 'epoch': 1.09} [INFO|2024-12-28 20:47:10] logging.py:157 >> {'loss': 0.7160, 'learning_rate': 1.4106e-04, 'epoch': 1.10} [INFO|2024-12-28 
20:47:16] logging.py:157 >> {'loss': 0.6198, 'learning_rate': 1.4029e-04, 'epoch': 1.10} [INFO|2024-12-28 20:47:24] logging.py:157 >> {'loss': 0.6389, 'learning_rate': 1.3952e-04, 'epoch': 1.11} [INFO|2024-12-28 20:47:30] logging.py:157 >> {'loss': 0.6842, 'learning_rate': 1.3875e-04, 'epoch': 1.12} [INFO|2024-12-28 20:47:30] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-700 [INFO|2024-12-28 20:47:30] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:47:30] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:47:30] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-700/tokenizer_config.json [INFO|2024-12-28 20:47:30] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-700/special_tokens_map.json [INFO|2024-12-28 20:47:38] logging.py:157 >> {'loss': 0.6071, 'learning_rate': 1.3798e-04, 'epoch': 1.13} [INFO|2024-12-28 20:47:46] logging.py:157 >> {'loss': 0.5915, 'learning_rate': 1.3720e-04, 'epoch': 1.14} [INFO|2024-12-28 20:47:51] logging.py:157 >> {'loss': 0.6794, 'learning_rate': 1.3642e-04, 'epoch': 1.14} [INFO|2024-12-28 20:47:57] logging.py:157 >> {'loss': 0.6773, 'learning_rate': 1.3564e-04, 'epoch': 1.15} [INFO|2024-12-28 20:48:04] logging.py:157 >> {'loss': 0.6680, 'learning_rate': 1.3486e-04, 'epoch': 1.16} [INFO|2024-12-28 20:48:12] logging.py:157 >> {'loss': 0.6997, 'learning_rate': 1.3407e-04, 'epoch': 1.17} [INFO|2024-12-28 20:48:20] logging.py:157 >> {'loss': 0.8310, 'learning_rate': 1.3328e-04, 'epoch': 1.18} [INFO|2024-12-28 20:48:27] logging.py:157 >> {'loss': 0.6378, 'learning_rate': 1.3249e-04, 'epoch': 1.18} [INFO|2024-12-28 20:48:32] logging.py:157 >> {'loss': 0.6547, 'learning_rate': 1.3170e-04, 'epoch': 1.19} [INFO|2024-12-28 20:48:37] logging.py:157 >> {'loss': 0.5808, 'learning_rate': 1.3090e-04, 'epoch': 1.20} [INFO|2024-12-28 20:48:43] logging.py:157 >> {'loss': 0.5582, 'learning_rate': 1.3010e-04, 'epoch': 1.21} [INFO|2024-12-28 20:48:50] logging.py:157 >> {'loss': 0.5801, 'learning_rate': 1.2930e-04, 'epoch': 1.22} [INFO|2024-12-28 20:48:56] logging.py:157 >> {'loss': 0.6500, 'learning_rate': 1.2850e-04, 'epoch': 1.22} [INFO|2024-12-28 20:49:03] logging.py:157 >> {'loss': 0.6627, 'learning_rate': 1.2770e-04, 'epoch': 1.23} [INFO|2024-12-28 20:49:08] logging.py:157 >> {'loss': 0.5603, 'learning_rate': 1.2689e-04, 'epoch': 1.24} [INFO|2024-12-28 20:49:13] logging.py:157 >> {'loss': 0.6525, 'learning_rate': 1.2608e-04, 'epoch': 1.25} [INFO|2024-12-28 
20:49:20] logging.py:157 >> {'loss': 0.6731, 'learning_rate': 1.2527e-04, 'epoch': 1.26} [INFO|2024-12-28 20:49:27] logging.py:157 >> {'loss': 0.6255, 'learning_rate': 1.2446e-04, 'epoch': 1.26} [INFO|2024-12-28 20:49:33] logging.py:157 >> {'loss': 0.6585, 'learning_rate': 1.2365e-04, 'epoch': 1.27} [INFO|2024-12-28 20:49:41] logging.py:157 >> {'loss': 0.5996, 'learning_rate': 1.2284e-04, 'epoch': 1.28} [INFO|2024-12-28 20:49:41] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-800 [INFO|2024-12-28 20:49:41] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:49:41] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:49:41] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-800/tokenizer_config.json [INFO|2024-12-28 20:49:41] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-800/special_tokens_map.json [INFO|2024-12-28 20:49:50] logging.py:157 >> {'loss': 0.6355, 'learning_rate': 1.2202e-04, 'epoch': 1.29} [INFO|2024-12-28 20:49:56] logging.py:157 >> {'loss': 0.6615, 'learning_rate': 1.2120e-04, 'epoch': 1.30} [INFO|2024-12-28 20:50:04] logging.py:157 >> {'loss': 0.6096, 'learning_rate': 1.2038e-04, 'epoch': 1.30} [INFO|2024-12-28 20:50:11] logging.py:157 >> {'loss': 0.5984, 'learning_rate': 1.1956e-04, 'epoch': 1.31} [INFO|2024-12-28 20:50:19] logging.py:157 >> {'loss': 0.5569, 'learning_rate': 1.1874e-04, 'epoch': 1.32} [INFO|2024-12-28 20:50:26] logging.py:157 >> {'loss': 0.7088, 'learning_rate': 1.1791e-04, 'epoch': 1.33} [INFO|2024-12-28 20:50:32] logging.py:157 >> {'loss': 0.6731, 'learning_rate': 1.1709e-04, 'epoch': 1.34} [INFO|2024-12-28 20:50:38] logging.py:157 >> {'loss': 0.6188, 'learning_rate': 1.1626e-04, 'epoch': 1.34} [INFO|2024-12-28 20:50:43] logging.py:157 >> {'loss': 0.7004, 'learning_rate': 1.1544e-04, 'epoch': 1.35} [INFO|2024-12-28 20:50:51] logging.py:157 >> {'loss': 0.5884, 'learning_rate': 1.1461e-04, 'epoch': 1.36} [INFO|2024-12-28 20:51:00] logging.py:157 >> {'loss': 0.5739, 'learning_rate': 1.1378e-04, 'epoch': 1.37} [INFO|2024-12-28 20:51:05] logging.py:157 >> {'loss': 0.6435, 'learning_rate': 1.1295e-04, 'epoch': 1.38} [INFO|2024-12-28 20:51:12] logging.py:157 >> {'loss': 0.6897, 'learning_rate': 1.1212e-04, 'epoch': 1.38} [INFO|2024-12-28 20:51:16] logging.py:157 >> {'loss': 0.6641, 'learning_rate': 1.1129e-04, 'epoch': 1.39} [INFO|2024-12-28 20:51:22] logging.py:157 >> {'loss': 0.6273, 'learning_rate': 1.1045e-04, 'epoch': 1.40} [INFO|2024-12-28 
20:51:30] logging.py:157 >> {'loss': 0.6437, 'learning_rate': 1.0962e-04, 'epoch': 1.41} [INFO|2024-12-28 20:51:37] logging.py:157 >> {'loss': 0.6345, 'learning_rate': 1.0879e-04, 'epoch': 1.42} [INFO|2024-12-28 20:51:45] logging.py:157 >> {'loss': 0.5913, 'learning_rate': 1.0795e-04, 'epoch': 1.42} [INFO|2024-12-28 20:51:51] logging.py:157 >> {'loss': 0.6482, 'learning_rate': 1.0711e-04, 'epoch': 1.43} [INFO|2024-12-28 20:51:58] logging.py:157 >> {'loss': 0.6165, 'learning_rate': 1.0628e-04, 'epoch': 1.44} [INFO|2024-12-28 20:51:58] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-900 [INFO|2024-12-28 20:51:58] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:51:58] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:51:58] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-900/tokenizer_config.json [INFO|2024-12-28 20:51:58] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-900/special_tokens_map.json [INFO|2024-12-28 20:52:05] logging.py:157 >> {'loss': 0.6340, 'learning_rate': 1.0544e-04, 'epoch': 1.45} [INFO|2024-12-28 20:52:09] logging.py:157 >> {'loss': 0.6509, 'learning_rate': 1.0461e-04, 'epoch': 1.46} [INFO|2024-12-28 20:52:17] logging.py:157 >> {'loss': 0.6212, 'learning_rate': 1.0377e-04, 'epoch': 1.46} [INFO|2024-12-28 20:52:23] logging.py:157 >> {'loss': 0.7305, 'learning_rate': 1.0293e-04, 'epoch': 1.47} [INFO|2024-12-28 20:52:29] logging.py:157 >> {'loss': 0.6685, 'learning_rate': 1.0209e-04, 'epoch': 1.48} [INFO|2024-12-28 20:52:35] logging.py:157 >> {'loss': 0.6214, 'learning_rate': 1.0126e-04, 'epoch': 1.49} [INFO|2024-12-28 20:52:43] logging.py:157 >> {'loss': 0.6035, 'learning_rate': 1.0042e-04, 'epoch': 1.50} [INFO|2024-12-28 20:52:49] logging.py:157 >> {'loss': 0.5868, 'learning_rate': 9.9581e-05, 'epoch': 1.50} [INFO|2024-12-28 20:52:55] logging.py:157 >> {'loss': 0.6003, 'learning_rate': 9.8743e-05, 'epoch': 1.51} [INFO|2024-12-28 20:53:01] logging.py:157 >> {'loss': 0.5854, 'learning_rate': 9.7906e-05, 'epoch': 1.52} [INFO|2024-12-28 20:53:09] logging.py:157 >> {'loss': 0.5882, 'learning_rate': 9.7068e-05, 'epoch': 1.53} [INFO|2024-12-28 20:53:14] logging.py:157 >> {'loss': 0.7187, 'learning_rate': 9.6231e-05, 'epoch': 1.54} [INFO|2024-12-28 20:53:19] logging.py:157 >> {'loss': 0.6156, 'learning_rate': 9.5394e-05, 'epoch': 1.54} [INFO|2024-12-28 20:53:23] logging.py:157 >> {'loss': 0.6488, 'learning_rate': 9.4557e-05, 'epoch': 1.55} [INFO|2024-12-28 
20:53:30] logging.py:157 >> {'loss': 0.6601, 'learning_rate': 9.3721e-05, 'epoch': 1.56} [INFO|2024-12-28 20:53:37] logging.py:157 >> {'loss': 0.5968, 'learning_rate': 9.2885e-05, 'epoch': 1.57} [INFO|2024-12-28 20:53:42] logging.py:157 >> {'loss': 0.7034, 'learning_rate': 9.2050e-05, 'epoch': 1.58} [INFO|2024-12-28 20:53:46] logging.py:157 >> {'loss': 0.5973, 'learning_rate': 9.1215e-05, 'epoch': 1.58} [INFO|2024-12-28 20:53:50] logging.py:157 >> {'loss': 0.7877, 'learning_rate': 9.0381e-05, 'epoch': 1.59} [INFO|2024-12-28 20:53:57] logging.py:157 >> {'loss': 0.6440, 'learning_rate': 8.9547e-05, 'epoch': 1.60} [INFO|2024-12-28 20:53:57] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1000 [INFO|2024-12-28 20:53:57] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:53:57] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:53:57] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1000/tokenizer_config.json [INFO|2024-12-28 20:53:57] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1000/special_tokens_map.json [INFO|2024-12-28 20:54:04] logging.py:157 >> {'loss': 0.6678, 'learning_rate': 8.8714e-05, 'epoch': 1.61} [INFO|2024-12-28 20:54:09] logging.py:157 >> {'loss': 0.6088, 'learning_rate': 8.7882e-05, 'epoch': 1.62} [INFO|2024-12-28 20:54:16] logging.py:157 >> {'loss': 0.6219, 'learning_rate': 8.7051e-05, 'epoch': 1.62} [INFO|2024-12-28 20:54:21] logging.py:157 >> {'loss': 0.6698, 'learning_rate': 8.6221e-05, 'epoch': 1.63} [INFO|2024-12-28 20:54:29] logging.py:157 >> {'loss': 0.6207, 'learning_rate': 8.5392e-05, 'epoch': 1.64} [INFO|2024-12-28 20:54:36] logging.py:157 >> {'loss': 0.6260, 'learning_rate': 8.4563e-05, 'epoch': 1.65} [INFO|2024-12-28 20:54:42] logging.py:157 >> {'loss': 0.6972, 'learning_rate': 8.3736e-05, 'epoch': 1.66} [INFO|2024-12-28 20:54:47] logging.py:157 >> {'loss': 0.6282, 'learning_rate': 8.2910e-05, 'epoch': 1.66} [INFO|2024-12-28 20:54:53] logging.py:157 >> {'loss': 0.6219, 'learning_rate': 8.2085e-05, 'epoch': 1.67} [INFO|2024-12-28 20:55:00] logging.py:157 >> {'loss': 0.6220, 'learning_rate': 8.1262e-05, 'epoch': 1.68} [INFO|2024-12-28 20:55:06] logging.py:157 >> {'loss': 0.5801, 'learning_rate': 8.0440e-05, 'epoch': 1.69} [INFO|2024-12-28 20:55:13] logging.py:157 >> {'loss': 0.5980, 'learning_rate': 7.9619e-05, 'epoch': 1.70} [INFO|2024-12-28 20:55:18] logging.py:157 >> {'loss': 0.6990, 'learning_rate': 7.8799e-05, 'epoch': 1.70} [INFO|2024-12-28 
20:55:23] logging.py:157 >> {'loss': 0.5882, 'learning_rate': 7.7981e-05, 'epoch': 1.71} [INFO|2024-12-28 20:55:31] logging.py:157 >> {'loss': 0.5321, 'learning_rate': 7.7165e-05, 'epoch': 1.72} [INFO|2024-12-28 20:55:39] logging.py:157 >> {'loss': 0.6647, 'learning_rate': 7.6350e-05, 'epoch': 1.73} [INFO|2024-12-28 20:55:45] logging.py:157 >> {'loss': 0.6280, 'learning_rate': 7.5537e-05, 'epoch': 1.74} [INFO|2024-12-28 20:55:52] logging.py:157 >> {'loss': 0.6262, 'learning_rate': 7.4726e-05, 'epoch': 1.74} [INFO|2024-12-28 20:55:58] logging.py:157 >> {'loss': 0.6131, 'learning_rate': 7.3916e-05, 'epoch': 1.75} [INFO|2024-12-28 20:56:04] logging.py:157 >> {'loss': 0.6494, 'learning_rate': 7.3108e-05, 'epoch': 1.76} [INFO|2024-12-28 20:56:04] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1100 [INFO|2024-12-28 20:56:05] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:56:05] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:56:05] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1100/tokenizer_config.json [INFO|2024-12-28 20:56:05] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1100/special_tokens_map.json [INFO|2024-12-28 20:56:12] logging.py:157 >> {'loss': 0.5514, 'learning_rate': 7.2302e-05, 'epoch': 1.77} [INFO|2024-12-28 20:56:20] logging.py:157 >> {'loss': 0.5823, 'learning_rate': 7.1498e-05, 'epoch': 1.78} [INFO|2024-12-28 20:56:25] logging.py:157 >> {'loss': 0.7207, 'learning_rate': 7.0696e-05, 'epoch': 1.78} [INFO|2024-12-28 20:56:32] logging.py:157 >> {'loss': 0.7006, 'learning_rate': 6.9896e-05, 'epoch': 1.79} [INFO|2024-12-28 20:56:37] logging.py:157 >> {'loss': 0.6222, 'learning_rate': 6.9098e-05, 'epoch': 1.80} [INFO|2024-12-28 20:56:43] logging.py:157 >> {'loss': 0.6569, 'learning_rate': 6.8303e-05, 'epoch': 1.81} [INFO|2024-12-28 20:56:51] logging.py:157 >> {'loss': 0.5430, 'learning_rate': 6.7509e-05, 'epoch': 1.82} [INFO|2024-12-28 20:56:57] logging.py:157 >> {'loss': 0.6334, 'learning_rate': 6.6718e-05, 'epoch': 1.82} [INFO|2024-12-28 20:57:05] logging.py:157 >> {'loss': 0.6701, 'learning_rate': 6.5929e-05, 'epoch': 1.83} [INFO|2024-12-28 20:57:12] logging.py:157 >> {'loss': 0.6216, 'learning_rate': 6.5143e-05, 'epoch': 1.84} [INFO|2024-12-28 20:57:20] logging.py:157 >> {'loss': 0.5877, 'learning_rate': 6.4359e-05, 'epoch': 1.85} [INFO|2024-12-28 20:57:27] logging.py:157 >> {'loss': 0.6256, 'learning_rate': 6.3577e-05, 'epoch': 1.86} [INFO|2024-12-28 
20:57:31] logging.py:157 >> {'loss': 0.7062, 'learning_rate': 6.2798e-05, 'epoch': 1.86} [INFO|2024-12-28 20:57:38] logging.py:157 >> {'loss': 0.6304, 'learning_rate': 6.2022e-05, 'epoch': 1.87} [INFO|2024-12-28 20:57:42] logging.py:157 >> {'loss': 0.7695, 'learning_rate': 6.1248e-05, 'epoch': 1.88} [INFO|2024-12-28 20:57:49] logging.py:157 >> {'loss': 0.5723, 'learning_rate': 6.0478e-05, 'epoch': 1.89} [INFO|2024-12-28 20:57:54] logging.py:157 >> {'loss': 0.6847, 'learning_rate': 5.9709e-05, 'epoch': 1.90} [INFO|2024-12-28 20:58:01] logging.py:157 >> {'loss': 0.6618, 'learning_rate': 5.8944e-05, 'epoch': 1.90} [INFO|2024-12-28 20:58:07] logging.py:157 >> {'loss': 0.6275, 'learning_rate': 5.8182e-05, 'epoch': 1.91} [INFO|2024-12-28 20:58:14] logging.py:157 >> {'loss': 0.5617, 'learning_rate': 5.7422e-05, 'epoch': 1.92} [INFO|2024-12-28 20:58:14] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1200 [INFO|2024-12-28 20:58:15] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 20:58:15] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 20:58:15] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1200/tokenizer_config.json [INFO|2024-12-28 20:58:15] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1200/special_tokens_map.json [INFO|2024-12-28 20:58:21] logging.py:157 >> {'loss': 0.6278, 'learning_rate': 5.6666e-05, 'epoch': 1.93} [INFO|2024-12-28 20:58:26] logging.py:157 >> {'loss': 0.6713, 'learning_rate': 5.5912e-05, 'epoch': 1.94} [INFO|2024-12-28 20:58:30] logging.py:157 >> {'loss': 0.6113, 'learning_rate': 5.5162e-05, 'epoch': 1.94} [INFO|2024-12-28 20:58:37] logging.py:157 >> {'loss': 0.5587, 'learning_rate': 5.4414e-05, 'epoch': 1.95} [INFO|2024-12-28 20:58:44] logging.py:157 >> {'loss': 0.5601, 'learning_rate': 5.3670e-05, 'epoch': 1.96} [INFO|2024-12-28 20:58:49] logging.py:157 >> {'loss': 0.5941, 'learning_rate': 5.2930e-05, 'epoch': 1.97} [INFO|2024-12-28 20:58:55] logging.py:157 >> {'loss': 0.6285, 'learning_rate': 5.2192e-05, 'epoch': 1.98} [INFO|2024-12-28 20:59:01] logging.py:157 >> {'loss': 0.6516, 'learning_rate': 5.1458e-05, 'epoch': 1.98} [INFO|2024-12-28 20:59:10] logging.py:157 >> {'loss': 0.5904, 'learning_rate': 5.0727e-05, 'epoch': 1.99} [INFO|2024-12-28 20:59:17] logging.py:157 >> {'loss': 0.6190, 'learning_rate': 5.0000e-05, 'epoch': 2.00} [INFO|2024-12-28 20:59:21] logging.py:157 >> {'loss': 0.6058, 'learning_rate': 4.9276e-05, 'epoch': 2.01} [INFO|2024-12-28 
20:59:26] logging.py:157 >> {'loss': 0.6248, 'learning_rate': 4.8556e-05, 'epoch': 2.02} [INFO|2024-12-28 20:59:32] logging.py:157 >> {'loss': 0.5247, 'learning_rate': 4.7839e-05, 'epoch': 2.02} [INFO|2024-12-28 20:59:40] logging.py:157 >> {'loss': 0.5439, 'learning_rate': 4.7127e-05, 'epoch': 2.03} [INFO|2024-12-28 20:59:46] logging.py:157 >> {'loss': 0.4491, 'learning_rate': 4.6417e-05, 'epoch': 2.04} [INFO|2024-12-28 20:59:54] logging.py:157 >> {'loss': 0.5200, 'learning_rate': 4.5712e-05, 'epoch': 2.05} [INFO|2024-12-28 21:00:01] logging.py:157 >> {'loss': 0.5259, 'learning_rate': 4.5010e-05, 'epoch': 2.06} [INFO|2024-12-28 21:00:08] logging.py:157 >> {'loss': 0.5025, 'learning_rate': 4.4312e-05, 'epoch': 2.06} [INFO|2024-12-28 21:00:14] logging.py:157 >> {'loss': 0.4772, 'learning_rate': 4.3619e-05, 'epoch': 2.07} [INFO|2024-12-28 21:00:19] logging.py:157 >> {'loss': 0.5945, 'learning_rate': 4.2929e-05, 'epoch': 2.08} [INFO|2024-12-28 21:00:19] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1300 [INFO|2024-12-28 21:00:20] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json [INFO|2024-12-28 21:00:20] configuration_utils.py:746 >> Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 16, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": { "factor": 32.0, "high_freq_factor": 4.0, "low_freq_factor": 1.0, "original_max_position_embeddings": 8192, "rope_type": "llama3" }, "rope_theta": 500000.0, "tie_word_embeddings": true, "torch_dtype": "bfloat16", "transformers_version": "4.46.1", "use_cache": true, "vocab_size": 128256 } [INFO|2024-12-28 21:00:20] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1300/tokenizer_config.json [INFO|2024-12-28 21:00:20] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1300/special_tokens_map.json [INFO|2024-12-28 21:00:28] logging.py:157 >> {'loss': 0.4813, 'learning_rate': 4.2243e-05, 'epoch': 2.09} [INFO|2024-12-28 21:00:34] logging.py:157 >> {'loss': 0.5315, 'learning_rate': 4.1561e-05, 'epoch': 2.10} [INFO|2024-12-28 21:00:39] logging.py:157 >> {'loss': 0.5591, 'learning_rate': 4.0883e-05, 'epoch': 2.10} [INFO|2024-12-28 21:00:45] logging.py:157 >> {'loss': 0.6050, 'learning_rate': 4.0210e-05, 'epoch': 2.11} [INFO|2024-12-28 21:00:53] logging.py:157 >> {'loss': 0.4955, 'learning_rate': 3.9540e-05, 'epoch': 2.12} [INFO|2024-12-28 21:00:58] logging.py:157 >> {'loss': 0.5757, 'learning_rate': 3.8875e-05, 'epoch': 2.13} [INFO|2024-12-28 21:01:05] logging.py:157 >> {'loss': 0.5313, 'learning_rate': 3.8214e-05, 'epoch': 2.14} [INFO|2024-12-28 21:01:11] logging.py:157 >> {'loss': 0.5904, 'learning_rate': 3.7557e-05, 'epoch': 2.14} [INFO|2024-12-28 21:01:18] logging.py:157 >> {'loss': 0.4679, 'learning_rate': 3.6905e-05, 'epoch': 2.15} [INFO|2024-12-28 21:01:22] logging.py:157 >> {'loss': 0.5235, 'learning_rate': 3.6258e-05, 'epoch': 2.16} [INFO|2024-12-28 
[INFO|2024-12-28 21:01:27] logging.py:157 >> {'loss': 0.5797, 'learning_rate': 3.5614e-05, 'epoch': 2.17}
[INFO|2024-12-28 21:01:32] logging.py:157 >> {'loss': 0.5772, 'learning_rate': 3.4976e-05, 'epoch': 2.18}
[INFO|2024-12-28 21:01:38] logging.py:157 >> {'loss': 0.5316, 'learning_rate': 3.4341e-05, 'epoch': 2.18}
[INFO|2024-12-28 21:01:44] logging.py:157 >> {'loss': 0.5646, 'learning_rate': 3.3712e-05, 'epoch': 2.19}
[INFO|2024-12-28 21:01:49] logging.py:157 >> {'loss': 0.5431, 'learning_rate': 3.3087e-05, 'epoch': 2.20}
[INFO|2024-12-28 21:01:54] logging.py:157 >> {'loss': 0.5403, 'learning_rate': 3.2467e-05, 'epoch': 2.21}
[INFO|2024-12-28 21:01:59] logging.py:157 >> {'loss': 0.5329, 'learning_rate': 3.1851e-05, 'epoch': 2.22}
[INFO|2024-12-28 21:02:07] logging.py:157 >> {'loss': 0.5696, 'learning_rate': 3.1241e-05, 'epoch': 2.22}
[INFO|2024-12-28 21:02:12] logging.py:157 >> {'loss': 0.5900, 'learning_rate': 3.0635e-05, 'epoch': 2.23}
[INFO|2024-12-28 21:02:20] logging.py:157 >> {'loss': 0.5116, 'learning_rate': 3.0034e-05, 'epoch': 2.24}
[INFO|2024-12-28 21:02:20] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1400
[INFO|2024-12-28 21:02:20] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:02:20] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1400/tokenizer_config.json
[INFO|2024-12-28 21:02:20] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1400/special_tokens_map.json
[INFO|2024-12-28 21:02:27] logging.py:157 >> {'loss': 0.5783, 'learning_rate': 2.9438e-05, 'epoch': 2.25}
[INFO|2024-12-28 21:02:33] logging.py:157 >> {'loss': 0.6259, 'learning_rate': 2.8846e-05, 'epoch': 2.26}
[INFO|2024-12-28 21:02:41] logging.py:157 >> {'loss': 0.5759, 'learning_rate': 2.8260e-05, 'epoch': 2.26}
[INFO|2024-12-28 21:02:47] logging.py:157 >> {'loss': 0.4970, 'learning_rate': 2.7679e-05, 'epoch': 2.27}
[INFO|2024-12-28 21:02:53] logging.py:157 >> {'loss': 0.6190, 'learning_rate': 2.7103e-05, 'epoch': 2.28}
[INFO|2024-12-28 21:02:57] logging.py:157 >> {'loss': 0.5858, 'learning_rate': 2.6532e-05, 'epoch': 2.29}
[INFO|2024-12-28 21:03:04] logging.py:157 >> {'loss': 0.6291, 'learning_rate': 2.5966e-05, 'epoch': 2.30}
[INFO|2024-12-28 21:03:10] logging.py:157 >> {'loss': 0.6418, 'learning_rate': 2.5406e-05, 'epoch': 2.30}
[INFO|2024-12-28 21:03:17] logging.py:157 >> {'loss': 0.5483, 'learning_rate': 2.4851e-05, 'epoch': 2.31}
[INFO|2024-12-28 21:03:24] logging.py:157 >> {'loss': 0.6071, 'learning_rate': 2.4300e-05, 'epoch': 2.32}
[INFO|2024-12-28 21:03:31] logging.py:157 >> {'loss': 0.5099, 'learning_rate': 2.3756e-05, 'epoch': 2.33}
[INFO|2024-12-28 21:03:35] logging.py:157 >> {'loss': 0.5186, 'learning_rate': 2.3216e-05, 'epoch': 2.34}
[INFO|2024-12-28 21:03:42] logging.py:157 >> {'loss': 0.5043, 'learning_rate': 2.2682e-05, 'epoch': 2.34}
[INFO|2024-12-28 21:03:48] logging.py:157 >> {'loss': 0.6100, 'learning_rate': 2.2154e-05, 'epoch': 2.35}
[INFO|2024-12-28 21:03:55] logging.py:157 >> {'loss': 0.5987, 'learning_rate': 2.1631e-05, 'epoch': 2.36}
[INFO|2024-12-28 21:04:02] logging.py:157 >> {'loss': 0.5212, 'learning_rate': 2.1113e-05, 'epoch': 2.37}
[INFO|2024-12-28 21:04:10] logging.py:157 >> {'loss': 0.4796, 'learning_rate': 2.0601e-05, 'epoch': 2.38}
[INFO|2024-12-28 21:04:17] logging.py:157 >> {'loss': 0.4844, 'learning_rate': 2.0094e-05, 'epoch': 2.38}
[INFO|2024-12-28 21:04:22] logging.py:157 >> {'loss': 0.5085, 'learning_rate': 1.9594e-05, 'epoch': 2.39}
[INFO|2024-12-28 21:04:29] logging.py:157 >> {'loss': 0.4839, 'learning_rate': 1.9098e-05, 'epoch': 2.40}
[INFO|2024-12-28 21:04:29] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1500
[INFO|2024-12-28 21:04:29] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:04:29] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1500/tokenizer_config.json
[INFO|2024-12-28 21:04:29] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1500/special_tokens_map.json
[INFO|2024-12-28 21:04:38] logging.py:157 >> {'loss': 0.5715, 'learning_rate': 1.8609e-05, 'epoch': 2.41}
[INFO|2024-12-28 21:04:42] logging.py:157 >> {'loss': 0.5266, 'learning_rate': 1.8125e-05, 'epoch': 2.42}
[INFO|2024-12-28 21:04:49] logging.py:157 >> {'loss': 0.5422, 'learning_rate': 1.7647e-05, 'epoch': 2.42}
[INFO|2024-12-28 21:04:55] logging.py:157 >> {'loss': 0.5553, 'learning_rate': 1.7174e-05, 'epoch': 2.43}
[INFO|2024-12-28 21:05:03] logging.py:157 >> {'loss': 0.5765, 'learning_rate': 1.6708e-05, 'epoch': 2.44}
[INFO|2024-12-28 21:05:08] logging.py:157 >> {'loss': 0.5490, 'learning_rate': 1.6247e-05, 'epoch': 2.45}
[INFO|2024-12-28 21:05:13] logging.py:157 >> {'loss': 0.4876, 'learning_rate': 1.5792e-05, 'epoch': 2.46}
[INFO|2024-12-28 21:05:19] logging.py:157 >> {'loss': 0.5546, 'learning_rate': 1.5344e-05, 'epoch': 2.46}
[INFO|2024-12-28 21:05:24] logging.py:157 >> {'loss': 0.5356, 'learning_rate': 1.4901e-05, 'epoch': 2.47}
[INFO|2024-12-28 21:05:30] logging.py:157 >> {'loss': 0.5142, 'learning_rate': 1.4464e-05, 'epoch': 2.48}
[INFO|2024-12-28 21:05:35] logging.py:157 >> {'loss': 0.6054, 'learning_rate': 1.4033e-05, 'epoch': 2.49}
[INFO|2024-12-28 21:05:41] logging.py:157 >> {'loss': 0.5294, 'learning_rate': 1.3608e-05, 'epoch': 2.50}
[INFO|2024-12-28 21:05:45] logging.py:157 >> {'loss': 0.5294, 'learning_rate': 1.3189e-05, 'epoch': 2.50}
[INFO|2024-12-28 21:05:53] logging.py:157 >> {'loss': 0.4905, 'learning_rate': 1.2776e-05, 'epoch': 2.51}
[INFO|2024-12-28 21:06:00] logging.py:157 >> {'loss': 0.5186, 'learning_rate': 1.2369e-05, 'epoch': 2.52}
[INFO|2024-12-28 21:06:06] logging.py:157 >> {'loss': 0.4909, 'learning_rate': 1.1969e-05, 'epoch': 2.53}
[INFO|2024-12-28 21:06:11] logging.py:157 >> {'loss': 0.5303, 'learning_rate': 1.1574e-05, 'epoch': 2.54}
[INFO|2024-12-28 21:06:17] logging.py:157 >> {'loss': 0.5169, 'learning_rate': 1.1186e-05, 'epoch': 2.54}
[INFO|2024-12-28 21:06:24] logging.py:157 >> {'loss': 0.5339, 'learning_rate': 1.0804e-05, 'epoch': 2.55}
[INFO|2024-12-28 21:06:29] logging.py:157 >> {'loss': 0.5283, 'learning_rate': 1.0429e-05, 'epoch': 2.56}
[INFO|2024-12-28 21:06:29] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1600
[INFO|2024-12-28 21:06:29] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:06:29] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1600/tokenizer_config.json
[INFO|2024-12-28 21:06:29] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1600/special_tokens_map.json
[INFO|2024-12-28 21:06:36] logging.py:157 >> {'loss': 0.5576, 'learning_rate': 1.0059e-05, 'epoch': 2.57}
[INFO|2024-12-28 21:06:42] logging.py:157 >> {'loss': 0.5136, 'learning_rate': 9.6964e-06, 'epoch': 2.58}
[INFO|2024-12-28 21:06:49] logging.py:157 >> {'loss': 0.5885, 'learning_rate': 9.3397e-06, 'epoch': 2.58}
[INFO|2024-12-28 21:06:57] logging.py:157 >> {'loss': 0.5000, 'learning_rate': 8.9894e-06, 'epoch': 2.59}
[INFO|2024-12-28 21:07:04] logging.py:157 >> {'loss': 0.5325, 'learning_rate': 8.6455e-06, 'epoch': 2.60}
[INFO|2024-12-28 21:07:10] logging.py:157 >> {'loss': 0.5772, 'learning_rate': 8.3079e-06, 'epoch': 2.61}
[INFO|2024-12-28 21:07:17] logging.py:157 >> {'loss': 0.5736, 'learning_rate': 7.9768e-06, 'epoch': 2.62}
[INFO|2024-12-28 21:07:23] logging.py:157 >> {'loss': 0.5199, 'learning_rate': 7.6522e-06, 'epoch': 2.62}
[INFO|2024-12-28 21:07:30] logging.py:157 >> {'loss': 0.5753, 'learning_rate': 7.3340e-06, 'epoch': 2.63}
[INFO|2024-12-28 21:07:35] logging.py:157 >> {'loss': 0.5424, 'learning_rate': 7.0224e-06, 'epoch': 2.64}
[INFO|2024-12-28 21:07:42] logging.py:157 >> {'loss': 0.5555, 'learning_rate': 6.7172e-06, 'epoch': 2.65}
[INFO|2024-12-28 21:07:50] logging.py:157 >> {'loss': 0.4936, 'learning_rate': 6.4186e-06, 'epoch': 2.66}
[INFO|2024-12-28 21:07:56] logging.py:157 >> {'loss': 0.6291, 'learning_rate': 6.1266e-06, 'epoch': 2.66}
[INFO|2024-12-28 21:08:01] logging.py:157 >> {'loss': 0.5197, 'learning_rate': 5.8412e-06, 'epoch': 2.67}
[INFO|2024-12-28 21:08:08] logging.py:157 >> {'loss': 0.5398, 'learning_rate': 5.5624e-06, 'epoch': 2.68}
[INFO|2024-12-28 21:08:13] logging.py:157 >> {'loss': 0.6059, 'learning_rate': 5.2902e-06, 'epoch': 2.69}
[INFO|2024-12-28 21:08:20] logging.py:157 >> {'loss': 0.5310, 'learning_rate': 5.0246e-06, 'epoch': 2.70}
[INFO|2024-12-28 21:08:24] logging.py:157 >> {'loss': 0.6054, 'learning_rate': 4.7657e-06, 'epoch': 2.70}
[INFO|2024-12-28 21:08:31] logging.py:157 >> {'loss': 0.5173, 'learning_rate': 4.5135e-06, 'epoch': 2.71}
[INFO|2024-12-28 21:08:36] logging.py:157 >> {'loss': 0.4944, 'learning_rate': 4.2681e-06, 'epoch': 2.72}
[INFO|2024-12-28 21:08:36] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1700
[INFO|2024-12-28 21:08:37] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:08:37] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1700/tokenizer_config.json
[INFO|2024-12-28 21:08:37] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1700/special_tokens_map.json
[INFO|2024-12-28 21:08:44] logging.py:157 >> {'loss': 0.5183, 'learning_rate': 4.0293e-06, 'epoch': 2.73}
[INFO|2024-12-28 21:08:50] logging.py:157 >> {'loss': 0.6680, 'learning_rate': 3.7972e-06, 'epoch': 2.74}
[INFO|2024-12-28 21:08:56] logging.py:157 >> {'loss': 0.5068, 'learning_rate': 3.5719e-06, 'epoch': 2.74}
[INFO|2024-12-28 21:09:02] logging.py:157 >> {'loss': 0.5823, 'learning_rate': 3.3534e-06, 'epoch': 2.75}
[INFO|2024-12-28 21:09:07] logging.py:157 >> {'loss': 0.6111, 'learning_rate': 3.1417e-06, 'epoch': 2.76}
[INFO|2024-12-28 21:09:13] logging.py:157 >> {'loss': 0.5231, 'learning_rate': 2.9367e-06, 'epoch': 2.77}
[INFO|2024-12-28 21:09:20] logging.py:157 >> {'loss': 0.5236, 'learning_rate': 2.7386e-06, 'epoch': 2.78}
[INFO|2024-12-28 21:09:29] logging.py:157 >> {'loss': 0.5657, 'learning_rate': 2.5473e-06, 'epoch': 2.78}
[INFO|2024-12-28 21:09:34] logging.py:157 >> {'loss': 0.5518, 'learning_rate': 2.3629e-06, 'epoch': 2.79}
[INFO|2024-12-28 21:09:40] logging.py:157 >> {'loss': 0.4908, 'learning_rate': 2.1852e-06, 'epoch': 2.80}
[INFO|2024-12-28 21:09:49] logging.py:157 >> {'loss': 0.5459, 'learning_rate': 2.0145e-06, 'epoch': 2.81}
[INFO|2024-12-28 21:09:56] logging.py:157 >> {'loss': 0.5208, 'learning_rate': 1.8506e-06, 'epoch': 2.82}
[INFO|2024-12-28 21:10:02] logging.py:157 >> {'loss': 0.5824, 'learning_rate': 1.6936e-06, 'epoch': 2.82}
[INFO|2024-12-28 21:10:11] logging.py:157 >> {'loss': 0.5512, 'learning_rate': 1.5436e-06, 'epoch': 2.83}
[INFO|2024-12-28 21:10:18] logging.py:157 >> {'loss': 0.6327, 'learning_rate': 1.4004e-06, 'epoch': 2.84}
[INFO|2024-12-28 21:10:26] logging.py:157 >> {'loss': 0.5292, 'learning_rate': 1.2641e-06, 'epoch': 2.85}
[INFO|2024-12-28 21:10:32] logging.py:157 >> {'loss': 0.5692, 'learning_rate': 1.1348e-06, 'epoch': 2.86}
[INFO|2024-12-28 21:10:39] logging.py:157 >> {'loss': 0.5532, 'learning_rate': 1.0124e-06, 'epoch': 2.86}
[INFO|2024-12-28 21:10:44] logging.py:157 >> {'loss': 0.5314, 'learning_rate': 8.9701e-07, 'epoch': 2.87}
[INFO|2024-12-28 21:10:51] logging.py:157 >> {'loss': 0.5240, 'learning_rate': 7.8853e-07, 'epoch': 2.88}
[INFO|2024-12-28 21:10:51] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1800
[INFO|2024-12-28 21:10:51] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:10:52] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1800/tokenizer_config.json
[INFO|2024-12-28 21:10:52] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1800/special_tokens_map.json
[INFO|2024-12-28 21:10:58] logging.py:157 >> {'loss': 0.5227, 'learning_rate': 6.8701e-07, 'epoch': 2.89}
[INFO|2024-12-28 21:11:04] logging.py:157 >> {'loss': 0.5993, 'learning_rate': 5.9247e-07, 'epoch': 2.90}
[INFO|2024-12-28 21:11:12] logging.py:157 >> {'loss': 0.5618, 'learning_rate': 5.0490e-07, 'epoch': 2.90}
[INFO|2024-12-28 21:11:18] logging.py:157 >> {'loss': 0.5504, 'learning_rate': 4.2431e-07, 'epoch': 2.91}
[INFO|2024-12-28 21:11:24] logging.py:157 >> {'loss': 0.5147, 'learning_rate': 3.5071e-07, 'epoch': 2.92}
[INFO|2024-12-28 21:11:32] logging.py:157 >> {'loss': 0.5564, 'learning_rate': 2.8411e-07, 'epoch': 2.93}
[INFO|2024-12-28 21:11:40] logging.py:157 >> {'loss': 0.5188, 'learning_rate': 2.2450e-07, 'epoch': 2.94}
[INFO|2024-12-28 21:11:45] logging.py:157 >> {'loss': 0.5438, 'learning_rate': 1.7190e-07, 'epoch': 2.94}
[INFO|2024-12-28 21:11:51] logging.py:157 >> {'loss': 0.4962, 'learning_rate': 1.2630e-07, 'epoch': 2.95}
[INFO|2024-12-28 21:11:58] logging.py:157 >> {'loss': 0.5021, 'learning_rate': 8.7717e-08, 'epoch': 2.96}
[INFO|2024-12-28 21:12:03] logging.py:157 >> {'loss': 0.5277, 'learning_rate': 5.6142e-08, 'epoch': 2.97}
[INFO|2024-12-28 21:12:06] logging.py:157 >> {'loss': 0.5764, 'learning_rate': 3.1581e-08, 'epoch': 2.98}
[INFO|2024-12-28 21:12:14] logging.py:157 >> {'loss': 0.5408, 'learning_rate': 1.4036e-08, 'epoch': 2.98}
[INFO|2024-12-28 21:12:23] logging.py:157 >> {'loss': 0.5014, 'learning_rate': 3.5092e-09, 'epoch': 2.99}
[INFO|2024-12-28 21:12:29] logging.py:157 >> {'loss': 0.6182, 'learning_rate': 0.0000e+00, 'epoch': 3.00}
[INFO|2024-12-28 21:12:29] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1875
[INFO|2024-12-28 21:12:29] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:12:30] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1875/tokenizer_config.json
[INFO|2024-12-28 21:12:30] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/checkpoint-1875/special_tokens_map.json
[INFO|2024-12-28 21:12:31] trainer.py:2584 >> Training completed. Do not forget to share your model on huggingface.co/models =)
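Aside: training finishes with the loss hovering around 0.5-0.6. If the console output above was captured to a text file, the loss curve can be recovered directly from the `{'loss': ..., 'learning_rate': ..., 'epoch': ...}` entries with a small parser; the path train.log below is a placeholder for wherever the output was saved, not a file this run is known to have produced.

```python
import re

# Hypothetical path to a saved copy of the console output shown above.
LOG_PATH = "train.log"

# Matches entries such as: {'loss': 0.6182, 'learning_rate': 0.0000e+00, 'epoch': 3.00}
ENTRY = re.compile(
    r"\{'loss': ([\d.]+), 'learning_rate': ([\d.e+-]+), 'epoch': ([\d.]+)\}"
)

records = []
with open(LOG_PATH) as f:
    for line in f:
        m = ENTRY.search(line)
        if m:
            epoch, loss, lr = float(m.group(3)), float(m.group(1)), float(m.group(2))
            records.append((epoch, loss, lr))

# Print the first and last logged points to see the overall trend.
if records:
    print("first:", records[0])
    print("last:", records[-1])
```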
[INFO|2024-12-28 21:12:31] trainer.py:3801 >> Saving model checkpoint to saves/Llama-3.2-1B/lora/llama3.2-1b
[INFO|2024-12-28 21:12:31] configuration_utils.py:679 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B/snapshots/4e20de362430cd3b72f300e6b0f18e50e7166e08/config.json
[INFO|2024-12-28 21:12:31] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/tokenizer_config.json
[INFO|2024-12-28 21:12:31] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Llama-3.2-1B/lora/llama3.2-1b/special_tokens_map.json
[WARNING|2024-12-28 21:12:32] logging.py:162 >> No metric eval_loss to plot.
[WARNING|2024-12-28 21:12:32] logging.py:162 >> No metric eval_accuracy to plot.
[INFO|2024-12-28 21:12:32] modelcard.py:449 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
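Aside: the run ends with the adapter and tokenizer written to saves/Llama-3.2-1B/lora/llama3.2-1b; the eval_loss/eval_accuracy warnings only indicate that no validation metrics were logged, so there was nothing to plot. A minimal sketch of how the result could be loaded for inference, assuming the save directory contains a standard PEFT-format LoRA adapter for the meta-llama/Llama-3.2-1B base model (the log does not show the adapter files themselves, so this is an assumption, not a command taken from the log):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-3.2-1B"
ADAPTER_DIR = "saves/Llama-3.2-1B/lora/llama3.2-1b"  # final save directory from the log

# Tokenizer files were saved alongside the adapter according to the log above.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_DIR)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_DIR)  # assumes a PEFT LoRA adapter at this path
model.eval()

# Arbitrary math-style prompt, chosen only because the run fine-tuned on MathInstruct.
prompt = "A train travels 60 km in 1.5 hours. What is its average speed?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```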