---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: IE_L3_1000steps_1e7rate_SFT
  results: []
---

# IE_L3_1000steps_1e7rate_SFT

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6772

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2086        | 0.4   | 50   | 2.2064          |
| 2.2198        | 0.8   | 100  | 2.1548          |
| 2.2023        | 1.2   | 150  | 2.0793          |
| 2.0602        | 1.6   | 200  | 2.0114          |
| 2.0378        | 2.0   | 250  | 1.9558          |
| 2.0038        | 2.4   | 300  | 1.8972          |
| 1.9713        | 2.8   | 350  | 1.8398          |
| 1.8103        | 3.2   | 400  | 1.7944          |
| 1.8982        | 3.6   | 450  | 1.7569          |
| 1.7218        | 4.0   | 500  | 1.7267          |
| 1.824         | 4.4   | 550  | 1.7062          |
| 1.7494        | 4.8   | 600  | 1.6925          |
| 1.7574        | 5.2   | 650  | 1.6844          |
| 1.738         | 5.6   | 700  | 1.6798          |
| 1.6533        | 6.0   | 750  | 1.6779          |
| 1.7537        | 6.4   | 800  | 1.6770          |
| 1.7075        | 6.8   | 850  | 1.6770          |
| 1.7128        | 7.2   | 900  | 1.6772          |
| 1.7139        | 7.6   | 950  | 1.6772          |
| 1.7539        | 8.0   | 1000 | 1.6772          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.0.0+cu117
- Datasets 3.0.0
- Tokenizers 0.19.1
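
### Reproducing the training setup (sketch)

The hyperparameters above map directly onto a TRL `SFTConfig`. The sketch below is a hypothetical reconstruction of that setup, not the original training script: the dataset is not documented (it is listed as unknown above), so `"your/dataset"` and the split names are placeholders, and the exact `SFTTrainer` keyword for the tokenizer (`tokenizer=` vs. `processing_class=`) depends on your installed TRL version.

```python
# Minimal sketch of the SFT setup implied by the hyperparameters above.
# ASSUMPTIONS: the training dataset is unknown, so "your/dataset" is a
# placeholder, and the original script may have differed in ways the card
# does not record.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

dataset = load_dataset("your/dataset")  # placeholder: actual dataset unknown

args = SFTConfig(
    output_dir="IE_L3_1000steps_1e7rate_SFT",
    learning_rate=1e-7,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=2,   # total_train_batch_size: 2 * 2 = 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,                  # training_steps: 1000
    adam_beta1=0.9,                  # optimizer: Adam, betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",           # the card logs eval loss every 50 steps
    eval_steps=50,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],    # placeholder split name
    tokenizer=tokenizer,             # renamed to processing_class in newer TRL
)
trainer.train()
```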
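
## How to use (sketch)

The card does not include a usage snippet, so the following is a minimal sketch of loading the checkpoint with standard `transformers` APIs. The hub repo id is an assumption: substitute the namespace this model was actually published under.

```python
# Sketch of standard chat-style inference with this checkpoint.
# ASSUMPTION: "your-namespace/IE_L3_1000steps_1e7rate_SFT" is a placeholder
# repo id; replace it with the model's actual Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/IE_L3_1000steps_1e7rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 Instruct checkpoints ship a chat template, so format the prompt
# with apply_chat_template rather than raw text.
messages = [{"role": "user", "content": "Summarize what SFT is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```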