# Llama3_8B_final_Task2_2.0
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unspecified dataset. It achieves the following results on the evaluation set (a metric-computation sketch follows the list):
- Loss: 0.5334
- Accuracy: 0.9129
- Precision: 0.9050
- Recall: 0.9231
- F1 score: 0.9140
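
The card does not state how these metrics were computed. A common setup is a `compute_metrics` callback passed to the Hugging Face `Trainer`; below is a minimal sketch using scikit-learn, under the assumption (not confirmed by the card) that this is a binary classification task:

```python
# Hedged sketch: one plausible compute_metrics implementation; the card
# does not state the actual code used to produce the reported numbers.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"  # assumption: binary labels
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```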
## Model description
More information needed
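
Since PEFT is listed under the framework versions, this checkpoint is presumably a PEFT (LoRA-style) adapter on top of meta-llama/Meta-Llama-3-8B, and the reported accuracy/precision/recall/F1 suggest a sequence-classification task. A minimal loading sketch under those assumptions (the repo id, task head, and label count are not confirmed by the card):

```python
# Hedged sketch: assumes the repo hosts a PEFT adapter with a binary
# sequence-classification head on meta-llama/Meta-Llama-3-8B.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo_id = "rishavranaut/Llama3_8B_final_task2_2.0"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoPeftModelForSequenceClassification.from_pretrained(
    repo_id,
    num_labels=2,                # assumption: binary task
    torch_dtype=torch.bfloat16,
)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```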
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
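
For reproducibility, these values map directly onto `transformers.TrainingArguments`. A minimal sketch, where `output_dir` and any field not listed above are assumptions rather than values taken from the card:

```python
# Hedged sketch: mirrors the hyperparameters listed above; everything
# else (output_dir, eval/save cadence) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama3_8B_final_Task2_2.0",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas/epsilon matching the card; these are also the defaults
    # for the Trainer's AdamW optimizer.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```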
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1 score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 0.6105        | 0.2725 | 200  | 0.3962          | 0.8486   | 0.8793    | 0.8091 | 0.8427   |
| 0.4553        | 0.5450 | 400  | 0.3388          | 0.8757   | 0.8729    | 0.8803 | 0.8766   |
| 0.4318        | 0.8174 | 600  | 0.3273          | 0.89     | 0.9006    | 0.8775 | 0.8889   |
| 0.3155        | 1.0899 | 800  | 0.3807          | 0.89     | 0.9281    | 0.8462 | 0.8852   |
| 0.3279        | 1.3624 | 1000 | 0.3757          | 0.8943   | 0.9601    | 0.8234 | 0.8865   |
| 0.2451        | 1.6349 | 1200 | 0.3784          | 0.89     | 0.8683    | 0.9202 | 0.8935   |
| 0.2956        | 1.9074 | 1400 | 0.3187          | 0.9143   | 0.9318    | 0.8946 | 0.9128   |
| 0.2107        | 2.1798 | 1600 | 0.3999          | 0.89     | 0.8513    | 0.9459 | 0.8961   |
| 0.1744        | 2.4523 | 1800 | 0.6330          | 0.8857   | 0.9788    | 0.7892 | 0.8738   |
| 0.191         | 2.7248 | 2000 | 0.4101          | 0.91     | 0.9444    | 0.8718 | 0.9067   |
| 0.1378        | 2.9973 | 2200 | 0.4604          | 0.8957   | 0.8582    | 0.9487 | 0.9012   |
| 0.0703        | 3.2698 | 2400 | 0.4276          | 0.9      | 0.8958    | 0.9060 | 0.9008   |
| 0.0582        | 3.5422 | 2600 | 0.5431          | 0.9086   | 0.9527    | 0.8604 | 0.9042   |
| 0.0887        | 3.8147 | 2800 | 0.4993          | 0.9157   | 0.9534    | 0.8746 | 0.9123   |
| 0.0976        | 4.0872 | 3000 | 0.4540          | 0.9157   | 0.9269    | 0.9031 | 0.9149   |
| 0.0179        | 4.3597 | 3200 | 0.5068          | 0.92     | 0.9086    | 0.9345 | 0.9213   |
| 0.0277        | 4.6322 | 3400 | 0.5119          | 0.9157   | 0.9269    | 0.9031 | 0.9149   |
| 0.0231        | 4.9046 | 3600 | 0.5334          | 0.9129   | 0.9050    | 0.9231 | 0.9140   |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1