# Meta-Llama-3-8B_magiccoder_default

This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.2697
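Since the framework versions below include PEFT, this repository presumably hosts a LoRA/PEFT adapter rather than full model weights. Under that assumption, a minimal inference sketch (the repo id `imdatta0/Meta-Llama-3-8B_magiccoder_default` is taken from this card; adjust dtype and device placement for your hardware):

```python
# Minimal sketch: load the base model, attach the PEFT adapter, generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "imdatta0/Meta-Llama-3-8B_magiccoder_default")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```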

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
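
As a rough reproduction aid, the settings above map onto Hugging Face `TrainingArguments` as sketched below. This is a sketch under assumptions: the actual training script, dataset, and PEFT/LoRA configuration are not published on this card, and `output_dir` is a placeholder.

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# Effective batch size: 8 (per device) * 8 (accumulation steps) = 64.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-8b-magiccoder",  # placeholder, not the author's path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```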

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2592        | 0.0259 | 4    | 1.4263          |
| 1.4281        | 0.0518 | 8    | 1.4063          |
| 1.3795        | 0.0777 | 12   | 1.3824          |
| 1.3751        | 0.1036 | 16   | 1.3937          |
| 1.4053        | 0.1296 | 20   | 1.3523          |
| 1.2927        | 0.1555 | 24   | 1.3474          |
| 1.3619        | 0.1814 | 28   | 1.3529          |
| 1.3533        | 0.2073 | 32   | 1.3629          |
| 1.3627        | 0.2332 | 36   | 1.3636          |
| 1.4408        | 0.2591 | 40   | 1.3531          |
| 1.3744        | 0.2850 | 44   | 1.3395          |
| 1.2658        | 0.3109 | 48   | 1.3364          |
| 1.3364        | 0.3368 | 52   | 1.3400          |
| 1.3765        | 0.3628 | 56   | 1.3391          |
| 1.3427        | 0.3887 | 60   | 1.3370          |
| 1.3975        | 0.4146 | 64   | 1.3329          |
| 1.2595        | 0.4405 | 68   | 1.3325          |
| 1.3291        | 0.4664 | 72   | 1.3312          |
| 1.2702        | 0.4923 | 76   | 1.3323          |
| 1.3527        | 0.5182 | 80   | 1.3213          |
| 1.2799        | 0.5441 | 84   | 1.3154          |
| 1.3082        | 0.5700 | 88   | 1.3099          |
| 1.4042        | 0.5960 | 92   | 1.3089          |
| 1.2221        | 0.6219 | 96   | 1.3048          |
| 1.3079        | 0.6478 | 100  | 1.3017          |
| 1.2165        | 0.6737 | 104  | 1.2970          |
| 1.239         | 0.6996 | 108  | 1.2941          |
| 1.2528        | 0.7255 | 112  | 1.2877          |
| 1.2932        | 0.7514 | 116  | 1.2859          |
| 1.2762        | 0.7773 | 120  | 1.2804          |
| 1.2914        | 0.8032 | 124  | 1.2791          |
| 1.2835        | 0.8291 | 128  | 1.2755          |
| 1.2735        | 0.8551 | 132  | 1.2731          |
| 1.2264        | 0.8810 | 136  | 1.2722          |
| 1.2637        | 0.9069 | 140  | 1.2713          |
| 1.2133        | 0.9328 | 144  | 1.2704          |
| 1.2379        | 0.9587 | 148  | 1.2699          |
| 1.2131        | 0.9846 | 152  | 1.2697          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
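
To check that a local environment matches these pins, a small sketch (assuming the standard pip distribution names for the packages above):

```python
# Print the installed versions of the frameworks listed above.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("peft", "transformers", "torch", "datasets", "tokenizers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```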