# meta_llama_3_Magiccoder_evol_10k_ortho_eye
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2130
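Assuming the reported loss is the standard mean token-level cross-entropy, this corresponds to a validation perplexity of roughly exp(1.2130) ≈ 3.36.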
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
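For readers who want to reproduce this configuration, here is a minimal sketch of how these values map onto `transformers.TrainingArguments`. This is a reconstruction, not the author's script: `output_dir` and `optim` are assumptions, and the fractional warmup value of 0.02 is interpreted as `warmup_ratio`, since that is how the Trainer API expresses warmup as a fraction of total steps.

```python
# Minimal sketch reconstructing the reported hyperparameters (assumptions noted inline).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="meta_llama_3_Magiccoder_evol_10k_ortho_eye",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size: 8 * 8 = 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,              # the card's fractional warmup value
    num_train_epochs=1,
    seed=42,
    optim="adamw_torch",            # assumed; betas=(0.9, 0.999) and eps=1e-8 are the defaults
)
```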
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2518 | 0.0259 | 4 | 1.3478 |
| 1.2877 | 0.0518 | 8 | 1.2942 |
| 1.2458 | 0.0777 | 12 | 1.2617 |
| 1.2587 | 0.1036 | 16 | 1.2500 |
| 1.2636 | 0.1296 | 20 | 1.2422 |
| 1.1773 | 0.1555 | 24 | 1.2396 |
| 1.2317 | 0.1814 | 28 | 1.2360 |
| 1.2043 | 0.2073 | 32 | 1.2328 |
| 1.2271 | 0.2332 | 36 | 1.2305 |
| 1.2779 | 0.2591 | 40 | 1.2302 |
| 1.2349 | 0.2850 | 44 | 1.2278 |
| 1.1637 | 0.3109 | 48 | 1.2280 |
| 1.2164 | 0.3368 | 52 | 1.2256 |
| 1.2552 | 0.3628 | 56 | 1.2239 |
| 1.2340 | 0.3887 | 60 | 1.2228 |
| 1.2949 | 0.4146 | 64 | 1.2213 |
| 1.1459 | 0.4405 | 68 | 1.2208 |
| 1.2171 | 0.4664 | 72 | 1.2189 |
| 1.1636 | 0.4923 | 76 | 1.2213 |
| 1.2506 | 0.5182 | 80 | 1.2182 |
| 1.1753 | 0.5441 | 84 | 1.2163 |
| 1.2175 | 0.5700 | 88 | 1.2161 |
| 1.3172 | 0.5960 | 92 | 1.2146 |
| 1.1309 | 0.6219 | 96 | 1.2159 |
| 1.2174 | 0.6478 | 100 | 1.2151 |
| 1.1373 | 0.6737 | 104 | 1.2157 |
| 1.1638 | 0.6996 | 108 | 1.2158 |
| 1.1736 | 0.7255 | 112 | 1.2153 |
| 1.2293 | 0.7514 | 116 | 1.2148 |
| 1.2124 | 0.7773 | 120 | 1.2143 |
| 1.2335 | 0.8032 | 124 | 1.2138 |
| 1.2232 | 0.8291 | 128 | 1.2136 |
| 1.2183 | 0.8551 | 132 | 1.2132 |
| 1.1804 | 0.8810 | 136 | 1.2130 |
| 1.1769 | 0.9069 | 140 | 1.2129 |
| 1.1632 | 0.9328 | 144 | 1.2130 |
| 1.1895 | 0.9587 | 148 | 1.2129 |
| 1.1615 | 0.9846 | 152 | 1.2130 |
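The validation loss flattens near 1.213 over the final third of the epoch, so the single-epoch run appears to have largely converged by the end of training.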
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
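Since this repo contains a PEFT adapter rather than full model weights, loading it should look roughly like the sketch below. This is a minimal example, assuming the adapter is published as `imdatta0/meta_llama_3_Magiccoder_evol_10k_ortho_eye` and that you have access to the gated Llama 3 base weights; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the base model, then attach this PEFT adapter.
# Requires: pip install transformers peft accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "imdatta0/meta_llama_3_Magiccoder_evol_10k_ortho_eye"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt and generation settings only.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```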