---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- KingNish/reasoning-base-20k
---
# Chain of thought Llama-3.1-8B
Basically just trained on KingNish/reasoning-base-20k, but with a little **$$CONCLUSION$$** marker added at the end of the reasoning so the final answer can be separated from the chain of thought. My PC is currently broken, so I had to make do with Google Colab.
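A rough sketch of how the marker can be used at inference time: generate as usual with `transformers`, then split the decoded text on `$$CONCLUSION$$`. The repo id below is a placeholder, not the actual Hub id, and the prompt format is just an example.

```python
# Minimal usage sketch (assumes the standard transformers generation API;
# the repo id is a placeholder -- replace it with this model's actual Hub id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Khawn2u/chain-of-thought-llama-3.1-8b"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# The model writes its reasoning first and then the $$CONCLUSION$$ marker,
# so splitting on the marker separates the final answer from the chain of thought.
if "$$CONCLUSION$$" in text:
    reasoning, answer = text.split("$$CONCLUSION$$", 1)
else:
    reasoning, answer = text, ""

print(answer.strip())
```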
## Uploaded model
- **Developed by:** Khawn2u
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.