Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
This is a custom 4-bit imatrix quant made to run optimally on a MacBook with 8 GB of RAM.
For use with llama.cpp https://github.com/ggerganov/llama.cpp
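A minimal sketch of running this quant with llama.cpp's CLI. The GGUF filename below is a placeholder (the actual file name in this repo may differ), and the build step may vary by platform; see the llama.cpp README for details.

```shell
# Clone and build llama.cpp (Metal acceleration is enabled by default on macOS)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run the quantized model; model.gguf is a placeholder for the actual file
# from this repo. Mistral-Instruct expects the [INST] ... [/INST] prompt format.
./llama-cli -m model.gguf -c 2048 -n 128 -p "[INST] Write a haiku about autumn. [/INST]"
```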
Model size: 7.24B params
Architecture: llama