---
tags:
- gguf
- mistral
- conversational
---

<img src="https://i.imgur.com/P68dXux.png" width="400"/>

# Mistral 7B v0.2 iMat GGUF

<h1>Not to be confused with Mistral 7B Instruct v0.2 (this is the latest base release from 3/23)</h1>

Mistral 7B v0.2 iMat GGUF, quantized from fp16 with love.

* Importance matrix .dat file created using groups_merged.txt

* Not sure what to expect from this model by itself, but uploading it to the repo in case anyone else is as curious as I am
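For context, importance matrix .dat files like this are produced with llama.cpp's imatrix tooling and then fed into quantization. A rough sketch of that workflow; binary names, paths, and the target quant type here are illustrative, not the exact commands used for this repo:

```sh
# Sketch: compute an importance matrix from a calibration text,
# then quantize using it (binary names vary across llama.cpp versions).
./imatrix -m mistral-7b-v0.2-f16.gguf -f groups_merged.txt -o imatrix.dat
./quantize --imatrix imatrix.dat mistral-7b-v0.2-f16.gguf mistral-7b-v0.2-Q5_K_M.gguf Q5_K_M
```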

<b>Legacy quants (e.g. Q8_0, Q5_K_M) in this repo have all been enhanced with importance matrix calculation. These quants show improved KL-divergence over their static counterparts.</b>
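KL-divergence here measures how far a quantized model's next-token probability distribution drifts from the fp16 model's; lower means the quant better preserves the original model's behavior. A minimal sketch of the quantity being compared, using made-up toy distributions:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: fp16 model (p) vs. a hypothetical quantized model (q).
p = [0.70, 0.20, 0.10]
q = [0.60, 0.25, 0.15]
print(f"KL(P || Q) = {kl_divergence(p, q):.4f}")
```

Identical distributions give a divergence of exactly zero, so smaller values indicate the quantized model stays closer to fp16.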

All files have been tested for your safety and convenience. No need to clone the entire repo; just pick the quant that's right for you.

For more information on the latest importance matrix quants, see this PR: https://github.com/ggerganov/llama.cpp/pull/5747