---
license: apache-2.0
language:
  - en
inference: false
tags:
  - roleplay
  - llama3
  - sillytavern
---

#roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for nbeerbower/llama-3-Stheno-Mahou-8B.

"A potential precious hidden gem, will you polish this rough diamond?"

This is a merge of two very interesting models, aimed at roleplay use.


**Personal support:**
I apologize for any disruption to your experience; I'm currently in the middle of moving to a better internet provider.
If you want to and are able to, you can spare some change over here (Ko-fi).

**Author support:**
You can support the author on their own page.

**Quantization process:**
For future reference, these quants were made after the fixes from #6920 were merged.
Imatrix data was generated from the FP16-GGUF, and the final conversions used the BF16-GGUF for the quantization process.
This was a bit more disk- and compute-intensive, but it hopefully avoided any losses during conversion.
If you notice any issues, let me know in the discussions.
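For reference, below is a minimal sketch of that imatrix-then-quantize pipeline, driving the llama.cpp command-line tools from Python. The file names, calibration text, and output name are placeholders for illustration only, and the binary names vary between llama.cpp versions (`imatrix`/`quantize` in older builds, `llama-imatrix`/`llama-quantize` in newer ones), so adjust to your local build.

```python
import subprocess

# Placeholder local file names -- adjust to your own GGUF conversions.
FP16_GGUF = "llama-3-Stheno-Mahou-8B-FP16.gguf"    # hypothetical name
BF16_GGUF = "llama-3-Stheno-Mahou-8B-BF16.gguf"    # hypothetical name
CALIB_TEXT = "imatrix-calibration.txt"             # hypothetical calibration data
IMATRIX = "imatrix.dat"

# 1. Generate importance-matrix data from the FP16 GGUF.
subprocess.run(
    ["./llama-imatrix", "-m", FP16_GGUF, "-f", CALIB_TEXT, "-o", IMATRIX],
    check=True,
)

# 2. Quantize from the BF16 GGUF, applying the imatrix data.
subprocess.run(
    ["./llama-quantize", "--imatrix", IMATRIX,
     BF16_GGUF, "llama-3-Stheno-Mahou-8B-Q4_K_M-imat.gguf", "Q4_K_M"],
    check=True,
)
```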

**General usage:**
Use the latest version of KoboldCpp.
Remember that you can now use --flashattention on KoboldCpp, even with non-RTX cards, for reduced VRAM usage.
For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for context sizes up to 12288.
For 12GB VRAM GPUs, the Q5_K_M-imat quant will give you a great size/quality balance.
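To illustrate the 8GB VRAM recommendation above, here is a minimal launch sketch for KoboldCpp with the Q4_K_M-imat quant at a 12288 context size. The model file name and layer count are assumptions, and flag spellings can differ between KoboldCpp releases, so verify them with `--help` on your install.

```python
import subprocess

# Hypothetical local file name for the recommended quant.
MODEL = "llama-3-Stheno-Mahou-8B-Q4_K_M-imat.gguf"

# Launch KoboldCpp with the settings suggested above (8GB VRAM scenario).
subprocess.run([
    "python", "koboldcpp.py",
    "--model", MODEL,
    "--contextsize", "12288",
    "--flashattention",   # works on non-RTX cards too, for reduced VRAM usage
    "--gpulayers", "33",  # assumed full offload for an 8B model; lower this if VRAM is tight
], check=True)
```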

**Resources:**
You can find out more about how the quants stack up against each other, and about the quant types themselves, here and here, respectively.

**Presets:**
Some compatible SillyTavern presets can be found here (Virt's Roleplay Presets).

**Original model information:**

# llama-3-Stheno-Mahou-8B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with flammenai/Mahou-1.2-llama3-8B as the base.

### Models Merged

The following models were included in the merge:

- flammenai/Mahou-1.1-llama3-8B
- Sao10K/L3-8B-Stheno-v3.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: flammenai/Mahou-1.1-llama3-8B
  - model: Sao10K/L3-8B-Stheno-v3.1
merge_method: model_stock
base_model: flammenai/Mahou-1.2-llama3-8B
dtype: bfloat16
```
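For anyone who wants to reproduce the merge, here is a minimal sketch that writes the configuration above to a file and runs it through mergekit's YAML entry point. The config file name and output directory are arbitrary placeholders; see the mergekit documentation for the exact options available on your version.

```python
import subprocess

# Write the merge configuration shown above to a local file (placeholder name).
CONFIG = "stheno-mahou-model-stock.yaml"
with open(CONFIG, "w") as f:
    f.write(
        "models:\n"
        "  - model: flammenai/Mahou-1.1-llama3-8B\n"
        "  - model: Sao10K/L3-8B-Stheno-v3.1\n"
        "merge_method: model_stock\n"
        "base_model: flammenai/Mahou-1.2-llama3-8B\n"
        "dtype: bfloat16\n"
    )

# Run the merge; the output directory name is arbitrary.
subprocess.run(
    ["mergekit-yaml", CONFIG, "./llama-3-Stheno-Mahou-8B", "--cuda"],
    check=True,
)
```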