license: cc-by-nc-4.0
language:
- en
inference: false
tags:
- roleplay
- llama3
- sillytavern
My GGUF-IQ-Imatrix quants for Nitral-AI/Poppy_Porpoise-1.0-L3-8B.
"Isn't Poppy the cutest Porpoise?"
Quantization process:
For future reference, these quants were made after the fixes from #6920 were merged.
Since the original model was already in FP16, the imatrix data was generated from the FP16-GGUF, as were the conversions.
If you notice any issues, let me know in the discussions.
General usage:
Use the latest version of KoboldCpp.
Remember that you can now also use `--flashattention` in KoboldCpp, even with non-RTX cards, for reduced VRAM usage.
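As a rough sketch, a KoboldCpp launch with flash attention enabled might look like this (the quant filename is only an example; substitute whichever file you downloaded):

```shell
# Example launch; filename and context size are placeholders.
python koboldcpp.py \
  --model Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf \
  --contextsize 12288 \
  --flashattention
```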
For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for context sizes up to 12288.
For 12GB VRAM GPUs, the Q5_K_M-imat quant will give you a great size/quality balance.
Resources:
You can find out more about how the quants stack up against one another, and about the quant types, here and here, respectively.
Presets:
Some compatible SillyTavern presets can be found here (New Poppy-1.0 Presets) or here (Virt's Roleplay Presets).
Personal-support:
I apologize for disrupting your experience.
Currently, I'm working on moving to a better internet provider.
If you want and you are able to...
You can spare some change over here (Ko-fi).
Author-support:
You can support the author at their own page.
Original model text information:
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each one to their individual preferences.
Presets in repo folder:
If you want to use vision functionality:
- You must use the latest version of KoboldCpp.
To use this model's multimodal (vision) capabilities, you need to load the specified mmproj file, which can be found here: Llava-MMProj file.
- You can load the mmproj file by using the corresponding section in the interface:
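For command-line use, the mmproj file can also be passed at launch. A hedged sketch (both filenames are placeholders; use your actual quant and the linked mmproj file):

```shell
# Example launch with vision support; filenames are hypothetical.
python koboldcpp.py \
  --model Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf \
  --mmproj your-downloaded-mmproj-file.gguf
```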
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.24 |
| AI2 Reasoning Challenge (25-Shot) | 63.40 |
| HellaSwag (10-Shot)               | 82.89 |
| MMLU (5-Shot)                     | 68.04 |
| TruthfulQA (0-shot)               | 54.12 |
| Winogrande (5-shot)               | 77.90 |
| GSM8k (5-shot)                    | 69.07 |