---
license: cc-by-nc-4.0
language:
- en
inference: false
tags:
- roleplay
- llama3
- sillytavern
- broken
---

> [!CAUTION]
> # #broken
> 
> **[Use version 0.72 instead.](https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix)**
> 
> **This model is now deprecated since the [author has identified significant issues with it](https://huggingface.co/LWDCLS/LLM-Discussions/discussions/12#665d0331325b11b6fe29835f). It is considered #broken and is kept here only for archival purposes.**
> 
> ![JmoAAPf.png](https://iili.io/JmoAAPf.png)

My GGUF-IQ-Imatrix quants for [**Nitral-AI/Poppy_Porpoise-1.0-L3-8B**](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B).

"Isn't Poppy the cutest [Porpoise](https://g.co/kgs/5C2zP3r)?"

> [!IMPORTANT]
> **Quantization process:** <br>
> For future reference, these quants were made after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) were merged. <br>
> Since the original model was already released in FP16, the imatrix data was generated from the FP16-GGUF, and the quantized conversions were made from it as well. <br> <!-- This was a bit more disk and compute intensive but hopefully avoided any losses during conversion. <br> -->
> If you notice any issues, let me know in the discussions. A rough sketch of the process follows below.
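
For reproducibility, here is a minimal sketch of what such an imatrix quantization pass looks like with the llama.cpp tools. The binary names, file names, and calibration corpus below are illustrative assumptions, not the exact commands used for these quants:

```python
import subprocess

# Illustrative file names (assumptions, not the exact ones used here).
FP16_GGUF = "Poppy_Porpoise-1.0-L3-8B-FP16.gguf"
CALIB_TEXT = "imatrix-calibration.txt"  # calibration corpus for the imatrix
IMATRIX_OUT = "imatrix.dat"

# 1. Generate importance-matrix data directly from the FP16 GGUF.
subprocess.run(
    ["./imatrix", "-m", FP16_GGUF, "-f", CALIB_TEXT, "-o", IMATRIX_OUT],
    check=True,
)

# 2. Quantize the FP16 GGUF using that imatrix data (Q4_K_M as an example).
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX_OUT,
     FP16_GGUF, "Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf", "Q4_K_M"],
    check=True,
)
```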

> [!NOTE]
> **General usage:** <br>
> Use the latest version of **KoboldCpp**. <br>
> Remember that you can now use `--flashattention` on KoboldCpp, even with non-RTX cards, for reduced VRAM usage. <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288. <br>
> For **12GB VRAM** GPUs, the **Q5_K_M-imat** quant will give you a great size/quality balance (a scripted example follows after this note). <br>
>
> **Resources:** <br>
> You can find out more about how the quants compare against each other, and about the quant types themselves, [**here**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [**here**](https://rentry.org/llama-cpp-quants-or-fine-ill-do-it-myself-then-pt-2), respectively.
> 
> **Presets:** <br>
> Some compatible SillyTavern presets can be found [**here (New Poppy-1.0 Presets)**](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets) or [**here (Virt's Roleplay Presets)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
<!-- > Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) for other recommendations and samplers.
-->
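
If you prefer a scripted setup, here is a minimal sketch of the 8GB-VRAM configuration recommended in the note above. The repository id and quant filename are assumptions based on this repo's naming scheme, so adjust them to the file you actually want:

```python
import subprocess
from huggingface_hub import hf_hub_download

# Repo id and filename are assumptions; pick the quant that fits your VRAM.
model_path = hf_hub_download(
    repo_id="Lewdiculous/Poppy_Porpoise-1.0-L3-8B-GGUF-IQ-Imatrix",
    filename="Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf",
)

# Launch KoboldCpp with the settings recommended above for 8GB VRAM cards.
subprocess.run(
    ["python", "koboldcpp.py",
     "--model", model_path,
     "--contextsize", "12288",
     "--flashattention"],
    check=True,
)
```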

> [!TIP]
> **Personal-support:** <br>
> I apologize for any disruption to your experience. <br>
> Currently I'm working on moving to a better internet provider. <br>
> If you **want** and you are **able to**... <br>
> You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
>
> **Author-support:** <br>
> You can support the author [**at their own page**](https://huggingface.co/Nitral-AI).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

## **Original model card information:**

**"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each one to the user's individual preferences.**

# Presets in repo folder:

 * https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets

# If you want to use vision functionality:

 * You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
 
# To use the multimodal capabilities of this model and enable **vision**, you need to load the specified **mmproj** file, which can be found here: [Llava-MMProj file](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).
 
 * You can load the **mmproj** file by using the corresponding section in the interface:

 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
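
KoboldCpp loads the **mmproj** through its interface as shown above. If you script against llama-cpp-python instead, the same model/mmproj pairing looks roughly like the sketch below; the file names are assumptions, so substitute the quant and mmproj files you actually downloaded:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# File names are assumptions; use the quant and mmproj files you downloaded.
chat_handler = Llava15ChatHandler(
    clip_model_path="llama-3-update-2.0-mmproj-model-f16.gguf"
)

llm = Llama(
    model_path="Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for the image embeddings plus the conversation
)

# Send an image alongside a text prompt.
response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```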
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Poppy_Porpoise-0.85-L3-8B).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |69.24|
|AI2 Reasoning Challenge (25-Shot)|63.40|
|HellaSwag (10-Shot)              |82.89|
|MMLU (5-Shot)                    |68.04|
|TruthfulQA (0-shot)              |54.12|
|Winogrande (5-shot)              |77.90|
|GSM8k (5-shot)                   |69.07|