Lewdiculous committed
Commit: f39e952
Parent(s): 29f5ca0
Update README.md
README.md CHANGED
@@ -6,26 +6,23 @@ tags:
   - sillytavern
 ---
 
+> [!IMPORTANT]
+> **Update:** <br>
+> Version (**v2**) files added! With imatrix data generated from the FP16 and conversions directly from the BF16. <br>
+> Hopefully avoiding any losses in the model conversion, as has been the recently discussed topic on Llama-3 and GGUF lately. <br>
+> If you are able to test them and notice any issues let me know in the discussions.
+
 > [!TIP]
->
+> I apologize for disrupting your experience.
 > My upload speeds have been cooked and unstable lately. <br>
-> Realistically I'd need to move to get a better provider. <br>
 > If you **want** and you are able to... <br>
 > You can [**support my various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
-> I apologize for disrupting your experience.
 
 GGUF-IQ-Imatrix quants for [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS).
 
 **Author:** <br>
 "This model received the Orthogonal Activation Steering treatment, **meaning it will rarely refuse any request.**"
 
-> [!IMPORTANT]
-> **Update:** <br>
-> Version (**v2**) files added! With imatrix data generated from the FP16 and conversions directly from the BF16. <br>
-> Hopefully avoiding any losses in the model conversion, as has been the recently discussed topic on Llama-3 and GGUF lately. <br>
-> If you are able to test them and notice any issues let me know in the discussions.
-
->
 > **Relevant:** <br>
 > These quants have been done after the fixes from [**llama.cpp/pull/6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
 > Use **KoboldCpp** version **1.64** or higher, make sure you're up-to-date.