Updated!
Version (v2) files added, with imatrix data generated from the FP16 and conversions made directly from the BF16.
This process is more disk- and compute-intensive, so let's hope we get GPU inference support for BF16 models in llama.cpp.
The goal is to avoid any losses in the model conversion, which has been a much-discussed topic around Llama-3 and GGUF lately.
If you are able to test them and notice any issues, let me know in the discussions.
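For reference, the imatrix quantization workflow mentioned above can be sketched roughly as follows. This is a minimal sketch, not the exact commands used for this repo: the binary names (`imatrix`, `quantize`) and all file names are assumptions and vary across llama.cpp versions (newer builds prefix the tools with `llama-`).

```python
# Sketch of the llama.cpp imatrix quantization workflow (assumed tool/file names).
import shutil
import subprocess

MODEL_F16 = "Llama-3-Lumimaid-8B-v0.1-OAS-F16.gguf"  # hypothetical filename
CALIB_DATA = "calibration.txt"                        # any representative text
IMATRIX_OUT = "imatrix.dat"
QUANT_OUT = "Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M.gguf"

# Step 1: collect importance-matrix statistics from the full-precision model.
imatrix_cmd = ["./imatrix", "-m", MODEL_F16, "-f", CALIB_DATA, "-o", IMATRIX_OUT]

# Step 2: quantize using those statistics to better preserve important weights.
quantize_cmd = ["./quantize", "--imatrix", IMATRIX_OUT, MODEL_F16, QUANT_OUT, "Q4_K_M"]

for cmd in (imatrix_cmd, quantize_cmd):
    if shutil.which(cmd[0]):  # only run if the binary actually exists
        subprocess.run(cmd, check=True)
    else:
        print("would run:", " ".join(cmd))
```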

Relevant:
These quants were made after the fixes from llama.cpp/pull/6920 were merged.
Use KoboldCpp version 1.64 or higher and make sure you're up to date.

I apologize for disrupting your experience.
My upload speeds have been cooked and unstable lately.
If you want and you are able to...
You can support my various endeavors here (Ko-fi).

GGUF-IQ-Imatrix quants for NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS.

Author:
"This model received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request."

Compatible SillyTavern presets here (simple) or here (Virt's Roleplay Presets - recommended).
Use the latest version of KoboldCpp. Use the provided presets for testing.
Feedback and support for the authors are always welcome.
If there are any issues or questions let me know.

For 8GB VRAM GPUs, I recommend the Q4_K_M-imat (4.89 BPW) quant at context sizes up to 12288.
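As a rough sanity check on that recommendation, here is the back-of-the-envelope arithmetic. The Llama-3-8B architecture numbers (32 layers, 8 KV heads, head dim 128) are public; everything else is simple estimation, not a measured figure, and ignores runtime overheads like the compute buffer.

```python
# Estimate VRAM for the Q4_K_M-imat quant at 12288 context (rough sketch).
GIB = 1024 ** 3

params = 8.03e9            # parameter count of the 8B model
bpw = 4.89                 # bits per weight for Q4_K_M-imat
weights_gib = params * bpw / 8 / GIB

# FP16 KV cache: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes, per token
n_layers, n_kv_heads, head_dim = 32, 8, 128
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2
ctx = 12288
kv_gib = ctx * kv_bytes_per_token / GIB

total_gib = weights_gib + kv_gib
print(f"weights ~{weights_gib:.2f} GiB + KV cache ~{kv_gib:.2f} GiB "
      f"= ~{total_gib:.2f} GiB")  # leaves headroom on an 8 GB card
```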


Original model information:

Lumimaid 0.1

This model uses the Llama3 prompting format

Llama3 trained on our RP datasets; we tried to strike a balance between ERP and RP, not too horny, but just enough.

We also added some non-RP datasets, making the model less dumb overall. It should work out to roughly a 40%/60% ratio of non-RP to RP+ERP data.

This model includes the new Luminae dataset from Ikari.

This model has received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request.

If you try this model, please give us some feedback, either in the Community tab on HF or on our Discord server.

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of Lumimaid-8B-v0.1-OAS.

Switch: 8B - 70B - 70B-alt - 8B-OAS - 70B-OAS

Training data used:

Models used (only for 8B)

  • Initial LumiMaid 8B Finetune
  • Undi95/Llama-3-Unholy-8B-e4
  • Undi95/Llama-3-LewdPlay-8B

Prompt template: Llama3

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
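The template above can be assembled with a small helper like the following. This is a minimal sketch of the Llama3 chat format shown in the card; the function name and sample strings are my own.

```python
# Assemble a single-turn Llama3 prompt from the template shown above.
def llama3_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The model's reply is generated after the final assistant header,
# terminated by its own <|eot_id|> token.
prompt = llama3_prompt("You are a helpful roleplay assistant.", "Hello!")
print(prompt)
```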

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek

GGUF · Model size: 8.03B params · Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.
