---
base_model:
  - crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
  - Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - TheDrummer/Llama-3SOME-8B-v2
tags:
  - merge
  - mergekit
  - lazymergekit
  - crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
  - Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - TheDrummer/Llama-3SOME-8B-v2
  - not-for-all-audiences
---

GGUF Quants by mradermacher

iMatrix GGUF Quants by mradermacher

My first merge, and I honestly thought it would go worse, but I'm pleased with the results. Originally, this model was created to be merged with another model as part of a personal project, now known as Blackfall Summanus. I wanted a 15B eRP model with the 'humanness' of Jamet-Blackroot and the chaos of Sunfall Stheno, tamed a little by some 'lewd', so to speak. It worked, mostly. By all means, this is an experimental merge; I'm sure there are problems I haven't found, so please bear with me.

In limited testing, this model can't handle high temps at all. Anything higher than 1.3 will send it off the rails. It seems to play better with top P = 0.8 and top K = 25 on the Llama 3 template, but I'm still fooling with the settings. It has a tendency to ramble and can get quite dramatic, though nothing too insane yet. It's inherently a little buggy and incoherent, so proceed with caution.
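For convenience, the settings above can be collected into a dict of `generate()`-style kwargs. This is only a sketch of my current working values; the temperature of 1.0 is an assumption on my part (anything under 1.3 should be safe), so tune to taste.

```python
# Suggested sampler settings from limited testing; keys match the kwargs
# accepted by transformers' generate() / text-generation pipeline.
sampler_settings = {
    "do_sample": True,
    "temperature": 1.0,  # assumption: a safe default; stay below 1.3
    "top_p": 0.8,
    "top_k": 25,
}
```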

# L3-Blackened-Sunfall-15B

L3-Blackened-Sunfall-15B is a merge of the following models using LazyMergekit:

* crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
* Hastagaras/Jamet-8B-L3-MK.V-Blackroot
* TheDrummer/Llama-3SOME-8B-v2

## 🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 20]
    model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
- sources:
  - layer_range: [6, 24]
    model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 26]
    model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: TheDrummer/Llama-3SOME-8B-v2
```
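A passthrough merge simply stacks the listed layer ranges end to end, which is where the ~15B size comes from despite all three sources being 8B models. A quick sanity check of the resulting depth, using the layer ranges copied from the config above:

```python
# Layer ranges taken from the passthrough config above.
slices = [(0, 20), (6, 24), (8, 26), (24, 32)]

# Passthrough concatenates the slices, so total depth is the sum of range sizes.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 64, double the 32 decoder layers of a single Llama-3-8B
```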

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "kromeurus/L3-Blackened-Sunfall-15B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```