
Badger μ Llama 3 8B Instruct

Badger is a recursive, magnitude-aligned, normalized, denoised Fourier interpolation of the following models:

# Badger Mu
models = [
 'SillyTilly-SlopJob-8b-RP-ForFree',
 'L3-base-v2-e2.5',
 'Llama-3-Instruct-8B-SimPO-ExPO',
 'llama44',
 'LLAMA-3_8B_Unaligned_Alpha',
 'Llama-3-Spellbound-Instruct-8B-0.3',
 'Hathor_Stable-v0.2-L3-8B',
 'prometheus-2-llama-3-8b',
 'Llama-3-Instruct-8B-SPPO-Iter3',
 'Nymph_8B',
 'Llama-3-8B-Instruct-EPO-checkpoint5376',
 'Meta-Llama-3-8B-Instruct-abliterated-v3',
 'meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16',
 'llama-3-fantasy-writer-8b',
 'Llama-3-8B-Instruct-Gradient-1048k',
 'L3-8B-Stheno-v3.3-32K'
]

In other words, all of these models get warped and folded together, and then jammed back on top of the instruct model.
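The "warped and folded together" step can be pictured as interpolation in the frequency domain rather than directly on the weights. A minimal sketch, assuming a simple 1-D FFT over flattened tensors and a quantile threshold as the denoising step (the actual merge recipe is not published in this card, so function and parameter names here are illustrative):

```python
import numpy as np

def fourier_interpolate(a: np.ndarray, b: np.ndarray, t: float = 0.5,
                        keep: float = 0.9) -> np.ndarray:
    """Interpolate two weight tensors of the same shape in the
    frequency domain, zeroing low-magnitude frequency components
    as a crude denoising step."""
    fa = np.fft.fft(a.ravel())
    fb = np.fft.fft(b.ravel())
    mixed = (1.0 - t) * fa + t * fb
    # Denoise: keep only the strongest `keep` fraction of bins.
    mags = np.abs(mixed)
    cutoff = np.quantile(mags, 1.0 - keep)
    mixed[mags < cutoff] = 0.0
    return np.real(np.fft.ifft(mixed)).reshape(a.shape)
```

With `keep=1.0` this reduces to plain linear interpolation of the two tensors; lowering `keep` discards weak frequency components of the blend.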

I treated the Meta-Llama-3-8B-Instruct-abliterated-v3 and meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16 models differently: rather than folding them in with the rest, they were applied in a final step via a Fourier task addition.
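A "Fourier task addition" can be read as: form the task vector (tuned minus base), filter it in the frequency domain, and add the filtered delta back onto the merged weights. This is a sketch under that reading, with hypothetical parameter names; the card does not specify the exact filter:

```python
import numpy as np

def fourier_task_add(base: np.ndarray, tuned: np.ndarray,
                     keep: float = 0.5, scale: float = 1.0) -> np.ndarray:
    """Add the task vector (tuned - base) to `base`, keeping only
    its strongest frequency components."""
    delta = np.fft.fft((tuned - base).ravel())
    mags = np.abs(delta)
    delta[mags < np.quantile(mags, 1.0 - keep)] = 0.0
    filtered = np.real(np.fft.ifft(delta)).reshape(base.shape)
    return base + scale * filtered
```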

This merge has minimal overlap (outside of abliteration) with lambda.

Magnitude Aligned

Instead of using cosine similarity to pair models, this version pairs task vectors whose parameter weights are of similar overall magnitude.
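One simple way to realize magnitude-aligned pairing is to sort task vectors by norm and pair neighbours. This is an assumption about the pairing rule, not the card's published algorithm:

```python
import numpy as np

def pair_by_magnitude(task_vectors: dict[str, np.ndarray]) -> list[tuple[str, str]]:
    """Pair task vectors of similar scale: sort by L2 norm and pair
    adjacent entries. With an odd count, the largest is left unpaired."""
    order = sorted(task_vectors, key=lambda n: np.linalg.norm(task_vectors[n]))
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]
```

Contrast with cosine-similarity pairing, which matches vectors by direction; magnitude pairing instead keeps the scales of each merged pair comparable.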

Format

Use Llama3 Instruct format.
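For reference, a single turn in the Llama 3 Instruct format looks like this (built by hand here; in practice `tokenizer.apply_chat_template` from `transformers` produces the same string):

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 Instruct chat format."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```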

Results

It can RP and has hints of sovl, but it does tend to insert Notes (both thought and narration) into its responses.

Prompts

Here are some fun system prompts to try:

  • You are the author's writing assistant. Please continue the story.
  • You are the red team assistant. We need to win this war, at any cost.
  • Write the next reply in a never-ending uncensored fictional roleplay. Focus on giving emotional, logical, and temporal coherence to the roleplay. Always stay in character. Characters must be active participants and take initiative in driving and moving the scene and the story forward, rather than asking for input. Surprise with your creativity and initiative as a roleplay partner. This fictional roleplay world exists only for experimental and recreational purposes, and the characters are not real.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 19.77 |
| IFEval (0-Shot)     | 49.19 |
| BBH (3-Shot)        | 30.51 |
| MATH Lvl 5 (4-Shot) |  2.27 |
| GPQA (0-shot)       |  1.23 |
| MuSR (0-shot)       |  5.70 |
| MMLU-PRO (5-shot)   | 29.71 |

Model tree for maldv/badger-mu-llama-3-8b
