GGUF files of Llama-3-Peach-Instruct-4x8B-MoE.

Llama-3-Peach-Instruct-4x8B-MoE

This is an experimental Mixture-of-Experts (MoE) model created with Mergekit from the four source models listed in the yaml file below.

Evaluation: Q4_K_M:

  • GSM8K (5-shot): 0.6983 ± 0.0126
  • GSM8K (8-shot, CoT): 0.6740 ± 0.0129
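For context, the ± values look like the binomial standard error over GSM8K's 1319-problem test set, as tools such as lm-evaluation-harness report it. A quick sanity check (assumption: the error term is sqrt(p·(1−p)/n)):

```python
import math

def gsm8k_stderr(p, n=1319):
    """Binomial standard error of an accuracy p over n test problems."""
    return math.sqrt(p * (1 - p) / n)

print(round(gsm8k_stderr(0.6983), 4))  # 0.0126, matching the 5-shot line
print(round(gsm8k_stderr(0.6740), 4))  # 0.0129, matching the 8-shot line
```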

Mergekit yaml file:

base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
    - "think"
    - "roleplay"
    - "versatile"
    - "helpful"
    - "factual"
    - "integrated"
    - "adaptive"
    - "comprehensive"
    - "balanced"
    negative_prompts:
    - "specialized"
    - "narrow"
    - "focused"
    - "limited"
    - "specific"
  - source_model: Llama-3-8B-Instruct-Coder
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
    - "programming"
    - "javascript"
    - "algorithm"
    - "factual"
    negative_prompts:
    - "sorry"
    - "cannot"
    - "concise"
    - "imaginative"
    - "creative"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
    - "AI"
    - "instructive"
    - "chat"
    - "assistant"
    - "clear"
    - "directive"
    - "helpful"
    - "informative"
  - source_model: Hermes-2-Theta-Llama-3-8B
    positive_prompts:
    - "chat"
    - "assistant"
    - "analytical"
    - "accurate"
    - "code"
    - "logical"
    - "knowledgeable"
    - "precise"
    - "calculate"
    - "compute"
    - "solve"
    - "work"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"
    - "tell me"
    - "factual"
    negative_prompts:
    - "abstract"
    - "artistic"
    - "emotional"
    - "mistake"
    - "inaccurate"
gate_mode: hidden
dtype: float16
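The gate_mode: hidden setting tells Mergekit to initialize each layer's router from hidden-state representations of the positive/negative prompts above. At inference the merged model behaves like a Mixtral-style MoE: a per-layer router scores all four experts for each token and routes it through the top two. A minimal, illustrative sketch of that routing step (top2_gate is a hypothetical helper; real routers operate on batched tensors):

```python
import math

def top2_gate(logits):
    """Pick the two highest-scoring experts and softmax their logits.

    Sketch of Mixtral-style top-2 routing: the returned (index, weight)
    pairs say which expert MLPs run for this token and how their
    outputs are mixed. Weights sum to 1.
    """
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# 4 experts, as in this 4x8B merge; the logits come from the learned router
weights = top2_gate([1.2, -0.3, 2.0, 0.1])
```

Here expert 2 and expert 0 are selected, with the higher-scoring expert getting the larger mixing weight.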

The Mergekit yaml file draws some inspiration from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.

GGUF metadata:

  • Model size: 24.9B params
  • Architecture: llama
  • Quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
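The 24.9B total is consistent with four Llama-3-8B experts that share attention and embedding weights, with only the feed-forward (MLP) blocks duplicated per expert. A rough back-of-the-envelope check (assumes the standard Llama-3-8B dimensions of 32 layers, hidden size 4096, and MLP size 14336; the base parameter count is the commonly reported figure):

```python
# Rough check of the 24.9B figure, assuming mergekit-moe duplicates only
# the MLP (feed-forward) blocks per expert while attention and embeddings
# stay shared across experts.
LAYERS, HIDDEN, FFN, EXPERTS = 32, 4096, 14336, 4
base = 8_030_261_248             # commonly reported Meta-Llama-3-8B size
mlp = 3 * HIDDEN * FFN * LAYERS  # gate/up/down projections, all layers
router = LAYERS * HIDDEN * EXPERTS  # one small gating matrix per layer
total = base + (EXPERTS - 1) * mlp + router
print(f"{total / 1e9:.1f}B")     # ~24.9B, matching the reported size
```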