---
license: llama3
tags:
  - moe
language:
  - en
---

*(image placeholder. Maybe I'll change the waifu picture later.)*

MAY BE BROKEN! TRY 1.5 or 1.0 INSTEAD AND CHECK IF THEY WORK BETTER

GGUF quants

An experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.

The model has ~25B parameters in total, of which ~13B are active per token.
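
For a rough sanity check on those numbers, here is a back-of-envelope sketch. It assumes Llama-3-8B's published dimensions and that, as in a typical mergekit-moe merge, only the MLP weights are duplicated per expert while attention, embeddings, and norms stay shared:

```python
# Back-of-envelope parameter count for a 4x8B Llama 3 MoE.
# Assumed Llama-3-8B dims: hidden 4096, intermediate 14336,
# 32 layers, vocab 128256, GQA kv dim 1024. Rough sketch, not exact.
hidden, inter, layers, vocab, kv_dim = 4096, 14336, 32, 128256, 1024

embed = 2 * vocab * hidden                           # input embeddings + lm_head
attn = layers * hidden * (2 * hidden + 2 * kv_dim)   # q, k, v, o projections
mlp = layers * 3 * hidden * inter                    # gate, up, down projections

shared = embed + attn          # kept once, shared by all experts
total = shared + 4 * mlp       # 4 expert copies of the MLP
active = shared + 2 * mlp      # experts_per_token: 2

print(f"total  ~ {total / 1e9:.1f}B")   # ~24.9B -> "~25B total"
print(f"active ~ {active / 1e9:.1f}B")  # ~13.7B -> "~13B active"
```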

Please leave feedback on whether it's more stable than the previous version.

## Llama 3 ChaoticSoliloquy-v2-4x8B test

```yaml
base_model: L3_ChaosMaid_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-0.72-L3-8B
  - source_model: L3_ChaosMaid_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B-v2
  - source_model: Sao10K_L3-Solana-8B-v1
```
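
In this mergekit-moe config, `gate_mode: random` initializes the router weights randomly instead of deriving them from prompts, and `experts_per_token: 2` activates two experts per token. A minimal sketch of loading the finished merge with transformers; the repo id is an assumption based on this card's title, and it assumes the tokenizer ships the Llama 3 chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xxx777xxxASD/L3-ChaoticSoliloquy-v2-4x8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` above
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```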

## Models used

- ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
- jeiku/Chaos_RP_l3_8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1
- openlynn/Llama-3-Soliloquy-8B-v2
- Sao10K/L3-Solana-8B-v1

## Difference

### L3 ChaosMaid-8B

```yaml
models:
  - model: jeiku_Chaos_RP_l3_8B
    # No parameters necessary for base model
  - model: NeverSleep_Llama-3-Lumimaid-8B-v0.1
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: jeiku_Chaos_RP_l3_8B
parameters:
  int8_mask: true
dtype: bfloat16
```
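
For reference, `dare_ties` sparsifies each model's delta from the base (keeping roughly `density` of the entries, rescaled to compensate), resolves sign conflicts TIES-style across donors, and adds the weighted result back onto the base. A toy numpy sketch of the idea on a single tensor, not mergekit's actual implementation:

```python
import numpy as np

def dare_ties_merge(base, finetuned, density=0.5, weight=0.5, rng=None):
    """Toy single-donor DARE-TIES: drop-and-rescale the task vector."""
    rng = rng or np.random.default_rng(0)
    delta = finetuned - base                      # task vector
    # DARE: randomly drop (1 - density) of the delta, rescale survivors
    mask = rng.random(delta.shape) < density
    delta = np.where(mask, delta / density, 0.0)
    # TIES sign election is trivial with one donor; with several donors,
    # only deltas agreeing with the majority sign would be kept.
    return base + weight * delta

base = np.zeros(8)
tuned = np.arange(8, dtype=float)
print(dare_ties_merge(base, tuned))
```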

## Vision

llama3_mmproj


## Prompt format: Llama 3
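
For reference, the standard Llama 3 instruct template:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```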