QuantFactory/L3-Umbral-Mind-RP-v3.0-8B-GGUF

This is a quantized version of Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B, created using llama.cpp.

Original Model Card

Image by ろ47

Merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

The goal of this merge was to make an RP model better suited to role-plays with heavy themes such as, but not limited to:

  • Mental illness
  • Self-harm
  • Trauma
  • Suicide

I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to failspy/Llama-3-8B-Instruct-MopeyMule this problem has been lessened considerably.

If you're an enjoyer of savior/reverse savior type role-plays like myself, then this model is for you.

Usage Info

This model is meant to be used with asterisks/quotes RP formats; any other format is likely to cause issues.
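For illustration, a turn in the asterisks/quotes format typically wraps actions in asterisks and dialogue in quotes (this snippet is a made-up example, not from the model):

```text
*She glances at the door, fingers trembling.* "You came back... I didn't think you would."
```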


Models Merged

The following models were included in the final merge:

  • aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
  • ResplendentAI/Nymph_8B
  • ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
  • Nitral-AI/Hathor_Tahsin-L3-8B-v0.85

Secret Sauce

The following YAML configurations were used to produce this model:

Umbral-Mind-1-pt.1

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
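In the dare_ties stages, density: 0.5 randomly keeps about half of each model's delta from the base and rescales the survivors so the expected contribution is unchanged, while the five-element weight lists spread different weights across layer groups. A minimal sketch of the drop-and-rescale step (a conceptual illustration, not mergekit's actual implementation):

```python
import random

def dare_drop(delta, density, seed=0):
    """Randomly keep a `density` fraction of delta parameters and
    rescale survivors by 1/density so the expected sum is preserved."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

delta = [0.2, -0.1, 0.4, 0.05, -0.3, 0.15]
pruned = dare_drop(delta, density=0.5)
# Every survivor is doubled (1 / 0.5); the rest are zeroed.

# With normalize: false the per-model weights are applied as-is;
# each weight vector above sums to 0.33 + 4 * 0.0825 = 0.66.
total_weight = sum([0.33] + [0.0825] * 4)
```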

Umbral-Mind-1-pt.2

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
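The breadcrumbs_ties stages keep a mid-magnitude band of each delta: density is the total fraction of parameters retained, and gamma is the fraction of largest-magnitude parameters discarded as outliers, so density: 0.9 with gamma: 0.01 drops the top 1% and the smallest 9% by magnitude. A rough sketch of that masking (illustrative only, not mergekit's code):

```python
def breadcrumb_mask(delta, density=0.9, gamma=0.01):
    """Zero out the top `gamma` fraction by |magnitude| and enough of
    the smallest values that only `density` of parameters survive."""
    n = len(delta)
    n_top = int(n * gamma)    # largest-magnitude outliers to drop
    n_keep = int(n * density) # total parameters to retain
    order = sorted(range(n), key=lambda i: abs(delta[i]), reverse=True)
    kept = set(order[n_top:n_top + n_keep])
    return [d if i in kept else 0.0 for i, d in enumerate(delta)]

delta = [float(i) for i in range(1, 101)]  # magnitudes 1..100
masked = breadcrumb_mask(delta)
# The single largest value (100.0) and the nine smallest are zeroed.
```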

Umbral-Mind-1

models:
  - model: Casual-Autopsy/Umbral-Mind-1-pt.1
  - model: Casual-Autopsy/Umbral-Mind-1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
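The slerp stages then blend the two halves along a spherical arc in weight space, with the t lists giving a different interpolation factor per layer group (oscillating between 0.3 and 0.7 for the attention and MLP filters). A minimal sketch of spherical linear interpolation between two weight vectors (illustrative, not mergekit's implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between vectors v0 and v1 at fraction t."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    omega = math.acos(cos_omega)
    if abs(omega) < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

a, b = [1.0, 0.0], [0.0, 1.0]
mid = slerp(0.5, a, b)  # stays on the unit circle, unlike plain averaging
```

Unlike a straight average (which would shrink the vector to length ~0.707 here), slerp keeps the interpolated weights at the same norm as the endpoints.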

Umbral-Mind-2-pt.1

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: v000000/L3-8B-Poppy-Sunspice
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Umbral-Mind-2-pt.2

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Umbral-Mind-2

models:
  - model: Casual-Autopsy/Umbral-Mind-2-pt.1
  - model: Casual-Autopsy/Umbral-Mind-2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Umbral-Mind-3-pt.1

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Umbral-Mind-3-pt.2

models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Umbral-Mind-3

models:
  - model: Casual-Autopsy/Umbral-Mind-3-pt.1
  - model: Casual-Autopsy/Umbral-Mind-3-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-3-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Umbral-Mind-4

models:
  - model: Casual-Autopsy/Umbral-Mind-1
  - model: Casual-Autopsy/Umbral-Mind-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

Umbral-Mind-5

models:
  - model: Casual-Autopsy/Umbral-Mind-4
  - model: Casual-Autopsy/Umbral-Mind-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-4
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: bfloat16

Umbral-Mind-6

models:
  - model: mergekit-community/Umbral-Mind-5
  - model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: mergekit-community/Umbral-Mind-5
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
  embed_slerp: true
dtype: bfloat16

Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B

models:
  - model: Casual-Autopsy/Umbral-Mind-6
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
  - model: ResplendentAI/Nymph_8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-6
parameters:
  normalize: false
dtype: bfloat16
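The final task_arithmetic stage adds scaled task vectors (each model's delta from the base) onto Umbral-Mind-6; the small alternating positive and negative weights nudge different layer groups toward or away from each contributor. A toy sketch of the arithmetic (illustrative only, not mergekit's code):

```python
def task_arithmetic(base, models, weights):
    """merged = base + sum_i weights[i] * (models[i] - base)."""
    merged = list(base)
    for model, w in zip(models, weights):
        for j, (m, b) in enumerate(zip(model, base)):
            merged[j] += w * (m - b)
    return merged

base = [1.0, 1.0]
models = [[2.0, 0.0], [0.0, 2.0]]
merged = task_arithmetic(base, models, [0.02, -0.01])
# merged[0] = 1 + 0.02*(2-1) + (-0.01)*(0-1) = 1.03
# merged[1] = 1 + 0.02*(0-1) + (-0.01)*(2-1) = 0.97
```

A negative weight subtracts a model's influence on that layer group rather than adding it, which is why the final config mixes weights like 0.02 and -0.01.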
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
