This did not go in the direction I wanted it to. Why is it that my best models are always the ones that happen out of sheer coincidence or stupidity? To start, this model doesn't have Crepus Horizon in it despite its name. As of writing this, v0.1 Crepus Horizon is way too broken to merge with, and I've taken that repo down until an update is out. This model actually works better without it, so that's a plus. Still keeping the name though, sounds cool lmao.

Tangent aside, Anteros was part elaborate experiment, part 'throw shit at the wall and see what sticks'. I was aiming for a more narrative-heavy RP model with some moist and spice thrown in, and ended up with exactly that plus solid instruct following. Took a week and a half, but it's here. Any feedback would be much appreciated.

Quants

OG Q8 GGUF by me.

GGUFs by BackyardAI

Details & Recommended Settings

(Still testing; subject to change)

Thrives at story-heavy and narrative RP, yet still excels at the basics. Leans vaguely sensual and smooth-tongued in its descriptions while keeping dialogue and actions human. Fully uncensored and a tad dramatic, so have fun with those eRP scenes.

Clings onto whatever instructs you give it and fucking sprints with them (thanks, Formax). This model's generation quality depends heavily on the character card: a shit card will give you shit outputs, and vice versa. Be careful with your choice of words and the examples you give it.

Has a certain tendency to speak for {user}, but that's easily negated with a few instructs; something like 'Never speak, act, or decide for {user}.' in the system prompt or card should do the trick.

Rec. Settings:

Template: Model Default
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.1
Repeat Penalty Tokens: 256
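
If your frontend takes sampler presets as YAML (text-generation-webui style, for example), the above maps onto the usual keys. A minimal sketch; the filename and exact key names are my assumption, so check your frontend's docs:

# hypothetical preset file, e.g. presets/Anteros.yaml
temperature: 1.3
min_p: 0.1
repetition_penalty: 1.1
repetition_penalty_range: 256  # how many recent tokens the penalty applies over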

Models Merged & Merge Theory

The following models were included in the merge:

ResplendentAI/Nymph_8B
TheDrummer/Llama-3SOME-8B-v2
nothingiisreal/L3-8B-Instruct-Abliterated-DWP
nothingiisreal/L3-8B-Celeste-V1.2
Sao10K/L3-8B-Niitama-v1
Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
ArliAI/ArliAI-Llama-3-8B-Formax-v1.0

It looks like a lot, I swear it's not that bad.

The general idea was a more story-like RP merge, so I picked models geared toward either basic RP or story writing. Originally it was going to be only six models split into two groups to min-max their attributes, then someone on the BackyardAI Discord asked if anyone could fit Formax into an RP model and I said 'bet'. Small problem: Formax is a 4k context model while the rest are 8k. As expected, some frankenmerges straight up failed to quant. But I managed to finagle it in.

The first merge (anteros.b) is the more story-writing-heavy base of Anteros, and also where the majority of the moist and spice lives. Given that the models in it are all NSFW and one was trained off of r/DirtyWritingPrompts, who's surprised? Since Instruct DWP is a story-writing model, the front of the stack had to be purely RP models to balance it out, and the same again near the end. But I wanted to keep that 'human' writing style, so I still capped the very end with Instruct DWP. What came out was a merge that rambled a ton, but was verbose and narrative-driven.

The second merge (anteros.c) is the RP-forward merge, with Formax at the start. It keeps that 'human' style by capping the end with Celeste v1.2 while stuffing the mids with Niitama and Tahsin, two very competent models in their own right. I took a page out of @matchaaaaa's recent models with the splice method, where you take slices of the same layer range out of two models and merge them together. I took it a step further, splicing wherever there was major (4+) layer overlap, which streamlined the merge overall. The resulting merge is concise, almost snippy, yet surprisingly coherent, though it doesn't pick up nuance well.
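
For the unfamiliar, the splice pattern in mergekit terms looks roughly like this. The model names here are placeholders; the real instance is the celeste14-20.sl / niitama14-20.sl / celeniit14-20.sl chain in the config below:

slices:
- sources:
  - layer_range: [14, 20]
    model: placeholder/Model-A  # hypothetical
merge_method: passthrough
dtype: bfloat16
name: a.sl
---
slices:
- sources:
  - layer_range: [14, 20]  # the same layer range, cut from the second model
    model: placeholder/Model-B  # hypothetical
merge_method: passthrough
dtype: bfloat16
name: b.sl
---
models:
  - model: a.sl
    parameters:
      weight: [1, 0]  # Model-A fades out across the slice...
  - model: b.sl
    parameters:
      weight: [0, 1]  # ...as Model-B fades in
merge_method: dare_linear
base_model: a.sl
dtype: bfloat16
name: spliced.sl  # the spliced block then slots back into the main stack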

As for how I managed to get Formax in: I did some testing and found you can get away with merging roughly the first quarter of its layers without much error. It's the last few layers that cause trouble; putting Formax's end layers at the end of the merge makes the model unquantable. I don't know how much any of this affects the final model's effective context length, though; that still needs more testing.

The final merge was a DELLA merge of anteros.b and anteros.c with multiple gradients. DELLA is a very new merge method as of this model's release; you can read more about it here. I wanted to keep anteros.b's narrative affinity and nuance while also keeping anteros.c's coherence, so I ran a high weight on anteros.c with low density at the start, eased the weight down toward the end, then brought it back up at the very end. Density averaged around ~0.4 and epsilon around 0.05.

Config
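
# Stage 1 of 6: anteros.b, the story-writing-heavy base. RP models (Nymph, 3SOME)
# hold the front of the stack; Instruct DWP caps the end to keep the 'human' style.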

models:
slices:
- sources:
  - layer_range: [0, 4]
    model: ResplendentAI/Nymph_8B
- sources:
  - layer_range: [3, 9]
    model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      scale:
      - filter: q_proj
        value: 0.9
      - filter: down_proj
        value: 0.9
- sources:
  - layer_range: [5, 12]
    model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
    parameters:
      scale:
      - filter: up_proj
        value: 0.9
- sources:
  - layer_range: [10, 18]
    model: ResplendentAI/Nymph_8B
- sources:
  - layer_range: [11, 22]
    model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
- sources:
  - layer_range: [17, 19]
    model: TheDrummer/Llama-3SOME-8B-v2
- sources:
  - layer_range: [19, 24]
    model: ResplendentAI/Nymph_8B
- sources:
  - layer_range: [22, 26]
    model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
- sources:
  - layer_range: [27, 29]
    model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      scale:
      - filter: v_proj
        value: 0.9
- sources:
  - layer_range: [29, 30]
    model: ResplendentAI/Nymph_8B
- sources:
  - layer_range: [26, 32]
    model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
name: anteros.b
---
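# Stage 2 of 6: cut layers 14-20 out of Celeste, one half of the splice.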
models:
slices:
- sources:
  - layer_range: [14, 20]
    model: nothingiisreal/L3-8B-Celeste-V1.2
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
name: celeste14-20.sl
---
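# Stage 3 of 6: the matching 14-20 slice of Niitama, the other half of the splice.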
models:
slices:
- sources:
  - layer_range: [14, 20]
    model: Sao10K/L3-8B-Niitama-v1
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
name: niitama14-20.sl
---
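# Stage 4 of 6: the splice itself. The two identical slices are blended with
# dare_linear on opposing weight gradients, Celeste fading out as Niitama fades in.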
models: 
  - model: celeste14-20.sl
    parameters:
      weight: [1, 0.75, 0.625, 0.5, 0.375, 0.25, 0]
  - model: niitama14-20.sl
    parameters:
      weight: [0, 0.25, 0.375, 0.5, 0.625, 0.75, 1]
merge_method: dare_linear
base_model: celeste14-20.sl
dtype: bfloat16
name: celeniit14-20.sl
---
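# Stage 5 of 6: anteros.c, the RP-forward half. Formax sits near the front,
# the Celeste/Niitama splice and Tahsin fill the mids, and Celeste caps the end.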
models:
slices:
- sources:
  - layer_range: [0, 4]
    model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- sources:
  - layer_range: [1, 5]
    model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
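# Formax contributes only its early layers (here and in the slice below); its end
# layers made the merge unquantable, so they stay out entirely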
- sources:
  - layer_range: [4, 8]
    model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- sources:
  - layer_range: [5, 9]
    model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- sources:
  - layer_range: [8, 10]
    model: Sao10K/L3-8B-Niitama-v1
- sources:
  - layer_range: [6, 14]
    model: nothingiisreal/L3-8B-Celeste-V1.2
- sources:
  - layer_range: [0, 6]
    model: celeniit14-20.sl
- sources:
  - layer_range: [20, 23]
    model: Sao10K/L3-8B-Niitama-v1
- sources:
  - layer_range: [22, 26]
    model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- sources:
  - layer_range: [22, 28]
    model: nothingiisreal/L3-8B-Celeste-V1.2
- sources:
  - layer_range: [25, 27]
    model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- sources:
  - layer_range: [28, 30]
    model: Sao10K/L3-8B-Niitama-v1
- sources:
  - layer_range: [25, 32]
    model: nothingiisreal/L3-8B-Celeste-V1.2
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
name: anteros.c # Himerus Basis.C, available on its own.
---
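# Stage 6 of 6: the final DELLA merge of anteros.b and anteros.c. The weight
# gradient keeps anteros.c strong at the start, eases off mid-stack, and climbs
# back up at the end; density averages ~0.4 with epsilon around 0.05.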
models:
  - model: anteros.c
    parameters:
      weight: [0.8, 0.32, 0.6]
      density: [0.23, 0.55, 0.37]
      epsilon: [0.05, 0.06, 0.04]
      lambda: 0.07
  - model: anteros.b
base_model: anteros.b
parameters:
  normalize: false
  int8_mask: true
merge_method: della
dtype: bfloat16