---
base_model:
- TheBloke/Llama-2-13B-fp16
- Masterjp123/SnowyRP-FinalV1-L2-13B
- Masterjp123/Snowyrp-V2B-P1
- sauce1337/BerrySauce-L2-13b
library_name: transformers
tags:
- mergekit
- merge
---
Model
This is the BF16 (unquantized) version of SnowyRP V2 Beta, the first public beta model in the SnowyRP series!
NOTE: this model gave me issues when I tried to quantize it, so if you want a quantized version, get TheBloke to do it; they do this stuff better than me anyway.
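Since this release ships unquantized BF16 weights, the minimal sketch below loads it directly with Hugging Face `transformers`. The repo id and prompt are placeholders, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual Hugging Face id of this SnowyRP V2 Beta release.
MODEL_ID = "Masterjp123/SnowyRP-V2-Beta"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # the card ships unquantized BF16 weights
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Write a short in-character greeting."  # placeholder prompt; the card does not specify a prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```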
Merge Details
I originally made V2 Beta just as a test, but it turned out to be good, so I am quantizing it.
These models CAN and WILL produce X-rated or harmful content; they are heavily uncensored in an attempt to avoid limiting or degrading the model.
This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile: great for general assistant work, RP and ERP, RPG-style RPs, and much more.
Model Use:
This model is very good... WITH THE RIGHT SETTINGS. I personally use Mirostat combined with dynamic temperature, plus the epsilon and eta cutoffs.
Optimal Settings (so far)
- Mirostat Mode: 2
  - tau: 2.95
  - eta: 0.05
- Dynamic Temp
  - min: 0.25
  - max: 1.8
- Cutoffs
  - epsilon: 3
  - eta: 3
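For reference, here is a sketch of those values as a preset dictionary. The key names mirror common frontend conventions (text-generation-webui / SillyTavern style) and are assumptions, not an official preset file. Mirostat and dynamic temperature are backend-specific samplers (llama.cpp, text-generation-webui, KoboldCpp, etc.) and are not part of stock `transformers`; the cutoffs do exist there as `epsilon_cutoff`/`eta_cutoff`, where frontends usually express the value 3 in units of 1e-4 (i.e. 3e-4).

```python
# Recommended sampler values from this card, expressed as a preset dict.
# Key names follow common frontend conventions and are assumptions.
snowyrp_preset = {
    "mirostat_mode": 2,
    "mirostat_tau": 2.95,
    "mirostat_eta": 0.05,
    "dynatemp_low": 0.25,   # dynamic temperature minimum
    "dynatemp_high": 1.8,   # dynamic temperature maximum
    "epsilon_cutoff": 3,    # frontends typically treat these as units of 1e-4
    "eta_cutoff": 3,
}

# Only the cutoffs are available in stock `transformers`; Mirostat and dynamic
# temperature need a backend such as llama.cpp or text-generation-webui.
# output = model.generate(
#     **inputs,
#     do_sample=True,
#     epsilon_cutoff=3e-4,
#     eta_cutoff=3e-4,
#     max_new_tokens=256,
# )
```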
Merge Method
This model was merged using the TIES merge method, with TheBloke/Llama-2-13B-fp16 as the base.
Models Merged
The following models were included in the merge:
- Masterjp123/SnowyRP-FinalV1-L2-13B
- posicube/Llama2-chat-AYB-13B
- Sao10K/Stheno-1.8-L2-13B
- ValiantLabs/ShiningValiantXS
- sauce1337/BerrySauce-L2-13b
Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/Snowyrp-V2B-P1
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/SnowyRP-FinalV1-L2-13B
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: sauce1337/BerrySauce-L2-13b
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
The following YAML configuration was used to produce Masterjp123/Snowyrp-V2B-P1:
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: Sao10K/Stheno-1.8-L2-13B
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: ValiantLabs/ShiningValiantXS
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: posicube/Llama2-chat-AYB-13B
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
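To reproduce a merge like the ones above, mergekit's CLI (`mergekit-yaml config.yaml ./output-dir`) is the usual route. The Python sketch below follows the pattern shown in mergekit's README and assumes one of the configs above has been saved as `config.yaml`; exact option names may differ between mergekit versions.

```python
# Sketch of re-running the merge with mergekit's Python entry points,
# following the pattern in mergekit's README. Assumes the YAML above is
# saved as config.yaml; option names may vary between mergekit versions.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./SnowyRP-V2B-merge",  # output directory (placeholder name)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```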