---
base_model:
- Sao10K/L3-8B-Stheno-v3.1
- Nitral-Archive/Poppy_Porpoise-Biomix
- Hastagaras/HALU-8B-LLAMA3-BRSLURP
- crestf411/L3-8B-sunfall-abliterated-v0.1
- cgato/L3-TheSpice-8b-v0.8.3
- Nitral-AI/Poppy_Porpoise-0.72-L3-8B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
---
Why choose?

The full model name is "Llama-3-8B-Poppy-Sunspice".

An RP model, and a pretty creative one.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/YsE0lCbC-5EjYS3d7-Iey.png)
# Thanks to mradermacher for the quants
* [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Sunspice-GGUF)
* [GGUF imatrix](https://huggingface.co/mradermacher/L3-8B-Poppy-Sunspice-i1-GGUF)
# Thanks to MarsupialAI for the quants
* [EXL2](https://huggingface.co/MarsupialAI/L3-8B-Poppy-Sunspice_EXL2)
# Update/Notice:
This model has a tendency toward endless generations. If that happens, a bit of repetition penalty in your sampler settings reins it in; see the sketch below.
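A minimal sketch of applying such a penalty with the standard `transformers` generate API. The repo id is a placeholder and the penalty value of 1.1 is illustrative, not a tuned recommendation:

```python
# Minimal sketch: loading the merged model with standard transformers APIs
# and applying a repetition penalty to curb runaway generations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/L3-8B-Poppy-Sunspice"  # placeholder: use this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    repetition_penalty=1.1,  # illustrative value; raise slightly if generations run on
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```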
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
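For intuition, a linear merge is just a weight-normalized average of each shared parameter tensor across the source models. A toy sketch of the idea (an illustration only, not mergekit's actual implementation):

```python
# Toy illustration of a linear merge: every shared tensor becomes a
# weight-normalized average across the source models' state dicts.
import torch

def linear_merge(state_dicts: list[dict], weights: list[float]) -> dict:
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            (w / total) * sd[name].to(torch.float32)
            for sd, w in zip(state_dicts, weights)
        ).to(torch.float16)  # matches the `dtype: float16` setting below
    return merged
```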
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
* [Nitral-Archive/Poppy_Porpoise-Biomix](https://huggingface.co/Nitral-Archive/Poppy_Porpoise-Biomix)
* [Hastagaras/HALU-8B-LLAMA3-BRSLURP](https://huggingface.co/Hastagaras/HALU-8B-LLAMA3-BRSLURP)
* [crestf411/L3-8B-sunfall-abliterated-v0.1](https://huggingface.co/crestf411/L3-8B-sunfall-abliterated-v0.1)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [Nitral-AI/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: crestf411/L3-8B-sunfall-abliterated-v0.1
parameters:
weight: 0.1
- model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
parameters:
weight: 0.3
- model: Nitral-Archive/Poppy_Porpoise-Biomix
parameters:
weight: 0.1
- model: Sao10K/L3-8B-Stheno-v3.1
parameters:
weight: 0.2
- model: Hastagaras/HALU-8B-LLAMA3-BRSLURP
parameters:
weight: 0.1
- model: cgato/L3-TheSpice-8b-v0.8.3
parameters:
weight: 0.2
merge_method: linear
dtype: float16
```
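To reproduce the merge, save the YAML above as `config.yml` and either run the `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output-model-directory`) or drive it from Python. A sketch using mergekit's documented Python entry point; the output path is a placeholder:

```python
# Sketch: running the merge config above through mergekit's Python API.
# Assumes `pip install mergekit` and the YAML saved as config.yml.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3-8B-Poppy-Sunspice",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True),
)
```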
# Prompt Template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
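This is the standard Llama-3 Instruct format. A minimal sketch that assembles it by plain string formatting, mirroring the template line for line (the argument names correspond to the template's placeholders):

```python
# Minimal sketch: building the prompt exactly as laid out in the template
# above, with one newline after each header line (mirroring the card).
def format_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

print(format_prompt("You are a creative roleplay partner.", "Hello!"))
```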