---
license: llama3
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
- not-for-all-audiences
- mergekit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

([GGUFs](https://huggingface.co/mradermacher/L3.1-8B-Dark-Planet-Slush-i1-GGUF))

This is based on [v1.1](https://huggingface.co/crestf411/L3.1-8B-Slush-v1.1) and includes a merge with [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B).

**Parameter suggestions:**

I did all my testing with temperature 1, min-p 0.1, and DRY multiplier 0.8, enabling XTC at higher contexts.
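
For reference, here is a minimal sketch of sending those settings to a local llama.cpp server (a common backend for the GGUF quants linked above). The DRY/XTC parameter names follow recent llama.cpp builds; the XTC values and the prompt are illustrative assumptions, not tested settings:

```python
import requests

# Assumes a running llama.cpp server, e.g.:
#   llama-server -m L3.1-8B-Dark-Planet-Slush.Q6_K.gguf --port 8080
payload = {
    "prompt": "Write the opening paragraph of a slow-burn space-horror story.",
    "n_predict": 256,
    "temperature": 1.0,      # settings from the testing notes above
    "min_p": 0.1,
    "dry_multiplier": 0.8,
    "xtc_probability": 0.5,  # XTC only enabled at higher contexts;
    "xtc_threshold": 0.1,    # these two values are illustrative guesses
}
resp = requests.post("http://localhost:8080/completion", json=payload)
print(resp.json()["content"])
```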
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with meta-llama/Llama-3.1-8B as the base.
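
To make the method concrete, here is a toy numpy sketch of the TIES procedure from the paper (trim each task vector, elect a per-parameter sign, disjoint-merge the agreeing deltas) for a single flattened tensor. This is an illustrative reading of the algorithm, not mergekit's actual implementation, and all names here are made up:

```python
import numpy as np

def ties_merge(base, tuned, weights, density):
    """Toy TIES merge of several fine-tunes of `base` (all flat float arrays)."""
    # 1. Task vectors: what each fine-tune changed relative to the base.
    deltas = [t - base for t in tuned]

    # 2. Trim: keep only the top-`density` fraction of each task vector
    #    by magnitude, zeroing the rest.
    k = max(1, int(density * base.size))
    trimmed = []
    for d in deltas:
        cutoff = np.partition(np.abs(d), -k)[-k]
        trimmed.append(np.where(np.abs(d) >= cutoff, d, 0.0))

    # 3. Elect signs: per-parameter sign of the weighted sum of trimmed deltas.
    weighted = [w * d for w, d in zip(weights, trimmed)]
    elected = np.sign(np.sum(weighted, axis=0))

    # 4. Disjoint merge: weighted-average only the deltas whose sign agrees
    #    with the elected sign, then apply the merged delta to the base.
    num = np.zeros_like(base)
    den = np.zeros_like(base)
    for w, d in zip(weights, weighted):
        mask = np.sign(d) == elected
        num += np.where(mask, d, 0.0)
        den += np.where(mask & (d != 0), w, 0.0)
    return base + num / np.maximum(den, 1e-12)

# Tiny demo with random tensors standing in for model weights.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
tuned = [base + rng.normal(scale=0.01, size=1000) for _ in range(3)]
merged = ties_merge(base, tuned, weights=[1.0, 1.0, 1.3], density=1.0)
```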
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: stage1-on-instruct
    parameters:
      weight: 1
      density: 1
  - model: stage2-on-stage1
    parameters:
      weight: 1
      density: 1
  - model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
    parameters:
      weight: 1
      density: 1
  - model: meta-llama/Llama-3.1-8B-Instruct
    parameters:
      weight: 1.3
      density: 1
merge_method: ties
base_model: meta-llama/Llama-3.1-8B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
```
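
To re-run the merge locally, a sketch assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yml`; note that `stage1-on-instruct` and `stage2-on-stage1` appear to be local intermediate checkpoints from the Slush pipeline, so you would need your own equivalents:

```python
import subprocess

# mergekit-yaml is mergekit's standard CLI entry point; the output
# directory name is arbitrary, and --cuda (GPU acceleration) is optional.
subprocess.run(
    ["mergekit-yaml", "config.yml", "./merged-model", "--cuda"],
    check=True,
)
```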