---
license: apache-2.0
base_model:
- lodrick-the-lafted/Olethros-8B
- lodrick-the-lafted/Limon-8B
- lodrick-the-lafted/Rummage-8B
- cgato/L3-TheSpice-8b-v0.8.3
- unsloth/llama-3-8b-Instruct
- Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
library_name: transformers
tags:
- mergekit
- merge
---

<img src="https://huggingface.co/lodrick-the-lafted/Kudzu-8B/resolve/main/kudzu.png" alt="Kudzu">

# Kudzu-8B

Fresh out of the mergekit-evolve kitchen, this model is a merge of:
* [lodrick-the-lafted/Olethros-8B](https://huggingface.co/lodrick-the-lafted/Olethros-8B)
* [lodrick-the-lafted/Limon-8B](https://huggingface.co/lodrick-the-lafted/Limon-8B)
* [lodrick-the-lafted/Rummage-8B](https://huggingface.co/lodrick-the-lafted/Rummage-8B)
* [Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total](https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)


WMDP was used as the scoring method for mergekit-evolve. In my limited testing, the model avoids the usual Llama-3 "Ahaha!" interjections while retaining a good portion of the intelligence. Several ablated models are in the mix, so don't be surprised if it gives you what you ask for.
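
For reference, an evolved merge like this is ultimately described by an ordinary mergekit config. The sketch below shows the general shape only; the merge method, densities, and weights are placeholders, not the actual values found by mergekit-evolve (which are not reproduced here).

```yaml
# Hypothetical mergekit config sketch -- method and numbers are illustrative,
# not the evolved parameters actually used for Kudzu-8B.
merge_method: dare_ties
base_model: unsloth/llama-3-8b-Instruct
models:
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.5
      weight: 0.2
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: 0.2
  - model: lodrick-the-lafted/Rummage-8B
    parameters:
      density: 0.5
      weight: 0.2
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      density: 0.5
      weight: 0.2
  - model: Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
    parameters:
      density: 0.5
      weight: 0.2
dtype: bfloat16
```

mergekit-evolve searches over parameters like these automatically, scoring each candidate merge (here with WMDP) and keeping the best-performing configuration.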