---
base_model:
- Walmart-the-bag/MysticFusion-13B
- Undi95/Amethyst-13B
- Sao10K/Stheno-Inverted-1.2-L2-13B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
---
A merge of my favorite Llama-2 models.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/If_SPXL3cXI9roQ2bYLFZ.png)
# Thanks to mradermacher for the quants
* [GGUF](https://huggingface.co/mradermacher/MysticGem-L2-13B-GGUF)
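A minimal sketch of running one of those GGUF quants with llama-cpp-python. The `Q4_K_M` filename pattern is an assumption; check the quant repo's file list and adjust it to a file that actually exists there (requires `llama-cpp-python` and `huggingface_hub`):
```python
# Sketch: load a GGUF quant from the Hub and generate text.
# The filename glob is an assumption; pick a quant that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/MysticGem-L2-13B-GGUF",
    filename="*Q4_K_M.gguf",  # glob matched against the repo's files
    n_ctx=4096,               # Llama-2 context window
)

out = llm("Once upon a time,", max_tokens=128)
print(out["choices"][0]["text"])
```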
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
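For intuition, a linear merge is a per-tensor weighted average of the checkpoints (the "model soup" of the linked paper). The sketch below is an illustration under that definition, not mergekit's actual implementation; mergekit normalizes the weights by default, and the 0.5/0.3/0.2 weights in the config below already sum to 1:
```python
# Illustrative linear merge: weighted average of matching tensors
# across the source state dicts, cast back to float16 as in the config.
import torch

def linear_merge(state_dicts, weights):
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(
            (w / total) * sd[key].float()
            for sd, w in zip(state_dicts, weights)
        ).to(torch.float16)
    return merged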
### Models Merged
The following models were included in the merge:
* [Walmart-the-bag/MysticFusion-13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B)
* [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
* [Sao10K/Stheno-Inverted-1.2-L2-13B](https://huggingface.co/Sao10K/Stheno-Inverted-1.2-L2-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Undi95/Amethyst-13B
    parameters:
      weight: 0.3
  - model: Walmart-the-bag/MysticFusion-13B
    parameters:
      weight: 0.5
  - model: Sao10K/Stheno-Inverted-1.2-L2-13B
    parameters:
      weight: 0.2
merge_method: linear
dtype: float16
```
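The merge can be reproduced by saving the YAML above to a file and running mergekit, roughly `mergekit-yaml config.yaml ./output-model`. To use the merged model with transformers, something like the sketch below works; the repo id is a hypothetical placeholder, so substitute this model's actual Hub path (`device_map="auto"` additionally requires `accelerate`):
```python
# Sketch: load the merged fp16 model and generate.
# "your-username/MysticGem-L2-13B" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/MysticGem-L2-13B"  # replace with the real path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```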