---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Exl2 version of [Undi95/OpenDolphinMaid-4x7b](https://huggingface.co/Undi95/OpenDolphinMaid-4x7b)
## Branch
- `7bh8`: 7 bpw, 8-bit head
Using The Pile (deduplicated) [0007.parquet](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/resolve/refs%2Fconvert%2Fparquet/default/train/0007.parquet) as the calibration dataset.

Quantization settings:
```sh
python convert.py -i models/Undi95_OpenDolphinMaid-4x7b -o OpenDolphinMaid-4x7b-temp -cf OpenDolphinMaid-4x7b-7bpw-h8-exl2 -c 0007.parquet -l 8192 -b 7 -hb 8 -ml 8192
```
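For reference, a minimal sketch of loading this quant with the exllamav2 Python API (following the library's basic inference example; the local folder name and sampling values below are placeholders, adjust them to your setup):

```python
# Minimal sketch, assuming exllamav2 is installed and the 7bh8 branch has been
# downloaded into the local folder below (placeholder path).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "OpenDolphinMaid-4x7b-7bpw-h8-exl2"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache while layers load
model.load_autosplit(cache)               # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # example sampling values, not a recommendation
settings.top_p = 0.95
```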
### Below this line is the original README
A merge of OpenHermes and Dolphin with 2x Noromaid DPO, trying to add a little more brains to the model while staying smaller than an 8x7b.
It seems to work well.
<!-- description start -->
## Description
This repo contains fp16 files of OpenDolphinMaid-4x7b.
<!-- description end -->
<!-- description start -->
## Models and LoRA used
- NeverSleep/Noromaid-7B-0.4-DPO x 2
- teknium/OpenHermes-2.5-Mistral-7B
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
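As a quick sketch of how the template is filled in before generation (continuing the loading example above; `sysprompt` and `prompt` are just example values):

```python
# Fill the ChatML template; the trailing "<|im_start|>assistant\n" cues the reply.
sysprompt = "You are OpenDolphinMaid, a helpful assistant."  # example system prompt
prompt = "Write a short greeting."                           # example user message

chatml = (
    f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

print(generator.generate_simple(chatml, settings, 200))  # up to 200 new tokens
```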
If you want to support me, you can [here](https://ko-fi.com/undiai).