---
tags:
- not-for-all-audiences
---

# daybreak-kunoichi-2dpo-7b - EXL2 8bpw

This is an 8bpw EXL2 quant of [crestf411/daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b).

This quant was made using exllamav2-0.0.21 with the default calibration dataset.

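For reference, an exllamav2 conversion along these lines produces an 8bpw quant like this one (a sketch only; paths are placeholders, not the exact command used):

```sh
python convert.py \
    -i /path/to/daybreak-kunoichi-2dpo-7b \
    -o /path/to/working_dir \
    -cf /path/to/daybreak-kunoichi-2dpo-7b-exl2-8bpw \
    -b 8.0
```
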
The usable context limit seems to be 8k. The webui shows 16k by default, but output becomes incoherent past 8k; use alpha_value in the webui to scale to 16k.

I briefly tested this quant in a few random RPs (including one past 8k with alpha_value set in the webui) and it seems to work fine.

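As a rough sketch of loading this quant outside the webui with alpha (NTK) scaling to reach 16k, assuming exllamav2's Python API (the model path and alpha value below are illustrative, not tested settings):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

config = ExLlamaV2Config()
config.model_dir = "/path/to/daybreak-kunoichi-2dpo-7b-exl2-8bpw"  # placeholder path
config.prepare()

# Scale RoPE via NTK alpha to stretch the native 8k context toward 16k;
# values around 2.5-2.7 are commonly used for a 2x stretch.
config.max_seq_len = 16384
config.scale_alpha_value = 2.6

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=config.max_seq_len)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
```
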
## Prompt Templates

This model seems to use the Alpaca prompt format.

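A typical Alpaca-style template looks like this (assumed from the usual Alpaca convention, not confirmed for this model):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
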
### Original readme below

---

Experimental model that applies a further round of DPO training on top of [Kunoichi-DPO-v2-7b](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B), i.e. double-DPO.

Not suitable for any audience.