Initial release
- .gitattributes +2 -0
- README.md +46 -0
- kuno-kunoichi-v1-DPO-v2-SLERP-7B.Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -4,6 +4,7 @@
 *.bz2 filter=lfs diff=lfs merge=lfs -text
 *.ckpt filter=lfs diff=lfs merge=lfs -text
 *.ftz filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
 *.gz filter=lfs diff=lfs merge=lfs -text
 *.h5 filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
@@ -33,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.GGUF filter=lfs diff=lfs merge=lfs -text
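The two added patterns route `*.gguf` and `*.GGUF` files through Git LFS, so the multi-gigabyte quantized weights added in this commit are stored as small pointer files in the Git history. A plain clone without LFS therefore yields only the pointer, not the weights; a minimal sketch of fetching the file directly with `huggingface_hub` follows (the repo id is a placeholder, not taken from this commit).

```python
# Minimal sketch: fetch the LFS-tracked GGUF without a full git clone.
# "your-namespace/..." is a placeholder repo id, not taken from this commit.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="your-namespace/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF",
    filename="kuno-kunoichi-v1-DPO-v2-SLERP-7B.Q8_0.gguf",
)
print(gguf_path)  # local cache path to the ~7.7 GB Q8_0 file
```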
README.md
CHANGED
@@ -1,3 +1,49 @@
 ---
+base_model:
+- SanjiWatsuki/Kunoichi-7B
+- SanjiWatsuki/Kunoichi-DPO-v2-7B
+library_name: transformers
+tags:
+- mergekit
+- merge
 license: cc-by-nc-4.0
 ---
+# kuno-kunoichi-v1-DPO-v2-SLERP-7B
+
+kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+I'm hoping that the result is more robust against errors, and against degradation when merged further, owing to "denseness": the two models likely implement comparable reasoning at least somewhat differently.
+
+I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format prompts.
+
+[GGUF-IQ-Imatrix quants helpfully provided by Lewdiculous.](https://huggingface.co/Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix)
+
+## Merge Details
+### Merge Method
+
+This model was merged using the SLERP merge method.
+
+### Models Merged
+
+The following models were included in the merge:
+* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
+* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
+```yaml
+slices:
+  - sources:
+      - model: SanjiWatsuki/Kunoichi-7B
+        layer_range: [0,32]
+      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
+        layer_range: [0,32]
+merge_method: slerp
+base_model: SanjiWatsuki/Kunoichi-7B
+parameters:
+  t:
+    - value: 0.5
+dtype: float16
+
+```
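The YAML above drives a SLERP merge with a single interpolation factor t = 0.5 across all 32 layers; the merge is normally reproduced by pointing mergekit's `mergekit-yaml` command at a saved copy of that config. As a rough, simplified sketch of what spherical linear interpolation does to one weight tensor (not mergekit's actual implementation, which also handles per-layer t schedules, tokenizers, and dtype casting):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    # Angle between the two weight vectors, clipped for numerical safety.
    theta = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel tensors: plain linear interpolation is the stable limit.
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    mixed = (np.sin((1.0 - t) * theta) * a_flat + np.sin(t * theta) * b_flat) / sin_theta
    return mixed.reshape(a.shape)

# t = 0.5, as in the config above, lands halfway along the arc between
# the Kunoichi-7B and Kunoichi-DPO-v2-7B versions of each tensor.
example = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))
```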
kuno-kunoichi-v1-DPO-v2-SLERP-7B.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a39bb5da986ec3c5566fa43fe00323a408ebbcb3c4e82db4acd92dfea00e048c
+size 7695857376
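The added Q8_0 GGUF (about 7.7 GB per the LFS pointer above) runs directly in llama.cpp-compatible runtimes. Below is a minimal inference sketch using the sampling settings mentioned in the README (ChatML prompting, temperature 1.1, minP 0.03), assuming a llama-cpp-python build recent enough to expose `min_p` and the `chatml` chat format.

```python
# Minimal inference sketch; assumes llama-cpp-python with min_p support.
from llama_cpp import Llama

llm = Llama(
    model_path="kuno-kunoichi-v1-DPO-v2-SLERP-7B.Q8_0.gguf",
    n_ctx=4096,            # context window; adjust to taste
    chat_format="chatml",  # the README's testing used ChatML prompts
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    temperature=1.1,
    min_p=0.03,
)
print(reply["choices"][0]["message"]["content"])
```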