---
base_model:
- Qwen/Qwen2.5-72B-Instruct
tags:
- conversational
- roleplay
- chat
license: other
license_name: qwen
---
# Qwen 2.5 72b RP Ink

![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/M9KSL64gppBVatmTdoQnG.png)

A roleplay-focused LoRA finetune of Qwen 2.5 72b Instruct. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush).

Yet another model in the Ink series, following in the footsteps of [the 32b one](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) and [the Nemo one](https://huggingface.co/allura-org/MN-12b-RP-Ink).

## Testimonials
> [Compared to the 32b] felt a noticeable increase in coherence

\- ShotMisser64

> Yeah ep2's great!! made me actually wanna write a reply by myself for the first time in a few days

\- Maw

> This is the best RP I've ever had

\- 59smoke

> this makes me want to get another 3090 to run 72b

\- dysfunctional

## Dataset
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.

"this is like washing down an adderall with a bottle of methylated rotgut" - inflatebot

Update: I've already shared the (public datasets in the) data mix publicly, so here it is:
<details>
<img src="https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/JtjUoKtbOfBZfSSKojTcj.png">
</details>

## Quants
[imatrix GGUFs by bartowski](https://huggingface.co/bartowski/Qwen2.5-72b-RP-Ink-GGUF)

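If you'd rather grab a quant programmatically, here's a minimal sketch with `huggingface_hub`; the `Q4_K_M` filename pattern is an assumption about that repo's naming, so adjust it to whichever quant level you want.

```python
# Sketch only: fetch one quant level from the GGUF repo linked above.
# The "*Q4_K_M*" pattern is an assumption about the file names in that repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Qwen2.5-72b-RP-Ink-GGUF",
    allow_patterns=["*Q4_K_M*"],          # pick the quant level you want
    local_dir="Qwen2.5-72b-RP-Ink-GGUF",  # download target directory
)
```
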
## Recommended Settings
Chat template: ChatML

Recommended samplers (not the be-all-end-all, try some on your own!):
- Temp 0.83 / Top P 0.8 / Top A 0.3 / Rep Pen 1.03
- Your samplers can go here! :3

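To make these concrete, here's a minimal sketch of applying the ChatML template and the sampler values above with `transformers`. The repo id is an assumption (mirroring the 32b card's naming), and Top A is skipped because it isn't a stock `generate()` sampler; use a frontend that supports it if you want the full set.

```python
# Minimal sketch, assuming the full-weight repo id below (not confirmed here).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Qwen2.5-72b-RP-Ink"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "system", "content": "You are my roleplay partner."},
    {"role": "user", "content": "The tavern door creaks open..."},
]
# Qwen 2.5's tokenizer ships a ChatML chat template
# (<|im_start|> / <|im_end|>), so apply_chat_template handles the formatting.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.83,         # Temp 0.83
    top_p=0.8,                # Top P 0.8
    repetition_penalty=1.03,  # Rep Pen 1.03
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
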
## Hyperparams
### General
- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 16
### LoRA
- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush))

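For reference, here's roughly how those values map onto a `peft` + `transformers` config. This is a sketch, not the actual training setup: the target modules, precision, and batch-size/accumulation split are assumptions, since the card only lists the effective batch size.

```python
# Rough reconstruction of the listed hyperparams, not the actual training script.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                  # Rank = 16
    lora_alpha=32,                         # Alpha = 32
    lora_dropout=0.25,                     # Dropout = 0.25
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

training_args = TrainingArguments(
    output_dir="qwen2.5-72b-rp-ink-lora",  # hypothetical output path
    num_train_epochs=2,                    # Epochs = 2
    learning_rate=6e-5,                    # LR = 6e-5
    lr_scheduler_type="cosine",            # Cosine schedule
    optim="paged_adamw_8bit",              # Paged AdamW 8bit
    per_device_train_batch_size=2,         # assumed split of the
    gradient_accumulation_steps=8,         #   effective batch size of 16
    bf16=True,                             # assumed precision
)
```
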
## Credits
Humongous thanks to the people who created and curated the original data.

Big thanks to all Allura members, for testing and emotional support ilya /platonic
especially to inflatebot who made the model card's image :3

Another big thanks to all the members of the ArliAI and BeaverAI Discord servers for testing! All of the people featured in the testimonials are from there :3