sophosympatheia committed 1f5ccfb
Parent: 5ca7145

Update README.md

README.md CHANGED
@@ -4,7 +4,7 @@ language:
 - en
 ---
 <div style="width: auto; margin-left: auto; margin-right: auto">
-<img src="https://imgur.com/UY4Y3p5.jpg" alt="
+<img src="https://imgur.com/UY4Y3p5.jpg" alt="RogueRose" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>

 ### Overview

@@ -13,12 +13,12 @@ This model is a frankenmerge of two custom 70b merges I made in November 2023 th
 my [xwin-stellarbright-erp-70b-v2 model](https://huggingface.co/sophosympatheia/xwin-stellarbright-erp-70b-v2). It features 120 layers and should weigh in at 103b parameters.

 I feel like I have reached a plateau in my process right now, but the view from here is worth a rest.
-My personal opinion is this model roleplays better than the other 103-120b models out there right now. I love it. Give it a try for yourself.
-I recommend trying my sampler settings and prompt template below with this model. This model
+My personal opinion is this model roleplays better than the other 103-120b models out there right now. I love it. Give it a try for yourself. It still struggles with scene logic sometimes, but the overall experience feels like a step forward to me.
+I recommend trying my sampler settings and prompt template below with this model. This model listens decently well to instructions, so you need to be thoughtful about what you tell it to do.

 Along those lines, this model turned out quite uncensored. *You are responsible for whatever you do with it.*

-This model was designed for roleplaying and storytelling and I think it does well at both. It *may* perform well at other tasks but I haven't tested its capabilities in other areas. I welcome feedback and suggestions.
+This model was designed for roleplaying and storytelling and I think it does well at both. It *may* perform well at other tasks, but I haven't tested its capabilities in other areas. I welcome feedback and suggestions.

 ### Sampler Tips

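As a rough sanity check on the "120 layers should weigh in at 103b parameters" figure in the hunk above: assuming standard Llama-2-70b dimensions (hidden size 8192, GQA key/value projection width 1024, FFN width 28672, vocab 32000 — these are assumptions, the card itself does not state them), the arithmetic works out like this:

```python
# Rough parameter estimate for a 120-layer Llama-2-70b frankenmerge.
# Dimensions below are assumed standard Llama-2-70b shapes; norm weights
# are ignored, so this is an approximation.
hidden, kv_width, ffn, vocab = 8192, 1024, 28672, 32000

attn = 2 * hidden * hidden + 2 * hidden * kv_width  # q/o plus GQA k/v projections
mlp = 3 * hidden * ffn                              # gate/up/down projections
per_layer = attn + mlp
embeddings = 2 * vocab * hidden                     # input embedding + LM head

total = embeddings + 120 * per_layer
print(f"{total / 1e9:.1f}B")  # prints 103.2B
```

Under those assumed shapes, 120 layers lands at about 103B parameters, consistent with the card's claim.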
@@ -116,7 +116,7 @@ If you save this as a .json file, you can import it directly into Silly Tavern.

 This repo contains branches for various exllama2 quantizations of the model calibrated on a version of the PIPPA dataset.

-* Main Branch, Full weights
+* Main Branch, Full weights
 * 3.2 bpw -- This will fit comfortably within 48 GB of VRAM at 8192 context.
 * 3.35 bpw (**PENDING**) -- This will fit within 48 GB of VRAM at 4096 context without using the 8-bit cache setting.
 * 3.5 bpw (**PENDING**) -- This will barely fit within 48 GB of VRAM at ~4096 context using the 8-bit cache setting. If you get OOM, try lowering the context size slightly until it fits.
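A back-of-envelope check on the VRAM claims in the quantization list: quantized weight size is roughly parameters × bits-per-weight ÷ 8 bytes. This ignores the KV cache and activation overhead, so actual usage at long context runs several GiB higher:

```python
# Approximate weight-only memory footprint per exllama2 branch.
# Overhead (KV cache, activations) is not included.
params = 103e9  # parameter count from the card

for bpw in (3.2, 3.35, 3.5):
    gib = params * bpw / 8 / 2**30  # bytes -> GiB
    print(f"{bpw} bpw -> ~{gib:.1f} GiB of weights")
```

The gap between the weight footprint and 48 GB is what the context cache has to fit into, which is why the higher-bpw branches need a smaller context or the 8-bit cache setting.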