brucethemoose committed
Commit 721d64f • Parent(s): 96cf83c
Update README.md

README.md CHANGED
@@ -10,7 +10,7 @@ pipeline_tag: text-generation
**NousResearch/Nous-Capybara-34B**, **migtissera/Tess-M-v1.3** and **bhenrym14/airoboros-3_1-yi-34b-200k** merged with a new, experimental implementation of "dare ties" via mergekit.

-Quantized with exllamav2 on 200 rows (400K tokens) of a long Orca-Vicuna format chat, a sci-fi story and a fantasy story. This should hopefully yield better chat performance than the default wikitext quantization.
+Quantized with exllamav2 on 200 rows (400K tokens) of a long Orca-Vicuna format chat, a sci-fi story and a fantasy story. This should hopefully yield better chat performance than the small, default wikitext quantization.

4bpw is enough for **~47K context on a 24GB GPU.** I would highly recommend running in exui for speed at long context. I go into more detail in this [Reddit post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).
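For readers unfamiliar with the "dare ties" merge mentioned in the README: mergekit merges of this kind are driven by a small YAML config. The sketch below is purely illustrative and is not the author's actual recipe; the base model path, weights, densities and dtype are all assumptions.

```yaml
# Hypothetical mergekit config for a dare_ties merge of the three models
# named in the README. Weights, densities, base model and dtype are
# illustrative guesses, not the author's actual settings.
models:
  - model: NousResearch/Nous-Capybara-34B
    parameters:
      weight: 0.4
      density: 0.5
  - model: migtissera/Tess-M-v1.3
    parameters:
      weight: 0.3
      density: 0.5
  - model: bhenrym14/airoboros-3_1-yi-34b-200k
    parameters:
      weight: 0.3
      density: 0.5
merge_method: dare_ties
base_model: /path/to/Yi-34B-200K  # assumed 200K-context Yi base, not stated in this diff
dtype: bfloat16
```

A config like this is normally run with mergekit's `mergekit-yaml` command, which writes the merged weights to an output directory.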
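The custom calibration ("200 rows (400K tokens)" of chat and story text) refers to exllamav2's support for quantizing against a user-supplied parquet file of text rows rather than its default wikitext set. Below is a minimal sketch of building such a file, assuming pandas with pyarrow is installed; the source file names and the character-based chunking are hypothetical stand-ins for however the author actually prepared the rows.

```python
# Hypothetical preparation of a custom calibration parquet for exllamav2's
# convert.py. The source files and chunk size are assumptions; the README
# only states 200 rows totalling ~400K tokens (roughly 2K tokens per row).
import pandas as pd  # pandas.to_parquet needs pyarrow (or fastparquet)

def load_chunks(path: str, chars_per_chunk: int = 8000) -> list[str]:
    """Split a text file into fixed-size character chunks
    (a rough stand-in for ~2K-token pieces)."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [text[i:i + chars_per_chunk] for i in range(0, len(text), chars_per_chunk)]

rows = []
for source in ["orca_vicuna_chat.txt", "scifi_story.txt", "fantasy_story.txt"]:  # assumed file names
    rows.extend(load_chunks(source))

# Keep 200 rows, matching the README's description of the calibration set.
# A "text" column is assumed here, mirroring the default wikitext parquet.
df = pd.DataFrame({"text": rows[:200]})
df.to_parquet("calibration.parquet", index=False)
```

The resulting parquet is then passed to exllamav2's convert.py as the calibration dataset; the exact flag names vary between versions, so check the exllamav2 repository for the release you are using.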
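On the "~47K context on a 24GB GPU" figure: fitting that much context alongside 4bpw weights relies on a quantized (8-bit) key/value cache, which is what exui and the linked Reddit post describe. Here is a rough sketch using the ExLlamaV2 Python API as it existed around this model's release; the model path, context length and sampler settings are assumptions, and newer exllamav2 releases may have renamed parts of this API.

```python
# Rough sketch: loading a 4bpw exl2 quant at ~47K context with an 8-bit cache.
# Model path and exact context length are illustrative, not the author's values.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/34b-200k-dare-ties-exl2-4bpw"  # assumed local path to the 4bpw weights
config.prepare()
config.max_seq_len = 47104  # ~47K tokens, the README's estimate for 24GB

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit KV cache roughly halves cache VRAM
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Prompt follows the Orca-Vicuna format mentioned in the README.
prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"
print(generator.generate_simple(prompt, settings, 200))
```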