---
license: other
license_name: microsoft-research-license
tags:
- merge
- not-for-all-audiences
---
# DarkForest 20B v2.0 iMat GGUF
"The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him."- Liu Cixin
Continuation of an ongoing initiative to bring the latest and greatest models to consumer hardware through SOTA techniques that reduce VRAM overhead.
After testing the new importance matrix quants for 11b and 8x7b models and being able to run them on machines without a dedicated GPU, we are now exploring the middle ground - 20b.
❗❗Need a different quantization/model? Please open a community post and I'll get back to you - thanks ❗❗
Newer quants (IQ3_S, IQ4_NL, etc) are confirmed working in Koboldcpp as of 1.59.1 - if you run into any issues kindly let me know.
(Credits to [TeeZee](https://huggingface.co/TeeZee/) for the original model and [ikawrakow](https://github.com/ikawrakow) for the stellar work on IQ quants)
---
# DarkForest 20B v2.0
![image/png](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/DarkForest-20B-v2.0.jpg)
## Model Details
- To create this model, a two-step procedure was used. First, a new 20B model was created from [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
and [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3); details of the merge are in [darkforest_v2_step1.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step1.yml)
- then [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B)
and [TeeZee/BigMaid-20B-v1.0](https://huggingface.co/TeeZee/BigMaid-20B-v1.0) were used to produce the final model; the merge config is in [darkforest_v2_step2.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step2.yml)
- The resulting model has approximately 20 billion parameters.
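For readers unfamiliar with this kind of merge, a minimal sketch of what a mergekit-style interleaved (passthrough) merge config can look like is shown below. This is an illustration only - the layer ranges here are assumed values, not the ones actually used; the real settings are in the linked darkforest_v2_step1.yml and darkforest_v2_step2.yml files.

```yaml
# Hypothetical sketch of a mergekit passthrough merge that interleaves
# layer slices from two 13B models into a larger (~20B) stack.
# Layer ranges are illustrative, NOT the actual DarkForest configuration.
slices:
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [0, 16]
  - sources:
      - model: KoboldAI/LLaMA2-13B-Erebus-v3
        layer_range: [8, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [16, 32]
  - sources:
      - model: KoboldAI/LLaMA2-13B-Erebus-v3
        layer_range: [24, 40]
merge_method: passthrough
dtype: float16
```

Stacking overlapping layer ranges this way is how two 13B (40-layer) models are typically combined into a deeper ~20B model; the second step then merges the resulting stack with the other 20B models listed above.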
**Warning: This model can produce NSFW content!**
## Results
- main difference from v1.0 - the model has a much better sense of humor.
- produces SFW and NSFW content without issues, switches context seamlessly.
- good at following instructions.
- good at tracking multiple characters in one scene.
- very creative; the scenarios produced are mature and complicated, and the model doesn't shy away from writing about PTSD, mental issues or complicated relationships.
- NSFW output is more creative and surprising than typical LimaRP output.
- definitely for mature audiences, not only because of the vivid NSFW content but also because of the overall maturity of the stories it produces.
- This is NOT Harry Potter level storytelling.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel: