license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- story
- writing
- fiction
- roleplaying
- swearing
- rp
- horror
- general usage
- roleplay
- neo quant
- fantasy
- story telling
- ultra high precision
pipeline_tag: text-generation
Command-R-01-Ultra-NEO-DARK-HORROR V1 and V2 at 35B in IMATRIX.
Two versions of Command-R-01 35B, each built with one of TWO new DARK HORROR NEO Imatrix datasets, bring an INTENSE horror upgrade to the base model.
The DARK HORROR NEO Imatrix datasets do the following:
- Adds a "coating of black paint" to any "Horror" prompt generation.
- Adds a "dark tint" to any other creative prompt.
- Increases the intensity of a scene, story, or roleplay interaction.
- Increases the raw vividness of prose.
- In some cases, increases the model's instruction following (i.e. for story and prose).
- Brings a sense of impending "horror", THEN brings the "horror".
The HORROR datasets were created with Grand Horror 16B, using 90+ different horror prompts for generation. This process produced incredibly intense, visceral and vivid horror stories in an ultra-compact form - perfect for an Imatrix dataset.
These datasets have been tested on Command-R, Llama 3, Llama 3.1, Mistral Nemo and others.
In all cases these datasets move the model into "horror territory", so to speak.
To be clear, this process is not on the same level as a "fine tune"; it is lighter than that.
That being said, if you are looking for intense, visceral and NSFW horror, then the "Grand Horror" model (and its brothers and sisters) is the way to go.
[ https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF ]
(links to other versions on this page)
However, if you want a little more horror, a little more darkness in "Command-R" - which is a great storytelling model - then this may be your model.
Two versions of the DARK HORROR dataset were used for each quant.
These are labeled with "V1" and "V2" accordingly.
I suggest downloading both (of the same quant) and testing them in your use case(s).
The "flavoring" each brings to Command-R will differ - some slightly, some by a lot - depending on the prompt(s) used / use case(s).
This model requires the "Command-R" prompt template and has a max context of 128k (131,072 tokens).
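Below is a minimal usage sketch with llama-cpp-python, assuming a hypothetical V1 quant filename (substitute whatever GGUF you actually download). The chat call relies on the Command-R template stored in the GGUF metadata, and n_ctx can be lowered if the full 128k context does not fit in memory.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Command-R-01-Ultra-NEO-DARK-HORROR-35B-V1-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=131072,      # full 128k context; lower this if you run out of RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# create_chat_completion applies the Command-R chat template embedded in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a slow-burn horror story."}],
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```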
--- Quants uploading... Model card with examples pending updates. ---