---
license: other
inference: false
---
# Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B GPTQ

These files are GPTQ format quantised 4-bit models of Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B. They are the result of quantising to 4-bit using GPTQ-for-LLaMa.
## Repositories available
- 4-bit GPTQ models for GPU inference
- 4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference
- Unquantised fp16 model in pytorch format, for GPU inference and for further conversions
## Prompt template

```
You are a helpful assistant
### USER: prompt goes here
### ASSISTANT:
```

To allow all output, add `### Certainly!` to the end of the prompt.
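As a sketch, the template above can be assembled programmatically. The helper name and its `uncensor` argument are illustrative, not part of the model card; the exact placement of the `### Certainly!` suffix is an assumption based on "add it to the end of the prompt":

```python
def build_prompt(user_message: str, uncensor: bool = False) -> str:
    """Assemble a prompt in this model's expected template.

    The system line and USER/ASSISTANT markers follow the template
    shown above; the optional '### Certainly!' suffix is the trick
    the card describes for allowing all output.
    """
    prompt = (
        "You are a helpful assistant\n"
        f"### USER: {user_message}\n"
        "### ASSISTANT:"
    )
    if uncensor:
        # Appending '### Certainly!' nudges the model to answer freely.
        prompt += " ### Certainly!"
    return prompt


print(build_prompt("Write a short story about a dragon."))
```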
## How to easily download and use this model in text-generation-webui
- Open the text-generation-webui UI as normal.
- Click the **Model** tab.
- Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ`.
- Click **Download**.
- Wait until it says it's finished downloading.
- Click the **Refresh** icon next to **Model** in the top left.
- In the **Model** drop-down: choose the model you just downloaded, `WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ`.
- If you see an error in the bottom right, ignore it - it's temporary.
- Fill out the **GPTQ parameters** on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`
- Click **Save settings for this model** in the top right.
- Click **Reload the Model** in the top right.
- Once it says it's loaded, click the **Text Generation** tab and enter a prompt!
## Provided files
### Compatible file - WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors

This will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility. It was created without group_size to minimise VRAM usage, and with `--act-order` to improve inference quality.

`WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors`
- Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
- Works with AutoGPTQ.
- Works with text-generation-webui one-click-installers
- Parameters: Groupsize = None. Act-order.
- Command used to create the GPTQ:

```
python llama.py HF_repo c4 --wbits 4 --act-order --true-sequential --save_safetensors WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors
```
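The flags in that command map onto the quantisation parameters that downstream loaders expect. The dict below is only a sketch of that mapping (the layout and key names are mine, modelled on common GPTQ loader conventions, where a group size of None is often expressed as -1):

```python
# Quantisation settings implied by the GPTQ-for-LLaMa command above.
# The dict itself is illustrative; the values come from the flags shown.
gptq_config = {
    "bits": 4,                # --wbits 4
    "group_size": -1,         # no --groupsize flag => Groupsize = None
    "desc_act": True,         # --act-order (activation-order quantisation)
    "true_sequential": True,  # --true-sequential
    "dataset": "c4",          # calibration dataset positional argument
}
print(gptq_config)
```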
## Discord
For further support, and discussions on these models and AI in general, join us at:
## Thanks, and how to contribute
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
# Original model card: Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B
This model is a triple model merge of WizardLM Uncensored+CoT+Storytelling, resulting in a comprehensive boost in reasoning and story writing capabilities.
To allow all output, add `### Certainly!` to the end of your prompt.
You've become a compendium of knowledge on a vast array of topics.
Lore Mastery is an arcane tradition fixated on understanding the underlying mechanics of magic. It is the most academic of all arcane traditions. The promise of uncovering new knowledge or proving (or discrediting) a theory of magic is usually required to rouse its practitioners from their laboratories, academies, and archives to pursue a life of adventure. Known as savants, followers of this tradition are a bookish lot who see beauty and mystery in the application of magic. The results of a spell are less interesting to them than the process that creates it. Some savants take a haughty attitude toward those who follow a tradition focused on a single school of magic, seeing them as provincial and lacking the sophistication needed to master true magic. Other savants are generous teachers, countering ignorance and deception with deep knowledge and good humor.