---
library_name: transformers
tags:
- axolotl
license: other
language:
- en
datasets:
- Sao10K/Claude-3-Opus-Instruct-15K
- MinervaAI/Aesir-Preview
- abacusai/SystemChat
- HuggingFaceH4/no_robots
- grimulkan/theory-of-mind
- Fizzarolli/wattpad
---
# duloxetine v1
roleplaying finetune of kalo-team/qwen-4b-10k-WSD-CEdiff (which in turn is a distillation of qwen 1.5 32b onto qwen 1.5 4b, iirc).
## support me on ko-fi!
please i need money to stay alive
"good god why would you make this"
well there are a few fun things you can do with a model like this:
- fast rp. FAST. SUPER FAST. INSANELY FAST. SO MANY TOKENS PER SECOND THEY WILL FILL YOUR WALLET AND WEIGH DOWN YOUR POCKETS
- local inference on low-end devices (think <8 GB VRAM graphics cards and mobile devices), with higher quality than you'd get trying to squeeze larger models onto the same hardware (see the sketches below and under quants)
- it's fun meanie >:(
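if you just want to poke at it quickly, here's a rough transformers sketch. the repo id is an assumption on my part (grab the real one from this page's URL), and the sampling settings are just placeholders:

```python
# rough sketch, not gospel: swap in the actual repo id from this model page
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fizzarolli/duloxetine-4b-v1"  # assumed repo id, double-check it

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs accelerate installed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a roleplay partner."},
    {"role": "user", "content": "hi! set the scene for me"},
]
# the tokenizer should ship a chatml chat template, so this adds the tags for you
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```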
## quants
gguf: https://huggingface.co/Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix (thanks @Lewdiculous!)
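if you're on a low-VRAM card, those GGUF quants plus llama-cpp-python are probably the easiest route. another rough sketch; the filename is made up, use whichever quant you actually downloaded:

```python
# rough sketch: model_path depends on which quant you grabbed from the repo above
from llama_cpp import Llama

llm = Llama(
    model_path="duloxetine-4b-v1-IQ4_XS-imat.gguf",  # hypothetical filename
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload everything that fits; lower this on tiny cards
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay partner."},
        {"role": "user", "content": "hi!"},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```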
## prompting
just chatml this time, nothing fancy
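for reference, a bare chatml prompt looks like this (the system turn is optional):

```
<|im_start|>system
You are a roleplay partner.<|im_end|>
<|im_start|>user
hi!<|im_end|>
<|im_start|>assistant
```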
## datasets
see tags! :)