
duloxetine v1

roleplaying finetune of kalo-team/qwen-4b-10k-WSD-CEdiff (which in turn is a distillation of qwen 1.5 32b onto qwen 1.5 4b, iirc).

i spent so long arguing with comfyui to make this image AGAIN, you better like it

support me on ko-fi!


please i need money to stay alive

"good god why would you make this"

well there are a few fun things you can do with a model like this:

  1. fast rp. FAST. SUPER FAST. INSANELY FAST. SO MANY TOKENS PER SECOND THEY WILL FILL YOUR WALLET AND WEIGH DOWN YOUR POCKETS
  2. local inference on low-end devices (like <8gb vram graphics cards and mobile devices) with higher quality than larger models
  3. it's fun meanie >:(

quants

gguf: https://huggingface.co/Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix (thanks @Lewdiculous!)
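
if you want to try the gguf locally, something like the sketch below should work with llama-cpp-python. this is just an illustration, not an official recipe: the quant filename and the settings are examples, so grab whichever imatrix quant from the repo above actually fits your vram.

```python
# minimal sketch: running a gguf quant of duloxetine-4b-v1 with llama-cpp-python.
# the filename below is an example; pick a real quant from the linked repo.
from llama_cpp import Llama

llm = Llama(
    model_path="duloxetine-4b-v1-Q4_K_M-imat.gguf",  # example filename, check the repo
    n_ctx=4096,         # context window, adjust to taste
    n_gpu_layers=-1,    # offload all layers to gpu if they fit, 0 for cpu-only
    chat_format="chatml",  # the model expects chatml prompting
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay partner."},
        {"role": "user", "content": "hi! set the scene for me."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```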

prompting

just chatml this time, nothing fancy
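
for reference, standard chatml looks like the layout below (the system prompt is whatever you like; if the tokenizer config ships a chat template, transformers' tokenizer.apply_chat_template should build this for you):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```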

datasets

see the tags on the model page! :)
