
Star-Command-R-32B-v1 - EXL2 4.5bpw

This is a 4.5bpw EXL2 quant of TheDrummer/Star-Command-R-32B-v1

This quant was made using exllamav2-0.2.0 with the default calibration dataset.

I briefly tested this quant in a few random RPs (including 8k+ context RPs that require remembering and understanding specific facts from the context), and it seems to work well. In these short tests it appeared better than a GGUF quant of similar size (Q4_K_M).

This quant fits nicely in 24GB VRAM, especially with Q4 cache.
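The back-of-the-envelope arithmetic behind the 24GB claim can be sketched as follows. This is a rough estimate assuming ~32 billion weight parameters; it ignores embedding-table overhead, activations, and the KV cache itself (which the Q4 cache option shrinks by quantizing cached keys/values to 4 bits):

```python
# Rough VRAM estimate for a 32B model quantized to 4.5 bits per weight.
# Assumption: ~32e9 parameters in the weights; real usage will differ somewhat.
params = 32e9
bits_per_weight = 4.5

weight_bytes = params * bits_per_weight / 8      # bits -> bytes
weight_gib = weight_bytes / 2**30                # bytes -> GiB

headroom_gib = 24 - weight_gib                   # left for KV cache + activations
print(f"weights: ~{weight_gib:.1f} GiB, headroom in 24 GiB: ~{headroom_gib:.1f} GiB")
```

Roughly 17 GiB goes to weights, leaving about 7 GiB for context, which is why a quantized (Q4) cache helps when pushing longer contexts.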

Prompt Templates

Uses Command-R format.
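For text-completion frontends that don't apply a chat template automatically, the Command-R format can be assembled by hand. A minimal sketch, using the special-token strings from Cohere's published Command-R tokenizer (verify against the `tokenizer_config.json` shipped with this quant before relying on them):

```python
# Minimal Command-R prompt builder. The special-token strings below are
# Cohere's Command-R turn tokens; double-check them against the model's
# bundled tokenizer_config.json.
def build_command_r_prompt(system: str, user: str) -> str:
    return (
        "<BOS_TOKEN>"
        "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>" + system + "<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>" + user + "<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )

prompt = build_command_r_prompt("You are a roleplay assistant.", "Hello!")
```

The trailing `<|CHATBOT_TOKEN|>` leaves the prompt open for the model to write the assistant turn; multi-turn histories repeat the user/chatbot turn pairs before it.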

Original readme below


Join our Discord! https://discord.gg/Nbv9pQ88Xb


BeaverAI proudly presents...

Star Command R 32B v1 🌟

An RP finetune of Command-R-08-2024



Usage

  • Cohere Instruct format or Text Completion

Special Thanks

  • Mr. Gargle for the GPUs! Love you, brotha.
