
An experimental model, fine-tuned on command-r-v01 using the "multiplicative-LoRA" method.

NOTE: You MUST use a small min-p value (e.g. 0.01) with this model, as with the original command-r-v01, or else it will output gibberish!
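For reference, min-p sampling keeps only tokens whose probability is at least `min_p` times the probability of the most likely token, which cuts off the long low-probability tail that can produce gibberish. The following is a minimal, self-contained sketch of that filtering step in NumPy (the function name and shapes are illustrative, not part of any inference library):

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.01) -> np.ndarray:
    """Mask out logits whose softmax probability falls below
    min_p * (probability of the most likely token).

    Masked positions are set to -inf so they receive zero
    probability after a subsequent softmax/sampling step.
    """
    # Numerically stable softmax over the vocabulary dimension.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Tokens below this dynamic threshold are discarded.
    threshold = min_p * probs.max()
    return np.where(probs >= threshold, logits, -np.inf)
```

In practice you would not implement this yourself; most inference backends (e.g. recent `transformers` generation configs, llama.cpp, and exllamav2-based frontends) expose a `min_p` sampling parameter directly, and setting it to 0.01 as recommended above is enough.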


Model tree for gghfez/jukofyork_creative-writer-35b-preview-exl2-4.5bpw
