How to load command r+ in text-generation-webui?

#1 opened by MLDataScientist

Thank you for converting this model to GPTQ.
If text-generation-webui does not support this model yet, can you please share a script for loading the model for inference with partial offloading to CPU RAM? I have 36 GB of VRAM and 96 GB of RAM.

Thanks!

By the way, these are the errors I get with different loaders:

AutoGPTQ_loader: TypeError: cohere isn't supported yet.

Transformers (with auto-devices): CUDA out of memory. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
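
For reference, the allocator flag mentioned in that error only takes effect if it is set before PyTorch initializes CUDA. A minimal sketch, assuming you control the launching process (otherwise export it in the shell before starting the webui):

```python
import os

# Must be set before torch first touches CUDA, or it has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported after the env var is set on purpose

print(torch.cuda.is_available())
```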

@alpindale let me know. Thanks!

Hi! Sorry for the delayed response. This was quantized with a custom script. Inference is currently possible with Aphrodite Engine's dev branch, but no CPU offloading is supported yet. You can find the quantization script here.

To run it in text-generation-webui, install the latest transformers package and choose the Transformers loader in the webui.
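
For the partial CPU offload asked about above, Transformers can cap per-device memory so the remaining weights spill into CPU RAM. A minimal sketch, assuming a transformers version with Cohere support and using the base model id as a placeholder (substitute the actual GPTQ repo id); the memory caps are assumptions sized to leave headroom under 36 GB VRAM and 96 GB RAM:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-plus"  # placeholder; use the GPTQ repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                          # let accelerate split layers across GPU and CPU
    max_memory={0: "34GiB", "cpu": "90GiB"},    # caps are assumptions; tune to your hardware
    torch_dtype="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Capping GPU 0 below its full 36 GB leaves room for activations and the CUDA context, which is usually what causes the "auto-devices" OOM reported above.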

Thanks for the great work! This revision on the dev branch of Aphrodite works for Command R Plus: 95faf27d2b39eb34ed59edadcfe24121412decaa

I ran it on an A100.
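
For anyone trying the same route, here is a minimal offline-inference sketch. It assumes Aphrodite Engine mirrors vLLM's Python API (from aphrodite import LLM), which its README shows, and that the dev branch at that commit accepts a GPTQ quantization argument; the repo id is a placeholder:

```python
from aphrodite import LLM, SamplingParams

# Placeholder repo id; substitute this repo's actual id.
llm = LLM(model="alpindale/c4ai-command-r-plus-GPTQ", quantization="gptq")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Write a haiku about GPUs."], params)
print(outputs[0].outputs[0].text)
```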
