Can't download due to clashing model vs. weights formats (model.safetensors + pytorch_model.bin.index.json)

#1
by zackzachzaczak - opened

I have been attempting to download this and I continually get errors. When fetching it from the HF Hub via Transformers, loading fails because the repo contains a pytorch_model.bin.index.json that references .bin shards which aren't actually present:

OSError: Could not locate pytorch_model-00001-of-00007.bin inside LoneStriker/deepseek-coder-33b-instruct-8.0bpw-h8-exl2.

I haven't attempted force_download; maybe that'll be a workaround. I don't know why Transformers doesn't have the logic to handle this on its own. I'm using Transformers 4.33.3, if it matters.


This is the ExLlamaV2 quantization of the model, so you will need to use the ExLlamaV2 loader in ooba's text-generation-webui, use the raw exllamav2 module directly, or use tabbyAPI to load it. If you want to use Transformers, you will need to download the original base model, not this quant of the model.
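For the "raw exllamav2 module" route, a minimal sketch along the lines of the exllamav2 package's own examples might look like the following. This assumes the repo has already been downloaded locally (e.g. with `huggingface-cli download` or `git clone`), a CUDA GPU with enough VRAM is available, and the exllamav2 API of that era (`ExLlamaV2Config` / `ExLlamaV2BaseGenerator`); check the version you have installed, as the API has evolved.

```python
# Sketch: loading an EXL2 quant with the raw exllamav2 module.
# Assumes the quantized repo is already downloaded to model_dir.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./deepseek-coder-33b-instruct-8.0bpw-h8-exl2"  # local path (assumption)

# Read config.json and the EXL2 tensor metadata from the local directory
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

# Load the quantized weights onto the GPU
model = ExLlamaV2(config)
model.load()

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("# quicksort in Python\n", settings, num_tokens=128))
```

Note that this path never touches Transformers' `from_pretrained` sharded-checkpoint logic, which is why the stray pytorch_model.bin.index.json doesn't cause a problem here.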
