Error quantizing: b'/bin/sh: 1: ./llama.cpp/llama-quantize: not found\n'

#136
by win10 - opened

Error quantizing: b'/bin/sh: 1: ./llama.cpp/llama-quantize: not found\n'
I'm using my merged model:
win10/Llama-3.1-13.3B-ArliAI-RPMax-v1.3
but I get this error.

Same here; it doesn't seem to be working at the moment.

Same with the Qwen2.5 model.

Same with my private model; I can't quantize it to Q8.

ggml.ai org

I tried Qwen2.5 Coder 7B and it works normally.

I think there was a temporary problem with the llama-quantize binary yesterday. The space has been restarted recently, so it should be fine now.
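In case the space is still flaky, a rough local workaround is to run llama-quantize yourself instead of going through the space. This is just a sketch assuming a standard CMake build of llama.cpp; the model path and output file names below are placeholders, not the exact repo from this thread:

```sh
# Build llama.cpp locally (the quantize binary ends up in build/bin/)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Convert the HF model to GGUF (f16), then quantize to Q8_0
pip install -r requirements.txt
python convert_hf_to_gguf.py /path/to/your-model --outtype f16 --outfile model-f16.gguf
./build/bin/llama-quantize model-f16.gguf model-Q8_0.gguf Q8_0
```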

Hmm, I just tried it a minute ago and it failed again. I'll try again later.

It's working now.

ngxson changed discussion status to closed
