4090 system, i9-13900K, 96 GB RAM - I could not launch this model (trying the q4_0 GGML one)

by cleverest - opened

My batch is:
koboldcpp.exe E:\text-generation-webui-models\VicUnlocked-alpaca-65b-GGML\VicUnlocked-alpaca-65b-q4_0-ggml.bin --unbantokens --smartcontext --psutil_set_threads --useclblast 0 0 --stream --gpulayers 32
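For context, here is my understanding of what each of those flags does (based on koboldcpp's documented options - treat the descriptions as approximate, not authoritative):

REM --unbantokens        : unban the EOS (and other normally banned) tokens so generations can end naturally
REM --smartcontext       : reserve part of the context window to avoid reprocessing the full prompt every turn
REM --psutil_set_threads : let psutil choose the thread count based on physical cores
REM --useclblast 0 0     : enable CLBlast GPU acceleration on OpenCL platform 0, device 0
REM --stream             : stream tokens to the UI as they are generated
REM --gpulayers 32       : offload 32 of the model's layers to the GPU (the 4090 here)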

I can run some other 65B models, but this one tells me:

[screenshot of the error message]

Any ideas? Also, how can I further optimize performance with my command line (if something needs to be added or changed, etc.)?

Thank you.

Do you have the newest llama.cpp version?

Good call - I was using koboldcpp v1.23.3, but I updated it to v1.26, tried again, and it worked! Thanks!
