RTX 4090 Windows system with 96GB of RAM - I cannot get this model to load in OobaBooga - ERROR

#1
by cleverest - opened

I get this error. What is the fix for this?

[screenshot of the error attached]

Check out this Reddit comment; it might be related: https://reddit.com/r/LocalLLaMA/comments/13op1sd/_/jl6adm9/?context=1

Thanks! It turns out the entry for this model didn't exist in config-user.yaml at all.

I added the following entry (note the indented keys), and the model loads now:

benjicolby_WizardLM-30B-Guanaco-SuperCOT-GPTQ-4bit$:
  auto_devices: false
  bf16: false
  cpu: false
  cpu_memory: 0
  disk: false
  gpu_memory_0: 0
  groupsize: 'None'
  load_in_8bit: false
  mlock: false
  model_type: llama
  n_batch: 512
  n_gpu_layers: 0
  no_mmap: false
  pre_layer: 0
  threads: 0
  wbits: 4
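
If you want to double-check that the entry you pasted actually parses and carries the GPTQ fields the loader cares about, here is a minimal sketch (not part of the webui itself). It assumes config-user.yaml is in the current directory; adjust the path to wherever your install keeps it (some versions place it under models/).

import yaml  # pip install pyyaml
from pathlib import Path

CONFIG_PATH = Path("config-user.yaml")  # assumed location; may differ in your install
MODEL_KEY = "benjicolby_WizardLM-30B-Guanaco-SuperCOT-GPTQ-4bit$"

config = yaml.safe_load(CONFIG_PATH.read_text())
entry = config.get(MODEL_KEY, {})

# The fields that matter most for loading a 4-bit GPTQ LLaMA model
for field, expected in [("wbits", 4), ("groupsize", "None"), ("model_type", "llama")]:
    print(f"{field}: {entry.get(field)!r} (expected {expected!r})")

If yaml.safe_load raises an error or the entry comes back empty, the indentation is probably off: the per-model options must be nested under the model key, as shown above.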
