Will it run on a 4090?
#1 opened by brifl
I ran this successfully on a home PC with a GeForce RTX 4090, under oobabooga's text-generation-webui. I couldn't build AutoGPTQ from source, but updating transformers worked instead. I also had to select the AutoGPTQ loader in oobabooga.
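For anyone hitting the same build failure, here is a rough sketch of the workaround described above. The exact commands and versions are my assumption, not something verified in this thread; recent transformers releases load GPTQ checkpoints through optimum, which sidesteps compiling AutoGPTQ yourself:

```shell
# Update transformers (and optimum, which it uses for GPTQ loading)
# instead of building AutoGPTQ from source.
pip install --upgrade transformers optimum

# Optional: a prebuilt auto-gptq wheel, if one exists for your
# Python/CUDA combination, also avoids the source build.
pip install auto-gptq
```

After installing, restart the web UI and pick the AutoGPTQ loader when loading the model.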
It runs slowly, but when compared side by side it answered some complex questions at a level comparable to GPT-4. Things are getting interesting in the open-source LLM world!