Hardware for this model?
#3
by pranavp · opened
When I try running this model on an A10G GPU machine, it ends up crashing the machine. Does anyone have any hardware recommendations? I'm running with `pipe.enable_xformers_memory_efficient_attention()` and `pipe.enable_model_cpu_offload()`.
Inference works fine for me on a simple T4. I'm not using either of the two functions you listed.
I just tried the code snippet on a V100 and couldn't reproduce the problem.
Please post a reproducible code snippet and the error logs.
williamberman changed discussion status to closed