Out of resource: shared memory

#16 · opened by iszhaoxin

I got the following error message:

'triton.runtime.autotuner.OutOfResources: out of resource: shared memory, Required: 135200, Hardware limit: 101376. Reducing block sizes or num_stages may help.'

I tried it on both an RTX A6000 and an RTX 6000.
I guess it may be because the model is only trained and tested on specific types of GPUs, such as the A100?

Yes, in my experience as well, this model only works well on the GPUs listed as 'tested' in the documentation.

Microsoft org

The recommended target modules to adapt are:

"target_modules": [
"o_proj",
"qkv_proj"
]

@LeeStott how can this be achieved?
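
For reference, here is a minimal sketch of how those target modules might be wired into a LoRA setup, assuming the recommendation refers to fine-tuning with the peft library; the model id and the LoRA hyperparameters below are illustrative placeholders, not values stated in this thread.

```python
# Minimal sketch, assuming a PEFT/LoRA fine-tuning setup.
# Model id and LoRA hyperparameters are illustrative, not from this thread.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (illustrative model id).
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,
)

# LoRA config using the recommended target modules.
lora_config = LoraConfig(
    r=16,              # illustrative rank
    lora_alpha=32,     # illustrative scaling factor
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["o_proj", "qkv_proj"],
)

# Wrap the base model with LoRA adapters on those modules.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```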
