Doesn't work in oobabooga

#1
by urtuuuu - opened

Can someone explain why all these 6.7b coding models don't work in oobabooga? They get loaded but won't generate a response.

are there any errors in the logs? it loads and generates fine for me, using latest release

No, it seems to work as usual but there is no output (instruct mode). I just tested codeqwen 7b-chat and it works. Somehow almost all other 6.7b coding models don't work. Who knows, maybe something is wrong with my Ooba (latest release).

@urtuuuu there's an updated version I'm making right now, I'll test it in my ooba and let you know if I have the same issues

Hi bartowski, I'm facing this also. Tried with 1.1, will test the lmstudio-community one as well. Do you know why this might be happening?

ah sorry for the delay, I think I know the issue

try setting your rope scale to 4.0, not sure why that's not being picked up by default by oobabooga or lmstudio

when I set compress_pos_emb = 4 in oobabooga the model generates flawlessly
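For reference, loaders built on llama.cpp usually expose this as rope_freq_scale, which as far as I can tell is just the reciprocal of oobabooga's compress_pos_emb. A tiny sketch of the assumed conversion (the function name here is mine, not from either project):

```python
def rope_freq_scale(compress_pos_emb: float) -> float:
    """Assumed mapping: llama.cpp's rope_freq_scale is the
    reciprocal of oobabooga's compress_pos_emb (linear rope scale)."""
    return 1.0 / compress_pos_emb

# compress_pos_emb = 4 corresponds to rope_freq_scale = 0.25
print(rope_freq_scale(4.0))
```

So if your loader asks for rope_freq_scale instead, 0.25 should be the value to try.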

@urtuuuu @nviraj ^^ forgot to tag so you see it

@bartowski thanks a ton for investigating! It works now. What made you suspect compress_pos_emb?

@nviraj the original model has its linear rope scale set to 4.0. It's super strange that the GGUF isn't encoding that setting into the model, I thought it should
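To illustrate why the missing factor matters: linear rope scaling just divides the position index by the scale factor before the rotary angles are computed, so without it every token lands at 4x the position the fine-tuned model expects. A minimal sketch of the idea (simplified, not the llama.cpp internals):

```python
import math

def rope_angles(pos, dim=8, base=10000.0, linear_scale=1.0):
    """Rotation angles for one token position under linear RoPE scaling.

    linear_scale plays the role of compress_pos_emb: the raw position
    is divided by it before the standard RoPE angle computation.
    Parameter names are illustrative.
    """
    p = pos / linear_scale
    return [p / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With scale 4, position 8 produces the same angles as raw position 2,
# which is what a model fine-tuned with linear scale 4 was trained on.
assert rope_angles(8, linear_scale=4.0) == rope_angles(2)
```

If the loader falls back to linear_scale = 1.0 because the GGUF metadata is missing, the positions are effectively 4x out of range for the model, which matches the garbage/empty output people are seeing.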
