Will llama-2-70B be supported in near future?

#3 opened by SaraQX

Hi Bloke, many thanks for your generous efforts. Just wondering if Llama-2-70B would be your next focus?
The model is amazing.
Best,
Sarai

Sorry I don't follow - I've already done Llama-2-70B-GPTQ?

Do you mean in GGML? If so, I will do that as soon as llama.cpp supports them.

Hi Bloke, many thanks for this amazing work. Just wondering how to increase the context length. On Llama 2 it seems to be 4k, but in your fine-tuning it's 2048. How can I use your model with a context length of 4k? It would be great if you could help me. Many thanks.

I've just updated my config.json to match the new config.json in Meta's Llama-2-13b-HF.

When I first made the quants, I couldn't use their config.json due to an error in their files. I used my own, and it was missing the max-length: 4096 parameter. That wouldn't affect the quantisation, and to be honest it doesn't affect most clients either. But anyway, I've fixed it now and will shortly push this fix to all the branches and to my other Llama 2 repos.

Download config.json from the main branch and overwrite the one you have and try that.
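
For anyone who prefers to script that step, here's a minimal sketch using the huggingface_hub Python client. The repo id, local folder name, and the max_position_embeddings field name are my assumptions - adjust them to whichever repo and path you actually downloaded:

```python
# Sketch: fetch only config.json from the main branch and overwrite the local copy.
# Repo id, local path and the "max_position_embeddings" field are example assumptions.
import json
import shutil
from huggingface_hub import hf_hub_download

repo_id = "TheBloke/Llama-2-13B-GPTQ"    # example repo; use the one you downloaded
local_model_dir = "./Llama-2-13B-GPTQ"   # example path to your local model folder

# Download just config.json from the main branch (goes to the HF cache)
cfg_path = hf_hub_download(repo_id=repo_id, filename="config.json", revision="main")

# Overwrite the old config that was missing the 4096 context setting
shutil.copy(cfg_path, f"{local_model_dir}/config.json")

# Sanity check: the fixed config should now report the 4k context window
with open(f"{local_model_dir}/config.json") as f:
    print(json.load(f).get("max_position_embeddings"))  # expect 4096
```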

Great, thank you

> Sorry I don't follow - I've already done Llama-2-70B-GPTQ?
>
> Do you mean in GGML? If so, I will do that as soon as llama.cpp supports them.

That would be so cool. Thank you Bloke~

I'm using this 13B that you first released two days ago and it's fantastic. I made sure to back it up because the one you posted yesterday seems to respond to everything really weirdly. Swapping between them is a night and day difference.

The 13B I uploaded first was accidentally 13B Chat; it should be identical to 13B-Chat-GPTQ. So if you're using the old one, you're actually using the same files as https://huggingface.co/TheBloke/Llama-2-13B-Chat-GPTQ

The files that are here now are the correct Llama 13B, which is a non-fine-tuned base model, so yes, it will respond strangely because it's not really designed to be used for question/answer.
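
In case it helps anyone else, the difference shows up in how you prompt the two models. Below is a rough sketch - the [INST]/<<SYS>> template follows Meta's published Llama 2 chat format and is an assumption on my part, not something specific to this repo:

```python
# Sketch of the prompting difference between the base model and the chat model.
# The chat template below is Meta's published Llama 2 format (an assumption here).

question = "What is the capital of France?"

# Base model (this repo): no instruction tuning, so give it text to continue,
# e.g. a completion-style or few-shot prompt rather than a bare question.
base_prompt = f"Q: {question}\nA:"

# Chat model (Llama-2-13B-Chat-GPTQ): wrap the question in the chat template.
system = "You are a helpful assistant."
chat_prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"

print(base_prompt)
print(chat_prompt)
```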

Dude, you must have been facepalming so hard reading my comment. Of course it was the chat one - I'm so stupid I didn't even see the different name. My mistake!
Loving the work you do :)

> Sorry I don't follow - I've already done Llama-2-70B-GPTQ?
>
> Do you mean in GGML? If so, I will do that as soon as llama.cpp supports them.

I see. Many thanks! I am new to the GPTQ version. Will definitely give it a try 😄🌹
