This model breaks the laws of physics.
I have no clue how to explain this situation:
This model is supposed to be 34B, so it should take a lot of VRAM and leave very little memory for context. Yet somehow it manages to fit 16k tokens of context into 24 GB of VRAM, when even 20B models will only fit 8k,
and it can in fact recall what was said at the very beginning of the text when asked about it.
This is happening on the newest version of exllamav2, probably with flash attention, but other models still can't match this absurd context size.
@Onix22 The reason is that the CodeLlama models, the 70B Llama 2 model, the Mistral models, and the Mixtral models have GQA (grouped-query attention) instead of standard multi-head attention. GQA shares key/value heads across groups of query heads, so the context (KV cache) takes much less VRAM per token, and that's why this is happening.
The 13B and 7B Llama 2 models and the Llama 1 models do not have GQA. 20B models are usually merges of 13B models, so they will not have GQA either.
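If it helps to see the arithmetic, here's a rough sketch of the KV-cache sizes. The layer and head counts are the published configs for Llama 2 13B and CodeLlama 34B; the byte counts assume a plain FP16 cache, so treat the numbers as ballpark figures (exllamav2 can also quantize the cache, which shrinks both further).

```python
# Rough KV-cache size comparison: MHA (Llama 2 13B) vs GQA (CodeLlama 34B).
# Assumes an FP16 cache (2 bytes per element); real setups may differ.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   n_tokens: int, bytes_per_elem: int = 2) -> int:
    """Keys + values stored for every layer, KV head, and token."""
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elem

GIB = 1024 ** 3
ctx = 16_384  # 16k tokens of context

# Llama 2 13B: plain multi-head attention -> 40 KV heads (one per query head).
mha_13b = kv_cache_bytes(n_layers=40, n_kv_heads=40, head_dim=128, n_tokens=ctx)

# CodeLlama 34B: grouped-query attention -> only 8 KV heads shared by 64 query heads.
gqa_34b = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128, n_tokens=ctx)

print(f"13B (MHA) cache @ 16k: {mha_13b / GIB:.1f} GiB")  # ~12.5 GiB
print(f"34B (GQA) cache @ 16k: {gqa_34b / GIB:.1f} GiB")  # ~3.0 GiB
```

So even though the 34B weights themselves take more room, each token of context costs it roughly a quarter of what a 13B MHA model pays, which is about the ratio you're seeing.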
This was useful information; it should be mentioned somewhere more visible.
Because of this, a 34B model managed to fit more context than a 13B model.
Sadly, there are no other models of the same size to use.