Using KV Cache when the new input is more than one token

#2 opened by skoneru

Hello,

I am running into a problem when using the KV cache with PaliGemma models. Based on the code line here, it looks like cached generation only supports new inputs one token at a time. However, if one wants to cache the prompt for tasks such as reranking, we should be able to append new inputs of dynamic length, as is already supported for models like Llama. Is there a possibility that this may be added in the future?
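For context, this is the prompt-caching pattern I mean, sketched with a plain causal LM where multi-token continuations on top of a cache already work. The model name, prompt, and candidate strings are placeholders, and depending on the `transformers` version the cache object may be mutated in place, hence the per-candidate copy:

```python
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Query: ... Rank the following candidate answers."  # shared prompt to cache
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Run the shared prompt once and keep its KV cache.
    prompt_cache = model(prompt_ids, use_cache=True).past_key_values

candidates = ["Candidate answer A ...", "Candidate answer B ..."]  # placeholders
for cand in candidates:
    cand_ids = tokenizer(cand, return_tensors="pt", add_special_tokens=False).input_ids
    # Attention mask must cover the cached prompt tokens plus the new candidate tokens.
    attn = torch.ones(1, prompt_ids.shape[1] + cand_ids.shape[1], dtype=torch.long)
    with torch.no_grad():
        # Feed the whole multi-token candidate at once on top of the cached prompt.
        # Copy the cache first, since recent versions extend it in place.
        out = model(
            cand_ids,
            past_key_values=copy.deepcopy(prompt_cache),
            attention_mask=attn,
        )
    # out.logits scores the candidate conditioned on the cached prompt,
    # e.g. for reranking by average token log-probability.
```

Supporting the same pattern for PaliGemma would just mean letting the cached forward pass accept more than one new token at a time.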
