
Input context length

#6
by dyoung - opened

Hello,

I'm just looking for confirmation that I'm understanding things correctly regarding the input context length of this model.

I'm assuming it's 4096 based on the hidden layer size in the model's config.json (https://huggingface.co/ibm-granite/granite-8b-code-instruct/blob/main/config.json), and that value has been working for me.
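For reference, a minimal sketch of pulling the relevant fields from the config (assuming the standard transformers AutoConfig API; in llama-architecture configs the trained context length is recorded in max_position_embeddings, which is a separate field from hidden_size):

```python
# Minimal sketch, assuming the transformers AutoConfig API.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-granite/granite-8b-code-instruct")

# The trained context length lives here, not in hidden_size.
print(config.max_position_embeddings)  # context length
print(config.hidden_size)              # embedding width, a different quantity
```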

The input context length isn't stated outright anywhere I'm used to seeing it (near where one would download the model, for example).

If there's something I've missed, let me know, and please cite the source so I can have a look for myself.

Thanks.

I'm finding in their paper that the 8B models were intended for a 4096 context length, half that for the 3B, and 8192 for the 20B and larger models in the Granite family. (Section 3, "Model Architecture" - https://arxiv.org/pdf/2405.04324)

IBM Granite org

Yeah, you need to look at the context length in this table; the hidden size in the config is a different thing.
(Screenshot of the model architecture table from the paper, listing each Granite Code model's context length.)

Also, the 3B and 8B can theoretically be used with unbounded input length since they use RoPE, but performance can't be guaranteed past the trained lengths of 2048 (3B) and 4096 (8B).
For the 20B and 34B, since they are trained with absolute position embeddings, you are limited to 8192 tokens.
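In practice that means checking prompt length yourself for the RoPE models. A rough sketch (the constant and the warning logic are illustrative, not part of any library API):

```python
# Rough sketch: warn when a prompt exceeds the trained context length.
# TRAINED_CONTEXT follows the paper's figure for the 8B model (4096);
# it would be 2048 for the 3B, and a hard 8192 limit for the 20B/34B.
from transformers import AutoTokenizer

TRAINED_CONTEXT = 4096

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-8b-code-instruct")
prompt = "def quicksort(arr):"
n_tokens = len(tokenizer(prompt)["input_ids"])
if n_tokens > TRAINED_CONTEXT:
    print(f"{n_tokens} tokens exceeds the trained context ({TRAINED_CONTEXT}); "
          "RoPE lets inference proceed, but quality isn't guaranteed.")
```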

Thanks for the additional points you made, and thanks for following up. I think I have what I need for now.

dyoung changed discussion status to closed
