How to do batch inference using SmolVLM-Instruct?

#19
by dutta18 - opened

I have a VQA dataset and want to run batch inference to evaluate this model's performance.

How do I organize the chat template?

Is it something like below?

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "what is the color of the sky"},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "what is the color of the grass"},
        ],
    },
]
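For reference, one common way to batch independent VQA samples is to give each sample its own single-turn conversation and batch the conversations together, rather than stacking them as extra "user" turns inside one conversation. Below is a minimal, hedged sketch of that layout; the helper names `build_batch` / `run_batch` and the `max_new_tokens` value are my own choices, not from the model card, and the heavy `transformers` calls are isolated in `run_batch` so the structure-building part stands on its own.

```python
# Sketch: each sample gets its OWN conversation (a list of messages),
# and the conversations are batched as a list of lists.

def build_batch(questions):
    """Build one single-turn conversation per question."""
    return [
        [
            {
                "role": "user",
                "content": [
                    {"type": "image"},
                    {"type": "text", "text": q},
                ],
            }
        ]
        for q in questions
    ]


def run_batch(model_id, images, questions, device="cpu"):
    # Imports kept inside the function so build_batch above is usable
    # without transformers/torch installed.
    from transformers import AutoProcessor, AutoModelForVision2Seq

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForVision2Seq.from_pretrained(model_id).to(device)

    conversations = build_batch(questions)
    # One prompt string per conversation.
    prompts = [
        processor.apply_chat_template(conv, add_generation_prompt=True)
        for conv in conversations
    ]
    # One list of images per sample; padding aligns the batch.
    inputs = processor(
        text=prompts,
        images=[[img] for img in images],
        return_tensors="pt",
        padding=True,
    ).to(device)
    generated = model.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(generated, skip_special_tokens=True)
```

With this layout, `build_batch(["what is the color of the sky", "what is the color of the grass"])` yields two separate conversations, each with its own image placeholder, which is what per-sample prompts require.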
