How to run model.generate on batched data

#40
by Popandos

Hi!
I'm trying to run batched inference with the model, but every text prompt ends up referring to the first image.
My code:
input_text = []
images = []
stp = 5  # batch_size
for i in range(j, j + stp):
    try_prompt = create_prompt(df.iloc[i])  # text prompt containing {"type": "image"}
    input_text.append(processor.apply_chat_template(try_prompt, add_generation_prompt=True))
    images.append(return_image(df.iloc[i]))
inputs = processor(images=images, text=input_text, return_tensors="pt", padding=True).to(model.device)
output = model.generate(**inputs, max_new_tokens=10, temperature=0.2, do_sample=True, pad_token_id=processor.tokenizer.pad_token_id)

the answer is to wrap each image in its own list:
images.append([return_image(df.iloc[i])])
the processor expects a nested list of images shaped [batch_size, images_per_prompt, ...], so each prompt needs its own image list; a flat list is not split across the batch.
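
For reference, here is the corrected loop as a minimal sketch. It assumes create_prompt and return_image behave as in the original post; the padding note and the decode step at the end are added assumptions about how the output is consumed, not part of the original answer.

# Batched multimodal generation: one list of images per prompt, so the
# processor receives images shaped [batch_size, images_per_prompt, ...].
input_text = []
images = []
stp = 5  # batch size
for i in range(j, j + stp):
    try_prompt = create_prompt(df.iloc[i])  # chat messages containing {"type": "image"}
    input_text.append(processor.apply_chat_template(try_prompt, add_generation_prompt=True))
    images.append([return_image(df.iloc[i])])  # inner list: this prompt's images

# Note: some decoder-only models need left padding for batched generation;
# if so, set processor.tokenizer.padding_side = "left" before this call.
inputs = processor(images=images, text=input_text, return_tensors="pt", padding=True).to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=10,
    temperature=0.2,
    do_sample=True,
    pad_token_id=processor.tokenizer.pad_token_id,
)

# Drop the prompt tokens so only the newly generated text is decoded.
generated = output[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True))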
