Few-shot in-context learning

#72
by andrewliao11-nv

Has anyone had success performing few-shot in-context learning with idefics2-8b?

I have experimented with the following setting:

Task: binary image classification
Domain: iNaturalist subset

Prompt example

You are tasked to answer the user question based on the image. Here are 2 demonstrations to help you answer the user question:

[image] The above image is a cat
[image] The above image is an elephant, not a cat

---

[image] User question: Does the above image contain a cat?
Answer with a single word. Yes or No.
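
Roughly, this is how the prompt was fed to the model (a sketch, not my exact script; the image paths and generation settings are placeholders):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b", torch_dtype=torch.float16, device_map="auto"
)

# Placeholder demonstration/query images; each [image] slot in the prompt
# above maps to an <image> token for the processor.
cat_img = Image.open("cat.jpg")
elephant_img = Image.open("elephant.jpg")
query_img = Image.open("query.jpg")

prompt = (
    "You are tasked to answer the user question based on the image. "
    "Here are 2 demonstrations to help you answer the user question:\n\n"
    "<image>The above image is a cat\n"
    "<image>The above image is an elephant, not a cat\n\n"
    "---\n\n"
    "<image>User question: Does the above image contain a cat?\n"
    "Answer with a single word. Yes or No."
)
inputs = processor(text=prompt, images=[cat_img, elephant_img, query_img],
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=5)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```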

The experiments show that 0-shot performance (>90) is much better than 2-, 4-, and 8-shot performance (max around 82).

That's expected if you're using the SFT one (idefics2-8b), since it was fine-tuned only on Q/A pairs (in a 0-shot setting).
The more turns and images you add, the worse it gets.

Also, I think your template is not ideal for this, and you can boost performance with the official chat template (see the model card).
By doing the ICL with multiple turns using the official chat template, I expect you can improve on the performance you're getting currently.
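
Something along these lines (a sketch; each demonstration becomes a full user/assistant turn, and the image variables are placeholders):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b", torch_dtype=torch.float16, device_map="auto"
)

cat_img = Image.open("cat.jpg")        # placeholder demo/query images
elephant_img = Image.open("elephant.jpg")
query_img = Image.open("query.jpg")

question = "Does the above image contain a cat? Answer with a single word. Yes or No."
# Each demonstration is a full user/assistant turn; the query is the final user turn.
messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": question}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Yes"}]},
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": question}]},
    {"role": "assistant", "content": [{"type": "text", "text": "No"}]},
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": question}]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[cat_img, elephant_img, query_img],
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=5)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```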

I couldn't seem to find the official chat template in the tokenizer config.

I did my best to re-create what was detailed in the model card and added the chat_template to the tokenizer_config.json in this PR: https://huggingface.co/HuggingFaceM4/idefics2-8b/discussions/74
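
Until that PR is merged, the template can be pulled straight from the PR branch via the Hub's `refs/pr/<n>` revision convention (a sketch):

```python
from transformers import AutoProcessor

# Load the processor from the pull-request revision that adds chat_template
# to tokenizer_config.json (refs/pr/74 corresponds to discussion #74).
processor = AutoProcessor.from_pretrained(
    "HuggingFaceM4/idefics2-8b", revision="refs/pr/74"
)
print(processor.tokenizer.chat_template)  # should now be populated
```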

You should use idefics2-8b-base instead of idefics2-8b; the base model was pretrained on interleaved image-text documents rather than single-turn Q/A, so it is better suited to few-shot prompting.
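
With the base checkpoint there is no chat template; the few-shot prompt is plain interleaved text with `<image>` tokens, completion-style. A minimal sketch (image variables are placeholders):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b-base")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b-base", torch_dtype=torch.float16, device_map="auto"
)

cat_img = Image.open("cat.jpg")        # placeholder demo/query images
elephant_img = Image.open("elephant.jpg")
query_img = Image.open("query.jpg")

# Completion-style few-shot prompt: the model should continue with "Yes" or "No".
prompt = (
    "<image>Does this image contain a cat? Yes\n"
    "<image>Does this image contain a cat? No\n"
    "<image>Does this image contain a cat?"
)
inputs = processor(text=prompt, images=[cat_img, elephant_img, query_img],
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=3)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```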
