Base model:
- google/paligemma-3b-pt-224
Dataset:
- HuggingFaceM4/VQAv2
Getting started:
from peft import PeftConfig, PeftModel
from transformers import PaliGemmaForConditionalGeneration

# Load the LoRA adapter configuration
config = PeftConfig.from_pretrained("ayoubkirouane/PaliGemma-VQAv2-Lora-finetuned")
# PaliGemma is a vision-language model, so the base checkpoint is loaded with
# PaliGemmaForConditionalGeneration rather than AutoModelForCausalLM
base_model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")
# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(base_model, "ayoubkirouane/PaliGemma-VQAv2-Lora-finetuned")
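To run a question against an image with the loaded adapter, inputs go through the PaliGemma processor. The snippet below is a minimal sketch: the image path, the question, and the "answer en" prompt prefix are illustrative assumptions, not something specified by this card.

from PIL import Image
from transformers import AutoProcessor

# The processor for the base checkpoint handles image preprocessing and tokenization
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

# Placeholder image and question -- replace with your own data
image = Image.open("example.jpg").convert("RGB")
prompt = "answer en What is shown in the image?"  # "answer en" is the usual PaliGemma VQA prefix (assumed here)

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
# Strip the prompt tokens before decoding so only the generated answer is printed
answer = processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)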