
Usage:

```python
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Load the processor and model from the Hub
processor = BlipProcessor.from_pretrained("prasanna2003/blip-image-captioning")
if processor.tokenizer.eos_token is None:
    processor.tokenizer.eos_token = '<|eos|>'
model = BlipForConditionalGeneration.from_pretrained("prasanna2003/blip-image-captioning")

# Open an image (replace 'file_name.jpg' with your own image path)
image = Image.open('file_name.jpg').convert('RGB')

# The model expects this instruction-style prompt format
prompt = """Instruction: Generate a single line caption of the Image.
output: """

# Preprocess the image and prompt, then generate a caption
inputs = processor(image, prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=100)
print(processor.tokenizer.decode(output[0], skip_special_tokens=True))
```
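Because the prompt is passed as conditioning text, the decoded string typically echoes the instruction and `output:` marker before the caption. A small post-processing helper (a sketch, not part of the model's API; the `<|eos|>` token string is the one assumed above) can extract just the caption:

```python
def extract_caption(decoded: str, marker: str = "output:") -> str:
    """Return the text after the prompt's output marker, with stray
    special tokens removed and whitespace trimmed."""
    text = decoded.split(marker, 1)[-1]
    # Defensively drop special tokens that may survive decoding
    for tok in ("<|eos|>", "[SEP]", "[CLS]"):
        text = text.replace(tok, "")
    return text.strip()

print(extract_caption(
    "Instruction: Generate a single line caption of the Image.\noutput: a dog on a beach<|eos|>"
))
# → a dog on a beach
```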