A Persian image-captioning model built from a ViT encoder + RoBERTa decoder architecture and trained on the flickr30k-fa dataset (created by Sajjad Ayoubi). The encoder (ViT) was initialized from https://huggingface.co/google/vit-base-patch16-224 and the decoder (RoBERTa) from https://huggingface.co/HooshvareLab/roberta-fa-zwnj-base.
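For readers curious how a ViT encoder and a RoBERTa decoder fit together, the sketch below wires the same encoder-decoder shape using the `transformers` `VisionEncoderDecoderModel` class. This is an illustrative assumption, not the model's actual build script: the released checkpoint was initialized from the pretrained models linked above, while this sketch uses tiny randomly initialized configs so it runs quickly offline.

```python
import torch
from transformers import (
    ViTConfig,
    RobertaConfig,
    VisionEncoderDecoderConfig,
    VisionEncoderDecoderModel,
)

# Tiny random configs that mirror the architecture only: a ViT image encoder
# feeding a RoBERTa decoder with cross-attention. The real model uses the
# full-size pretrained checkpoints named in the card.
encoder_cfg = ViTConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=128, image_size=224, patch_size=16,
)
decoder_cfg = RobertaConfig(
    hidden_size=64, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=128, vocab_size=1000,
    is_decoder=True, add_cross_attention=True,
)

config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(encoder_cfg, decoder_cfg)
config.decoder_start_token_id = decoder_cfg.bos_token_id
config.pad_token_id = decoder_cfg.pad_token_id
model = VisionEncoderDecoderModel(config=config)

# One fake RGB image and a fake 8-token caption, just to exercise the forward pass.
pixel_values = torch.randn(1, 3, 224, 224)
labels = torch.randint(0, 1000, (1, 8))
out = model(pixel_values=pixel_values, labels=labels)
print(out.logits.shape)  # logits over the decoder vocabulary, one row per caption token
```

At inference time the decoder generates caption tokens autoregressively, attending over the ViT patch embeddings through cross-attention; Hezar wraps that whole pipeline behind `model.predict` as shown in the Usage section.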

## Usage

First install Hezar:

```shell
pip install hezar
```

Then load the model and generate captions:

```python
from hezar.models import Model

model = Model.load("hezarai/vit-roberta-fa-image-captioning-flickr30k")
captions = model.predict("example_image.jpg")
print(captions)
```
