This model has been pushed to the Hub using the PyTorchModelHubMixin integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
About the project
This is the decoder of an image captioning model. The input image is first preprocessed and resized to (224, 224), then passed through ViT_b_32 (with its classification layer removed), which outputs a (N, 768) feature tensor. This feature is repeated 32 (max_length) times and fed as the keys and values (K, V) to the cross multi-head attention block in the decoder. The model was trained on the Microsoft COCO 2017 dataset and achieved a masked_accuracy of 0.54 on the validation set.
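The encoder-side flow described above can be sketched as follows. This is a minimal illustration, assuming torchvision's ViT-B/32 with the classification head replaced by an identity; the choice of pretrained weights and max_length = 32 are taken from the description, not from this repo's code.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_32, ViT_B_32_Weights

weights = ViT_B_32_Weights.IMAGENET1K_V1
encoder = vit_b_32(weights=weights)
encoder.heads = nn.Identity()   # drop the classification layer -> outputs (N, 768)
encoder.eval()

preprocess = weights.transforms()  # resizes/crops to (224, 224) and normalizes
max_length = 32

with torch.no_grad():
    # stand-in for a real image: a (3, H, W) uint8 tensor
    raw = torch.randint(0, 256, (3, 300, 400), dtype=torch.uint8)
    images = preprocess(raw).unsqueeze(0)                     # (1, 3, 224, 224)
    features = encoder(images)                                # (N, 768)
    memory = features.unsqueeze(1).repeat(1, max_length, 1)   # (N, 32, 768)
    # `memory` is what the decoder consumes as K and V in cross-attention.
```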
Sample Code
To use this model, first download ViT_b_32, which serves as the encoder, and then download the decoder from this repo, as in the sketch below.
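Since the model was pushed with PyTorchModelHubMixin, the decoder weights can be pulled from the Hub with `from_pretrained`. The class definition and repo id below are placeholders for illustration; use the actual decoder class and repo id from this repository.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class CaptionDecoder(nn.Module, PyTorchModelHubMixin):
    # Placeholder architecture: the real decoder in this repo may differ.
    def __init__(self, vocab_size: int = 10000, d_model: int = 768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, memory):
        x = self.embed(tokens)                     # (N, T, 768)
        x, _ = self.cross_attn(x, memory, memory)  # K, V come from the encoder features
        return self.out(x)                         # (N, T, vocab_size)

# Download the trained decoder weights (repo id is a placeholder):
decoder = CaptionDecoder.from_pretrained("<user>/<this-repo>")
```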