|
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
---
|
|
|
# BLIP-2, Flan T5-xl, pre-trained only |
|
|
|
BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). |
|
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). |
|
|
|
Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
|
|
|
## Model description |
|
|
|
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. |
|
|
|
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and that of the large language model.
|
|
|
The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
|
|
|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>
|
|
|
This allows the model to be used for tasks like: |
|
|
|
- image captioning |
|
- visual question answering (VQA) |
|
- chat-like conversations, by feeding the image and the previous conversation as a prompt to the model
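
The three components described above are also reflected in the model's configuration in the Transformers library. Below is a minimal inspection sketch; it assumes this checkpoint is published as `Salesforce/blip2-flan-t5-xl` and that the sub-configurations are exposed under these attribute names in recent Transformers versions.

```python
from transformers import Blip2Config

# Download only the configuration, not the model weights.
config = Blip2Config.from_pretrained("Salesforce/blip2-flan-t5-xl")

# The three sub-configurations mirror the three components described above.
print(config.vision_config.model_type)   # CLIP-like ViT image encoder
print(config.qformer_config.model_type)  # Querying Transformer (Q-Former)
print(config.text_config.model_type)     # "t5", i.e. the Flan T5-xl language model
print(config.num_query_tokens)           # number of learned query tokens fed to the Q-Former
```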
|
|
|
## Intended uses & limitations |
|
|
|
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.
|
|
|
### How to use |
|
|
|
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). |
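
As a quick start, the sketch below shows image captioning and visual question answering with the Transformers library. It assumes this checkpoint is available as `Salesforce/blip2-flan-t5-xl` and uses an example COCO image URL purely for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")
model.to(device)

# Example COCO image (two cats on a couch), also used in the Transformers docs.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Image captioning: pass only the image, no text prompt.
inputs = processor(images=image, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())

# Visual question answering: condition generation on a "Question: ... Answer:" prompt.
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```

On a GPU, memory usage can be reduced by loading the model in half precision (for example with `torch_dtype=torch.float16`) and casting the inputs accordingly; see the linked documentation for the full set of options, including chat-style prompting where previous question/answer turns are concatenated into the prompt.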