---
language:
- en
- multilingual
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
datasets:
- Gregor/mblip-train
---
# mBLIP BLOOMZ-7B
This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930).
## Model description
mBLIP is a BLIP-2 model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former) and a large language model (LLM).
The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint (blip2-flan-t5-xl) and then re-aligned to the multilingual LLM (bloomz-7b1) using a multilingual task mixture.
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
in 96 languages.
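
If you want to inspect this composition yourself, the three parts are exposed as sub-modules of the Transformers `Blip2ForConditionalGeneration` class. A small sketch (attribute names follow the Transformers BLIP-2 implementation; parameter counts are computed at runtime, not hard-coded):

```python
from transformers import Blip2ForConditionalGeneration

model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b")

# The three sub-models of the BLIP-2 architecture.
for name, module in [
    ("ViT", model.vision_model),
    ("Q-Former", model.qformer),
    ("LLM", model.language_model),
]:
    n_params = sum(p.numel() for p in module.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```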
### Languages
mBLIP was trained on the following 96 languages:
af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup, or alternatively finetune it for downstream applications. For finetuning, we strongly recommend applying LoRA to the LLM and using bf16 as the data type; standard fp16 can cause NaN losses.
See our repository for the code used to train and finetune this model.
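
For illustration, below is a minimal finetuning sketch that assumes the `peft` library; it is not the exact recipe from our repository, and the LoRA hyperparameters are placeholders. Targeting the `query_key_value` projections should apply LoRA only to the BLOOMZ LLM, since the ViT and Q-Former use different attention module names:

```python
import torch
from transformers import Blip2ForConditionalGeneration
from peft import LoraConfig, get_peft_model  # assumes peft is installed

# Load in bf16 (not fp16) to avoid NaN losses during finetuning.
model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16
)

# Illustrative LoRA settings; tune rank/alpha/dropout for your task.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["query_key_value"],  # matches only the BLOOM attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters should be trainable
```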
When using batched input, use left padding!
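For example (a sketch; `image1`, `prompt1`, etc. are placeholder variables):

```python
processor.tokenizer.padding_side = "left"  # pad batched prompts on the left

inputs = processor(
    images=[image1, image2], text=[prompt1, prompt2], padding=True, return_tensors="pt"
)
out = model.generate(**inputs)
print(processor.batch_decode(out, skip_special_tokens=True))
```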
## Bias, Risks, Limitations, and Ethical Considerations
While mBLIP can in theory work with up to 100 languages, in practice we expect the best results when prompting in high-resource languages such as English, German, or Spanish.

mBLIP inherits the risks, limitations, and biases of the models used to initialize it. mBLIP has not been tested in real-world applications and should not be directly deployed in any application. Researchers should first carefully assess its safety and fairness in the specific context in which it would be deployed.
## How to use
For code examples, we refer to the BLIP-2 documentation.
### Running the model on CPU
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
### Running the model on GPU
#### In full precision
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
#### In half precision (bfloat16)
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
#### In 8-bit precision (int8)
**Important:** The paper results use int8 only for the LLM weights, while this code loads all weights in int8. We find that this gives slightly worse results, but loading only some model parts in int8 is currently not supported by HuggingFace.
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", load_in_8bit=True, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
#### In 4-bit precision (int4)
**Important:** The paper results use int4 only for the LLM weights, while this code loads all weights in int4. We find that this gives slightly worse results, but loading only some model parts in int4 is currently not supported by HuggingFace.
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-bloomz-7b",
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    device_map="auto",
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
## Citation
If you use our model, please cite the following:
```bibtex
@article{geigle2023mblip,
  author     = {Gregor Geigle and
                Abhay Jain and
                Radu Timofte and
                Goran Glava\v{s}},
  title      = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs},
  journal    = {arXiv},
  volume     = {abs/2307.06930},
  year       = {2023},
  url        = {https://arxiv.org/abs/2307.06930},
  eprinttype = {arXiv},
  eprint     = {2307.06930},
}
```