---
library_name: keras-hub
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
  To access PaliGemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

PaliGemma 2 model card

Model page: PaliGemma

JAX/FLAX PaliGemma 2 28B weights for use with big_vision codebase, pre-trained with 896*896 input images and 512 token input/output text sequences.

The model is available in the bfloat16 format for fine-tuning.

Downloading Model Weights

First, authenticate using the Hugging Face CLI:

huggingface-cli login

Use the following command to download the model weights:

huggingface-cli download --local-dir models google/paligemma2-28b-pt-896-jax

This will download the weights in multiple split files to the models directory.
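If you prefer to script the download, the same shards can be fetched with the huggingface_hub Python library. The sketch below is an optional alternative to the CLI command above, not part of the official instructions, and it assumes you have already authenticated (the repository is gated).

```python
# Optional alternative to the CLI download above (illustrative sketch).
# Assumes prior authentication, since access to the repository is gated.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="google/paligemma2-28b-pt-896-jax",
    local_dir="models",
)
```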

Combine the downloaded .npz parts into a single file using the cat command:

cat paligemma2-28b-pt-896.b16.npz.part* > paligemma2-28b-pt-896.b16.npz

The resulting paligemma2-28b-pt-896.b16.npz file is now ready to use.
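As a quick sanity check before fine-tuning, you can list the arrays stored in the combined checkpoint. This is a minimal sketch rather than part of the official workflow; it assumes the big_vision-style flat .npz layout, and reading the bfloat16 arrays with NumPy may additionally require the ml_dtypes package.

```python
# Minimal, illustrative sanity check on the combined checkpoint.
# Loading bfloat16 arrays with NumPy may require the `ml_dtypes` package.
import numpy as np

ckpt_path = "models/paligemma2-28b-pt-896.b16.npz"

with np.load(ckpt_path) as ckpt:
    print(f"{len(ckpt.files)} arrays in checkpoint")
    for name in ckpt.files[:10]:  # inspect only the first few entries
        arr = ckpt[name]
        print(f"{name}: shape={arr.shape}, dtype={arr.dtype}")
```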

Resources and technical documentation:

Terms of Use: Terms

Authors: Google

Model information

Model summary

PaliGemma 2 is an update of the PaliGemma vision-language model (VLM) which incorporates the capabilities of the Gemma 2 models. The PaliGemma family of models is inspired by PaLI-3 and based on open components such as the SigLIP vision model and Gemma 2 language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video captioning, visual question answering, text reading, object detection, and object segmentation.

Model architecture

PaliGemma 2 is the composition of a Transformer decoder and a Vision Transformer image encoder. The text decoder is initialized from Gemma 2 in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from SigLIP-So400m/14. Similar to the original PaliGemma model, PaliGemma 2 is trained following the PaLI-3 recipes.
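For intuition only, the sketch below shows the prefix composition this architecture implies: features from the vision encoder are linearly projected to the decoder's embedding width and prepended to the embedded text tokens before decoding. All dimensions and inputs are toy values chosen for readability; this is not the big_vision implementation and not the actual SigLIP or Gemma 2 sizes.

```python
# Toy illustration of the image-prefix + text composition used by
# PaliGemma-style models. Sizes are made up for readability.
import jax
import jax.numpy as jnp

def compose_inputs(image_features, text_embeddings, projection):
    """Project image features to the decoder width and prepend them to the text tokens."""
    image_prefix = image_features @ projection               # [num_patches, d_model]
    return jnp.concatenate([image_prefix, text_embeddings], axis=0)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
image_features = jax.random.normal(k1, (16, 32))   # 16 "patches" from the vision encoder
text_embeddings = jax.random.normal(k2, (8, 64))   # 8 embedded text tokens, d_model = 64
projection = jax.random.normal(k3, (32, 64))       # linear projection, 32 -> 64

sequence = compose_inputs(image_features, text_embeddings, projection)
print(sequence.shape)  # (24, 64): image prefix followed by the text tokens
```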

Inputs and outputs

  • Input: Image and text string, such as a prompt to caption the image, or a question.
  • Output: Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates (see the illustrative parsing sketch below), or segmentation codewords.
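For illustration, detection transfers of the original PaliGemma emit each box as four <locDDDD> location tokens (y_min, x_min, y_max, x_max on a 0-1023 grid) followed by the label. The snippet below is a hedged sketch of parsing such an output string; the exact token conventions for a given transfer are defined by the big_vision tooling, so treat the format here as an assumption to verify.

```python
# Illustrative only: parse a PaliGemma-style detection output string into
# normalized box coordinates. Verify the token format against big_vision.
import re

def parse_boxes(text, grid=1024):
    pattern = r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([^;]*)"
    boxes = []
    for y0, x0, y1, x1, label in re.findall(pattern, text):
        ymin, xmin, ymax, xmax = (int(v) / (grid - 1) for v in (y0, x0, y1, x1))
        boxes.append((label.strip(), (ymin, xmin, ymax, xmax)))
    return boxes

print(parse_boxes("<loc0256><loc0128><loc0768><loc0896> cat"))
# [('cat', (0.250..., 0.125..., 0.750..., 0.875...))]
```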

Citation

@article{
    title={PaliGemma 2: A Family of Versatile VLMs for Transfer},
    author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai},
    year={2024},
    journal={arXiv preprint arXiv:2412.03555}
}

Model data

Pre-train datasets

PaliGemma 2 is pre-trained on a mixture of large-scale image-text datasets, including WebLI (to which the data responsibility filtering described below is applied).

PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the Gemma 2 model card.

Data responsibility filtering

The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data:

  • Pornographic image filtering: This filter removes images deemed to be of pornographic nature.
  • Text safety filtering: We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, or vulgarities, or that is otherwise offensive.
  • Text toxicity filtering: We further use the Perspective API to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic.
  • Text personal information filtering: We filtered certain personal information and other sensitive data using the Cloud Data Loss Prevention (DLP) API to protect the privacy of individuals. Identifiers such as social security numbers and other sensitive information types were removed.
  • Additional methods: Filtering based on content quality and safety in line with our policies and practices.

Implementation information

Hardware

PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).

Software

Training was completed using JAX, Flax, TFDS and big_vision.

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.

TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma 2 fine-tune code and inference code are released in the big_vision GitHub repository.
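As a standalone illustration of the TFDS data path (not taken from the big_vision input pipeline), the snippet below loads a public captioning dataset and prints its feature keys. The dataset name is only an example and is not necessarily part of the PaliGemma 2 training or transfer mixture.

```python
# Standalone TFDS illustration; the dataset name is an example only and the
# big_vision training code uses its own input pipeline on top of TFDS.
import tensorflow_datasets as tfds

ds = tfds.load("coco_captions", split="train")
for example in tfds.as_numpy(ds.take(1)):
    print(sorted(example.keys()))  # feature keys, e.g. 'image' and 'captions'
```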

Evaluation information

Benchmark results

In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results at different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pre-training data mixture, and their images are explicitly removed from the web-scale pre-training data.

PaliGemma 2 results by model resolution and size

| Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B |
|---|---|---|---|---|---|---|
| AI2D | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 |
| AOKVQA-DA (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 |
| AOKVQA-MC (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 |
| ActivityNet-CAP | 34.2 | 35.9 | - | - | - | - |
| ActivityNet-QA | 51.3 | 53.2 | - | - | - | - |
| COCO-35L (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 |
| COCO-35L (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 |
| COCOcap | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 |
| ChartQA (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 |
| ChartQA (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 |
| CountBenchQA | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 |
| DocVQA (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 |
| GQA | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 |
| InfoVQA (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 |
| MARVL (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 |
| MSRVTT-CAP | 68.5 | 72.1 | - | - | - | - |
| MSRVTT-QA | 50.5 | 51.9 | - | - | - | - |
| MSVD-QA | 61.1 | 62.5 | - | - | - | - |
| NLVR2 | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 |
| NoCaps | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 |
| OCR-VQA | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 |
| OKVQA | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 |
| RSVQA-hr (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 |
| RSVQA-hr (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 |
| RSVQA-lr | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 |
| RefCOCO (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 |
| RefCOCO (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
| RefCOCO (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
| RefCOCO+ (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
| RefCOCO+ (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
| RefCOCO+ (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
| RefCOCOg (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
| RefCOCOg (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
| ST-VQA (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
| SciCap | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
| ScienceQA | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
| Screen2Words | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
| TallyQA (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
| TallyQA (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
| TextCaps | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
| TextVQA (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
| VATEX | 80.8 | 82.7 | - | - | - | - |
| VQAv2 (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 |
| VizWizVQA (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 |
| WidgetCap | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 |
| XM3600 (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 |
| XM3600 (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 |
| xGQA (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 |

Additional Benchmarks

ICDAR 2015 Incidental

| Model | Precision | Recall | F1 |
|---|---|---|---|
| PaliGemma 2 3B | 81.88 | 70.73 | 75.9 |

Total-Text

| Model | Precision | Recall | F1 |
|---|---|---|---|
| PaliGemma 2 3B | 73.8 | 74.54 | 74.17 |

FinTabNet

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|---|---|---|---|---|
| PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 |

PubTabNet

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|---|---|---|---|---|
| PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 |

GrandStaff

| Model | CER | LER | SER |
|---|---|---|---|
| PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |

PubChem

  • PaliGemma 2 3B, Full Match: 94.8

DOCCI

| Model | avg#char | avg#sent | NES % |
|---|---|---|---|
| PaliGemma 2 3B | 529 | 7.74 | 28.42 |
| PaliGemma 2 10B | 521 | 7.45 | 20.27 |

  • avg#char: Average number of characters
  • avg#sent: Average number of sentences
  • NES: Non-entailment sentences

MIMIC-CXR

| Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 |
|---|---|---|---|---|
| PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% |
| PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% |

Visual Spatial Reasoning

| Model | VSR zeroshot split (test) | VSR random split (test) |
|---|---|---|
| PaliGemma 2 3B | 0.75 | 0.82 |
| PaliGemma 2 10B | 0.80 | 0.87 |

Ethics and safety

Evaluation approach

Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including:

  • Human evaluation on prompts covering child safety, content safety and representational harms. See the Gemma model card for more details on the evaluation approach, here applied to image captioning and visual question answering setups.
  • Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset (Karkkainen et al., 2021).

Evaluation results

  • The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting internal policies for categories such as child safety, content safety and representational harms.
  • On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes.

Maximum

| Metric | Perceived gender (3B) | Perceived gender (10B) | Perceived gender (28B) | Ethnicity (3B) | Ethnicity (10B) | Ethnicity (28B) | Age group (3B) | Age group (10B) | Age group (28B) |
|---|---|---|---|---|---|---|---|---|---|
| Toxicity | 0.14% | 0.15% | 0.19% | 0.29% | 0.39% | 0.39% | 0.26% | 0.18% | 0.32% |
| Identity Attack | 0.04% | 0.02% | 0.02% | 0.13% | 0.06% | 0.06% | 0.06% | 0.03% | 0.06% |
| Insult | 0.17% | 0.25% | 0.17% | 0.37% | 0.52% | 0.52% | 0.27% | 0.39% | 0.24% |
| Threat | 0.55% | 0.43% | 0.57% | 0.83% | 0.48% | 0.48% | 0.64% | 0.43% | 0.64% |
| Profanity | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |

Median

| Metric | Perceived gender (3B) | Perceived gender (10B) | Perceived gender (28B) | Ethnicity (3B) | Ethnicity (10B) | Ethnicity (28B) | Age group (3B) | Age group (10B) | Age group (28B) |
|---|---|---|---|---|---|---|---|---|---|
| Toxicity | 0.13% | 0.10% | 0.18% | 0.07% | 0.07% | 0.14% | 0.12% | 0.08% | 0.12% |
| Identity Attack | 0.02% | 0.01% | 0.02% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
| Insult | 0.15% | 0.23% | 0.14% | 0.14% | 0.17% | 0.13% | 0.09% | 0.18% | 0.16% |
| Threat | 0.35% | 0.27% | 0.41% | 0.28% | 0.19% | 0.42% | 0.27% | 0.31% | 0.40% |
| Profanity | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |

Usage and limitations

Intended usage

Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the Gemma Prohibited Use Policy.

Fine-tune on a specific vision-language task:

  • The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video captioning, visual question answering, text reading, object detection and object segmentation.
  • The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual question answering from people who are blind, science question answering, and describing UI element functionalities.
  • The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks.

Vision-language research:

  • The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field.

Ethical considerations and risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

  • Bias and Fairness
    • VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny: the input data pre-processing is described and posterior evaluations are reported in this card.
  • Misinformation and Misuse
    • VLMs can be misused to generate text that is false, misleading, or harmful.
    • Guidelines are provided for responsible use with the model, see the Responsible Generative AI Toolkit.
  • Transparency and Accountability
    • This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
    • A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

  • Perpetuation of biases: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
  • Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
  • Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided: see the Responsible Generative AI Toolkit. Prohibited uses of Gemma models are outlined in the Gemma Prohibited Use Policy.
  • Privacy violations: Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

Limitations

  • Most limitations inherited from the underlying Gemma 2 models still apply:
    • VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
    • Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
    • VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
    • VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations.
  • PaliGemma 2 was designed first and foremost to serve as a general pre-trained model for fine-tuning to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for general purpose use.
  • PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round of image and text input.