MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning
Abstract
We present MM1.5, a new family of multimodal large language models (MLLMs) designed to enhance capabilities in text-rich image understanding, visual referring and grounding, and multi-image reasoning. Building upon the MM1 architecture, MM1.5 adopts a data-centric approach to model training, systematically exploring the impact of diverse data mixtures across the entire model training lifecycle. This includes high-quality OCR data and synthetic captions for continual pre-training, as well as an optimized visual instruction-tuning data mixture for supervised fine-tuning. Our models range from 1B to 30B parameters, encompassing both dense and mixture-of-experts (MoE) variants, and demonstrate that careful data curation and training strategies can yield strong performance even at small scales (1B and 3B). Additionally, we introduce two specialized variants: MM1.5-Video, designed for video understanding, and MM1.5-UI, tailored for mobile UI understanding. Through extensive empirical studies and ablations, we provide detailed insights into the training processes and decisions that inform our final designs, offering valuable guidance for future research in MLLM development.
Community
TL;DR: MM1.5 is a significant upgrade over MM1. With a single set of weights, MM1.5 excels at (1) reading your charts, tables, and other text-rich images, (2) understanding visual prompts such as points and boxes and providing grounded outputs, and (3) multi-image reasoning. Please find the detailed recipes in the paper.
Hi @hoatiz, congrats on your work!
It would be great to link the models to the paper page by including https://huggingface.co/papers/2409.20566 in the respective model cards.
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- MIO: A Foundation Model on Multimodal Tokens (2024)
- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models (2024)
- Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models (2024)
- From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding (2024)
- IAA: Inner-Adaptor Architecture Empowers Frozen Large Language Model with Multimodal Capabilities (2024)