---
license: apache-2.0
datasets:
  - MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
language:
  - en
base_model:
  - Qwen/Qwen2.5-7B-Instruct
tags:
  - vision
  - multimodal
  - reasoning
  - math
  - STEM
  - VQA
  - Video
---

# MAmmoTH-VL-8B

🏠 Homepage | 🤖 MAmmoTH-VL-8B | 💻 Code | 📄 Arxiv | 📕 PDF | 🖥️ Demo

## Abstract

Open-source multimodal large language models (MLLMs) have shown significant potential in a broad range of multimodal tasks. However, their reasoning capabilities remain constrained by existing instruction-tuning datasets, which were predominantly repurposed from academic datasets such as VQA, AI2D, and ChartQA. These datasets target simplistic tasks and provide only phrase-level answers without any intermediate rationales. To address these challenges, we introduce a scalable and cost-effective method to construct a large-scale multimodal instruction-tuning dataset with rich intermediate rationales designed to elicit CoT reasoning. Using only open models, we create a dataset containing 12M instruction-response pairs to cover diverse, reasoning-intensive tasks with detailed and faithful rationales. Experiments demonstrate that training MLLMs on this dataset significantly improves reasoning capabilities, achieving state-of-the-art performance on benchmarks such as MathVerse (+8.1%), MMMU-Pro (+7%), and MuirBench (+13.3%). Additionally, the model demonstrates notable improvements of up to 4% on non-reasoning-based benchmarks. Ablation studies further highlight the importance of key components, such as rewriting and self-filtering, in the dataset construction process.
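
The full data-construction pipeline is described in the paper; as a rough, text-only illustration of the rewriting and self-filtering steps mentioned above, the hypothetical sketch below uses an open instruct model (Qwen/Qwen2.5-7B-Instruct, chosen only because it is the base LLM listed in the metadata) to expand a phrase-level answer into a step-by-step rationale and then to judge whether the rewritten sample should be kept. The prompts, helper names, and decoding settings are illustrative assumptions, not the authors' released implementation, which operates on multimodal data.

```python
# Illustrative, text-only sketch of the rewrite + self-filter idea.
# Prompts, model choice, and settings are assumptions, not the paper's pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

def rewrite(question: str, short_answer: str) -> str:
    """Expand a phrase-level answer into a step-by-step rationale (CoT)."""
    prompt = (
        "Rewrite the answer below as a detailed step-by-step rationale that ends "
        "with the final answer.\n"
        f"Question: {question}\nShort answer: {short_answer}\nRationale:"
    )
    out = generator(prompt, max_new_tokens=512, return_full_text=False)
    return out[0]["generated_text"].strip()

def self_filter(question: str, short_answer: str, rationale: str) -> bool:
    """Keep the sample only if the model judges the rationale to be faithful."""
    prompt = (
        "Does the rationale correctly and faithfully support the short answer? "
        "Answer yes or no.\n"
        f"Question: {question}\nShort answer: {short_answer}\n"
        f"Rationale: {rationale}\nAnswer:"
    )
    out = generator(prompt, max_new_tokens=4, return_full_text=False)
    return out[0]["generated_text"].strip().lower().startswith("yes")

question, answer = "What is 17 + 26?", "43"
rationale = rewrite(question, answer)
if self_filter(question, answer, rationale):
    print({"instruction": question, "response": rationale})
```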

## Performance

We highlight three groups of models with different colors: closed-source models, models with open weights but closed training details, and fully open-source models. Results are taken from official sources where available; otherwise we obtained them by running the lmms-eval package.
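
For the benchmarks we evaluated ourselves, the runs used lmms-eval. The sketch below shows how such a run could be launched from Python; the model wrapper name (`llava_onevision`), repository id, task names, and CLI flags are assumptions that may vary across lmms-eval versions, so consult `python -m lmms_eval --help` before relying on them.

```python
# Hedged sketch: launching an lmms-eval run from Python via its CLI.
# Wrapper name, repo id, task names, and flags below are assumptions and may
# differ across lmms-eval versions.
import subprocess

cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava_onevision",                           # assumed wrapper for this checkpoint
    "--model_args", "pretrained=MAmmoTH-VL/MAmmoTH-VL-8B",  # assumed Hugging Face repo id
    "--tasks", "mathverse,mmmu_pro,muirbench",              # assumed task names
    "--batch_size", "1",
    "--log_samples",
    "--output_path", "./logs/",
]
subprocess.run(cmd, check=True)
```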

### Multi-Discipline Knowledge and Mathematical Reasoning

*(results figure)*

### Chart & Doc Understanding and Multimodal Interactions & Preferences

*(results figure)*

### Multi-Image and Video

*(results figure)*

## Citing the Model

```bibtex
@article{guo2024mammothvlelicitingmultimodalreasoning,
      title={MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale},
      author={Jarvis Guo and Tuney Zheng and Yuelin Bai and Bo Li and Yubo Wang and King Zhu and Yizhi Li and Graham Neubig and Wenhu Chen and Xiang Yue},
      year={2024},
      eprint={2412.05237},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.05237},
}
```