Xiaotian Han

xiaotianhan

AI & ML interests

Multimodal LLM

Posts 3

🚀 Excited to announce the release of InfiMM-WebMath-40B, the largest open-source multimodal pretraining dataset designed to advance mathematical reasoning in AI! 🧮✨

With 40 billion tokens, this dataset aims to enhance the mathematical reasoning capabilities of multimodal large language models.

If you're interested in MLLMs, AI, and math reasoning, check out our work and dataset:

🤗 HF: InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning (2409.12568)
📂 Dataset: Infi-MM/InfiMM-WebMath-40B
🎉 🎉 🎉 Happy to share our recent work. We noticed that image resolution plays an important role, both in improving multimodal large language model (MLLM) performance and in Sora-style any-resolution encoder-decoders. We hope this work helps lift the 224x224 resolution restriction in ViT.

ViTAR: Vision Transformer with Any Resolution (2403.18361)

models

None public yet

datasets

None public yet