MedRegA

Model for the paper "Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks".

๐ŸŒ Project Page: https://medrega.github.io/

๐Ÿ“„ Paper: https://arxiv.org/abs/2410.18387

๐Ÿ’ป Code: https://github.com/xmed-lab/MedRegA

Introduction

We propose MedRegA, a Region-Aware medical MLLM and the first bilingual generalist medical AI system to handle both image-level and region-level medical vision-language tasks across a broad range of modalities.

MedRegA not only enables three region-centric tasks but also achieves the best performance on visual question answering, report generation, and medical image classification across 8 modalities, showcasing significant versatility.

![MedRegA overview](medrega.png)

Model size: 40.2B params (Safetensors, BF16)
