---
license: apache-2.0
datasets:
  - PULSE-ECG/ECGInstruct
  - PULSE-ECG/ECGBench
language:
  - en
pipeline_tag: image-text-to-text
tags:
  - medical
---

# PULSE-7B

Model for the paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images".

🌐 Project Page: https://aimedlab.github.io/PULSE/

📄 Paper: need update

🧑‍💻 Code: https://github.com/AIMedLab/PULSE

👩‍⚕️ ECGInstruct (Training): https://huggingface.co/datasets/PULSE-ECG/ECGInstruct

⚖️ ECGBench (Testing): https://huggingface.co/datasets/PULSE-ECG/ECGBench

## Introduction

We introduce PULSE-7B, a multimodal large language model (MLLM) specifically designed for ECG image interpretation. Leveraging the comprehensive ECGInstruct dataset, which contains over one million instruction-tuning samples, PULSE-7B is tailored to handle a wide range of ECG-related tasks drawn from diverse data sources. While traditional ECG interpretation methods are often constrained by their reliance on raw physiological signals and limited to specific cardiac conditions, PULSE-7B addresses these limitations by enabling robust interpretation of both printed and digital ECG images, making it especially valuable in resource-limited settings where access to raw signals may be restricted. In conjunction with the introduction of ECGBench, a benchmark that includes four key tasks spanning nine datasets, our experiments demonstrate that PULSE-7B establishes new state-of-the-art performance, surpassing general MLLMs with an average accuracy improvement of 15% to 30%. This model showcases the potential to significantly advance ECG image interpretation, providing a more versatile and accurate tool for clinical practice.
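Since PULSE-7B is published as an image-text-to-text checkpoint, a query against it can be sketched with the Hugging Face `pipeline` API. This is an illustrative sketch, not the authors' documented usage: the checkpoint's architecture tag is `llava_llama`, so loading may require the project's LLaVA-based codebase (https://github.com/AIMedLab/PULSE) rather than stock `transformers`, and the image path and question below are made-up examples.

```python
# Hypothetical usage sketch for PULSE-7B ECG image Q&A.
# Assumption: the checkpoint loads through the generic
# "image-text-to-text" pipeline; the llava_llama model type may
# instead require the project's own LLaVA fork.
from transformers import pipeline


def ask_ecg(image_path: str, question: str, max_new_tokens: int = 128):
    """Run one image-text-to-text query against PULSE-7B."""
    pipe = pipeline("image-text-to-text", model="PULSE-ECG/PULSE-7B")
    return pipe(images=image_path, text=question,
                max_new_tokens=max_new_tokens)


if __name__ == "__main__":
    # e.g. a scanned 12-lead ECG printout saved as a PNG
    print(ask_ecg("ecg_example.png",
                  "What is the rhythm shown in this ECG?"))
```

Because the model accepts ECG *images* rather than raw signals, the same call works for photographed or scanned printouts, which is the resource-limited use case described above.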

**Overall performance of PULSE-7B on ECGBench**


## Model Performance

### In-domain


### Out-of-domain


## Case Study

*(Three case-study ECG image examples.)*

## Citation

If you find this work helpful, please cite our paper: