---
task_categories:
- question-answering
language:
- en
- he
- ja
- es
- pt
tags:
- medical
size_categories:
- n<1K
---

# WorldMedQA-V: A Multilingual, Multimodal Medical Examination Dataset

## Overview

**WorldMedQA-V** is a multilingual and multimodal benchmarking dataset designed to evaluate vision-language models (VLMs) in healthcare contexts. The dataset includes medical examination questions from four countries (Brazil, Israel, Japan, and Spain) in both their original languages and English translations. Each multiple-choice question is paired with a corresponding medical image, enabling the evaluation of VLMs on multimodal data.

**Key Features:**
- **Multilingual:** Supports local languages (Portuguese, Hebrew, Japanese, and Spanish) as well as English translations.
- **Multimodal:** Each question is accompanied by a medical image, allowing for a comprehensive assessment of VLMs' performance on both textual and visual inputs.
- **Clinically Validated:** All questions and answers have been reviewed and validated by native-speaking clinicians from the respective countries.

## Dataset Details

- **Number of Questions:** 568
- **Countries Covered:** Brazil, Israel, Japan, Spain
- **Languages:** Portuguese, Hebrew, Japanese, Spanish, and English
- **Types of Data:** Multiple-choice questions with medical images
- **Evaluation:** Performance of models in both local languages and English, with and without medical images

The dataset aims to bridge the gap between real-world healthcare settings and AI evaluations, fostering more equitable, effective, and representative applications.

## Data Structure

The dataset is provided in TSV format, with the following fields:
- **ID**: Unique identifier for each question.
- **Question**: The medical multiple-choice question in the local language.
- **Options**: List of possible answers (A-D).
- **Correct Answer**: The label of the correct answer.
- **Image Path**: Path to the corresponding medical image (if applicable).
- **Language**: The language of the question (original or English translation).
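
As a rough illustration, the snippet below loads one of these TSV files with pandas and inspects a single record. The file name and the exact column headers are assumptions and may differ from the released files, so adjust them accordingly.

```python
import pandas as pd

# Load one of the dataset's TSV files. The file name is an assumption;
# use the actual file name from the downloaded dataset.
df = pd.read_csv("brazil_english.tsv", sep="\t")

# Columns assumed from the field list above: ID, Question, Options,
# Correct Answer, Image Path, Language (the released headers may differ).
print(df.columns.tolist())

# Look at one record and its answer key.
row = df.iloc[0]
print(row["Question"])
print(row["Correct Answer"])
```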

### Example from Brazil

- **Question**: Um paciente do sexo masculino, 55 anos de idade, tabagista 60 maços/ano... ("A 55-year-old male patient, a smoker with a 60 pack-year history...") [Full medical question]
- **Options**:
  - A: Aspergilose pulmonar (pulmonary aspergillosis)
  - B: Carcinoma pulmonar (lung carcinoma)
  - C: Tuberculose cavitária (cavitary tuberculosis)
  - D: Bronquiectasia com infecção (bronchiectasis with infection)
- **Correct Answer**: B
- **Image**: [Link to X-ray image]
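
Below is a minimal, hypothetical sketch of how a record like the one above could be turned into a multiple-choice prompt for a VLM. The `build_prompt` helper and the `query_vlm` placeholder are not part of the dataset code or VLMEvalKit; they only illustrate how the question text, options, and image might be combined, with and without the image.

```python
def build_prompt(row: dict) -> str:
    """Format one record as a text prompt for a vision-language model.

    `row` is assumed to map the fields above to values, with "Options"
    given as a dict of letter -> answer text; adapt to the real schema.
    """
    options = "\n".join(f"{letter}: {text}" for letter, text in row["Options"].items())
    return (
        f"{row['Question']}\n\n"
        f"{options}\n\n"
        "Answer with the letter of the single best option."
    )

# Hypothetical usage: send the prompt together with the image, or send the
# text alone to measure how much the image contributes.
# answer = query_vlm(build_prompt(row), image=row["Image Path"])  # query_vlm is a placeholder
```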

## Download and Usage

The dataset can be downloaded from the [Hugging Face datasets page](https://huggingface.co/datasets/WorldMedQA/V). All code for handling and evaluating the dataset is available in the following repositories:
- **Dataset Code**: [WorldMedQA GitHub repository](https://github.com/WorldMedQA/V)
- **Evaluation Code**: [VLMEvalKit GitHub repository](https://github.com/WorldMedQA/VLMEvalKit/tree/main)
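
If the dataset loads through the Hugging Face `datasets` library, programmatic access might look like the sketch below; the configuration and split names are assumptions, so consult the dataset page for the exact ones.

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub. The configuration and split
# names below are assumptions; check the dataset page, or call
# datasets.get_dataset_config_names("WorldMedQA/V"), for the real ones.
dataset = load_dataset("WorldMedQA/V", "brazil_english", split="test")

print(dataset)     # number of rows and column names
print(dataset[0])  # first record as a dictionary
```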

## Citation

Please cite this dataset as follows:

```bibtex
@article{WorldMedQA-V2024,
  title={WorldMedQA-V: A Multilingual, Multimodal Medical Examination Dataset},
  author={Matos, João and others},
  journal={Preprint},
  year={2024},
}
```