---
license: apache-2.0
tags:
- medical
- 3D medical image caption
- image-text pair
- medical report
size_categories:
- 100K<n<1M
---

## Dataset Description
Large-scale 3D medical multi-modal dataset - Image-Text Pair Dataset (M3D-Cap)

### Dataset Introduction
Medical institutions, such as hospitals, store vast amounts of multi-modal data, including medical images and diagnostic reports. However, releasing multi-modal datasets that involve patient data is difficult due to sensitivity and privacy concerns. To circumvent these limitations, we collected medical images and reports from publicly accessible professional medical websites. Specifically, each patient case in our dataset includes multiple images along with their corresponding reports, which experts from the Radiopaedia platform meticulously review. Given the crucial role of 3D CT in medical image analysis, particularly in the diagnosis, localization, and measurement of systemic lesions, we focus on 3D CT data. We constructed the largest-scale 3D medical image-text paired dataset to date, named M3D-Cap, comprising 120K image-text pairs. The dataset is divided into two folders, named ct_case and ct_quizze; ct_quizze is intended for medical exams and is of higher quality. Each case folder contains several image folders and one text file. The image folders hold multiple 2D slices of a 3D image, while the text files provide English reports describing the corresponding 3D images, including the types of abnormalities and lesions. M3D_Cap.json provides the split scheme.

### Supported Tasks
M3D-Cap supports various image-text multi-modal tasks in 3D medical scenarios, including image-text retrieval, report generation, and image generation.

## Dataset Format and Structure

### Data Format
<pre>
M3D_Cap/
    ct_case/
        000006/
            Axial_non_contrast/
                0.jpeg
                1.jpeg
                ......
            text.txt
        ......
    ct_quizze/
        000007/
            Axial_non_contrast/
                0.png
                1.png
                ......
            text.txt
        ......
</pre>

### Dataset Download
#### Clone with HTTP
```bash
git clone
```
#### Manual Download
Download all files of the dataset manually; batch download tools can help.
Note: due to the large size of the overall dataset, it is divided into subfiles of 20 GB each.
After downloading all the files, extract them together to obtain the complete data.

### Dataset Loading Method
#### 1. Preprocessing
Combine the slices under each folder into a 3D image, name it after the image folder (retaining the plane and phase information), and save it as an npy file. Filter the text reports in the dataset to obtain high-quality descriptions.
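As a minimal sketch of this step — assuming slices are named `0.jpeg`, `1.jpeg`, … as in the tree above, and that Pillow is available; `slices_to_volume` is a hypothetical helper, not part of the dataset:

```python
import os

import numpy as np
from PIL import Image


def slices_to_volume(slice_dir, out_path):
    # Sort slice files numerically (0.jpeg, 1.jpeg, ...) so depth order is preserved.
    files = sorted(
        (f for f in os.listdir(slice_dir) if f.endswith((".jpeg", ".png"))),
        key=lambda f: int(os.path.splitext(f)[0]),
    )
    # Load each 2D slice as grayscale and stack along the depth axis.
    slices = [
        np.asarray(Image.open(os.path.join(slice_dir, f)).convert("L"))
        for f in files
    ]
    volume = np.stack(slices, axis=0).astype(np.float32)
    # Normalize to [0, 1] and add a channel axis -> C, D, H, W,
    # which is the layout the loader below expects.
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    volume = volume[None, ...]
    np.save(out_path, volume)
    return volume.shape
```

The naming convention for the saved npy files (plane and phase) is up to you, as long as the paths recorded in M3D_Cap.json match.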
#### 2. Build Dataset
We provide sample code for building the dataset:

```python
import os
import json
import random

import numpy as np
import torch
from torch.utils.data import Dataset

# monai.transforms supplies the 3D augmentations used below.
import monai.transforms as mtf
from monai.data import set_track_meta


class CapDataset(Dataset):
    def __init__(self, args, tokenizer, mode="train"):
        self.args = args
        self.data_root = args.data_root
        self.tokenizer = tokenizer
        self.mode = mode

        self.image_tokens = "<im_patch>" * args.proj_out_num

        with open(args.cap_data_path, 'r') as file:
            self.json_file = json.load(file)
        self.data_list = self.json_file[mode]

        self.caption_prompts = [
            "Can you provide a caption consists of findings for this medical image?",
            "Describe the findings of the medical image you see.",
            "Please caption this medical scan with findings.",
            "What is the findings of this image?",
            "Describe this medical scan with findings.",
            "Please write a caption consists of findings for this image.",
            "Can you summarize with findings the images presented?",
            "Please caption this scan with findings.",
            "Please provide a caption consists of findings for this medical image.",
            "Can you provide a summary consists of findings of this radiograph?",
            "What are the findings presented in this medical scan?",
            "Please write a caption consists of findings for this scan.",
            "Can you provide a description consists of findings of this medical scan?",
            "Please caption this medical scan with findings.",
            "Can you provide a caption consists of findings for this medical scan?"
        ]

        train_transform = mtf.Compose(
            [
                mtf.RandRotate90(prob=0.5, spatial_axes=(1, 2)),
                mtf.RandFlip(prob=0.10, spatial_axis=0),
                mtf.RandFlip(prob=0.10, spatial_axis=1),
                mtf.RandFlip(prob=0.10, spatial_axis=2),
                mtf.RandScaleIntensity(factors=0.1, prob=0.5),
                mtf.RandShiftIntensity(offsets=0.1, prob=0.5),
                mtf.ToTensor(dtype=torch.float),
            ]
        )

        val_transform = mtf.Compose(
            [
                mtf.ToTensor(dtype=torch.float),
            ]
        )
        set_track_meta(False)

        if mode == 'train':
            self.transform = train_transform
        elif mode == 'validation':
            self.transform = val_transform
        elif mode == 'test':
            self.transform = val_transform

    def __len__(self):
        return len(self.data_list)

    def __getitem__(self, idx):
        max_attempts = 100
        for _ in range(max_attempts):
            try:
                data = self.data_list[idx]
                image_path = data["image"]
                image_abs_path = os.path.join(self.data_root, image_path)
                image = np.load(image_abs_path)  # normalized to [0, 1], shape C, D, H, W
                image = self.transform(image)

                text_path = data["text"]
                text_abs_path = os.path.join(self.data_root, text_path)
                with open(text_abs_path, 'r') as text_file:
                    raw_text = text_file.read()
                answer = raw_text

                prompt_question = random.choice(self.caption_prompts)

                question = self.image_tokens + prompt_question

                text_tensor = self.tokenizer(
                    question + ' ' + answer, max_length=self.args.max_length,
                    truncation=True, padding="max_length", return_tensors="pt"
                )

                input_id = text_tensor["input_ids"][0]
                attention_mask = text_tensor["attention_mask"][0]

                # Append an EOS token right after the last valid token.
                valid_len = torch.sum(attention_mask)
                if valid_len < len(input_id):
                    input_id[valid_len] = self.tokenizer.eos_token_id

                question_tensor = self.tokenizer(
                    question, max_length=self.args.max_length,
                    truncation=True, padding="max_length", return_tensors="pt"
                )
                question_len = torch.sum(question_tensor["attention_mask"][0])

                # Mask out padding and question tokens so the loss covers only the answer.
                label = input_id.clone()
                label[label == self.tokenizer.pad_token_id] = -100
                label[:question_len] = -100

                ret = {
                    'image': image,
                    'input_id': input_id,
                    'label': label,
                    'attention_mask': attention_mask,
                    'question': question,
                    'answer': answer,
                    'question_type': "Caption",
                }
                return ret

            except Exception as e:
                print(f"Error in __getitem__ at index {idx}: {e}")
                idx = random.randint(0, len(self.data_list) - 1)
```
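The label construction in `__getitem__` — masking padding and question tokens with -100 so the loss covers only the answer — can be checked in isolation. `build_caption_labels` below is a hypothetical helper mirroring that step:

```python
import torch


def build_caption_labels(input_id, question_len, pad_token_id):
    # Mirrors the masking in __getitem__: only answer tokens keep their ids.
    label = input_id.clone()
    label[label == pad_token_id] = -100  # ignore padding positions
    label[:question_len] = -100          # ignore the question/prompt prefix
    return label


# Example: tokens 5, 6 form the question, 7, 8 the answer, 0 is the pad token.
ids = torch.tensor([5, 6, 7, 8, 0, 0])
labels = build_caption_labels(ids, question_len=2, pad_token_id=0)
# labels -> tensor([-100, -100, 7, 8, -100, -100])
```

Positions marked -100 are ignored by PyTorch's cross-entropy loss, so gradients flow only through the report tokens.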

### Data Splitting
The entire dataset is split into `train`, `validation`, `test100`, `test500`, `test1k`, and `test` subsets via a JSON file.
Considering testing costs, we provide test sets of different sizes, from 100 to 2k samples; the `test` split contains 2k samples.

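Reading a split needs only the standard `json` module. The helper below (`load_split` is a hypothetical name) assumes the top-level keys are the split names and each entry carries the relative `image`/`text` paths consumed by `CapDataset`:

```python
import json


def load_split(split_json_path, split="train"):
    # Top-level keys name the splits (train, validation, test100, ...);
    # each entry holds relative paths to one image volume and its report.
    with open(split_json_path, "r") as f:
        splits = json.load(f)
    return splits[split]
```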
## Dataset Copyright Information

All images and reports involved in this dataset are publicly available data.
For detailed copyright information, please refer to the corresponding links.

## Citation
If you use this dataset, please cite the following works:

```BibTeX
@misc{bai2024m3d,
      title={M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models},
      author={Fan Bai and Yuxin Du and Tiejun Huang and Max Q. -H. Meng and Bo Zhao},
      year={2024},
      eprint={2404.00578},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{du2024segvol,
      title={SegVol: Universal and Interactive Volumetric Medical Image Segmentation},
      author={Yuxin Du and Fan Bai and Tiejun Huang and Bo Zhao},
      year={2024},
      eprint={2311.13385},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```