---
license: apache-2.0
tags:
  - medical
  - 3D medical segmentation
size_categories:
  - 1K<n<10K
---

# Dataset Description

**Large-scale General 3D Medical Image Segmentation Dataset (M3D-Seg)**

## Dataset Introduction

3D medical segmentation is one of the main challenges in medical image analysis. Due to privacy and cost constraints, large-scale publicly available 3D medical images and annotations remain scarce. To address this, we collected 25 publicly available 3D CT segmentation datasets: CHAOS, HaN-Seg, AMOS22, AbdomenCT-1k, KiTS23, KiPA22, KiTS19, BTCV, Pancreas-CT, 3D-IRCADB, FLARE22, TotalSegmentator, CT-ORG, WORD, VerSe19, VerSe20, SLIVER07, QUBIQ, MSD-Colon, MSD-HepaticVessel, MSD-Liver, MSD-Lung, MSD-Pancreas, MSD-Spleen, and LUNA16.

These datasets are uniformly encoded as 0000 through 0024 and together contain 5,772 3D images and 149,196 3D mask annotations, where each mask is paired with a semantic label represented as text. Within each dataset folder there are two sub-folders, `ct` and `gt`, storing images and annotations respectively, plus a json file that defines the data split. `dataset_info.txt` describes the textual representation of each dataset's labels. Because the format is uniform, additional public and private datasets can be converted into it, growing M3D-Seg into a large-scale universal 3D medical segmentation dataset.

## Supported Tasks

Because the data can be represented as image-mask-text triplets, and each mask can be converted to box coordinates via its bounding box, the dataset supports tasks such as:

- **3D segmentation:** semantic segmentation, textual hint segmentation, inference segmentation, etc.
- **3D localization:** visual grounding, referring expression comprehension, referring expression generation.
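For illustration, the mask-to-box conversion mentioned above can be sketched in a few lines of NumPy. This is a generic sketch, not the dataset's own code; the dataset ships masks, and deriving boxes from them is left to the user:

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Convert a binary 3D mask of shape (D, H, W) to a bounding box.

    Returns (zmin, ymin, xmin, zmax, ymax, xmax) with exclusive upper
    bounds, or None if the mask is empty. Generic sketch, not part of
    the M3D-Seg codebase.
    """
    coords = np.argwhere(mask > 0)
    if coords.size == 0:
        return None
    zmin, ymin, xmin = coords.min(axis=0)
    zmax, ymax, xmax = coords.max(axis=0) + 1  # exclusive upper bounds
    return (int(zmin), int(ymin), int(xmin), int(zmax), int(ymax), int(xmax))

# Example: a small cube inside an otherwise empty volume
mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:5, 1:4, 3:6] = 1
print(mask_to_bbox(mask))  # (2, 1, 3, 5, 4, 6)
```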

## Dataset Format and Structure

### Data Format

    M3D_Seg/
        0000/
            ct/
                case_00000.npy
                ......
            gt/
                case_00000.(3, 512, 512, 611).npz
                ......
            0000.json
        0001/
        ......

## Dataset Download

### Clone with HTTP

git clone 

### Manual Download

Download all of the dataset's files manually; batch download tools can help. Note: because dataset 0024 is large, its compressed archive is split into three parts, 00, 01, and 02; merge and decompress them after downloading. Since the foreground of a mask is usually sparse, masks are stored as sparse matrices in npz files to save space, with the mask shape embedded in the file name; see `data_load_demo.py` for how to read them.
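The authoritative loader is `data_load_demo.py`. As a rough illustration, assuming each mask is stored as a flattened `scipy.sparse` matrix whose original 4D shape can be parsed from the file name (e.g. `case_00000.(3, 512, 512, 611).npz`), loading could look like this:

```python
import re
import numpy as np
from scipy import sparse

def load_sparse_mask(path: str) -> np.ndarray:
    """Load a mask stored as a 2D sparse matrix and restore its 4D shape
    from the file name. Hypothetical sketch; the actual on-disk layout is
    defined by the dataset's data_load_demo.py.
    """
    match = re.search(r"\(([\d,\s]+)\)", path)
    shape = tuple(int(s) for s in match.group(1).split(","))
    flat = sparse.load_npz(path)           # 2D sparse matrix on disk
    return flat.toarray().reshape(shape)   # back to a dense 4D array
```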

## Dataset Loading Method

1. Preprocess raw data (skip if you downloaded this dataset directly)

If you download this dataset directly, `data_process.py` is not required; skip to step 2. Raw data obtained from the original sources must be processed with `data_process.py` to unify it into the M3D-Seg format. Note that, because of this preprocessing, the data provided here differs from the original nii.gz files; see `data_process.py` for the processing details.

2. Build Dataset

We provide sample Dataset code for three tasks: semantic segmentation, hint segmentation, and inference segmentation.
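As a framework-agnostic illustration of what such a Dataset might look like, the sketch below reads the per-dataset split json and loads an image. The record schema and key names (`"image"`, `"label"`, `"train"`) are assumptions; the authoritative versions are the dataset's own samples:

```python
import json
import os
import numpy as np

class M3DSegDataset:
    """Minimal semantic-segmentation Dataset sketch (no framework deps).

    Assumes the per-dataset json (e.g. 0000/0000.json) maps split names
    to lists of {"image": ..., "label": ...} records; the real schema is
    defined by the dataset's provided sample code.
    """
    def __init__(self, root: str, dataset_id: str = "0000", split: str = "train"):
        with open(os.path.join(root, dataset_id, f"{dataset_id}.json")) as f:
            self.records = json.load(f)[split]
        self.root = root

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = np.load(os.path.join(self.root, rec["image"]))
        # gt masks are sparse npz files; see data_load_demo.py for loading
        return image, rec["label"]
```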


## Data Splitting

Each dataset is split into train and validation/test sets via its json file, for convenient model training and evaluation.
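Reading such a split file is straightforward; a minimal sketch (the split key names are assumptions, so check the actual json):

```python
import json

def load_splits(split_json_path: str) -> dict:
    """Read a per-dataset split json and return the case lists per split.

    The key names 'train', 'validation', and 'test' are assumptions;
    missing splits come back as empty lists.
    """
    with open(split_json_path) as f:
        splits = json.load(f)
    return {k: splits.get(k, []) for k in ("train", "validation", "test")}
```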

## Dataset Sources

### Dataset Copyright Information

All constituent datasets are publicly available. For detailed copyright information, please refer to the corresponding dataset links.

## Citation

If you use this dataset, please cite the following works:

```bibtex
@misc{bai2024m3d,
      title={M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models}, 
      author={Fan Bai and Yuxin Du and Tiejun Huang and Max Q. -H. Meng and Bo Zhao},
      year={2024},
      eprint={2404.00578},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{du2024segvol,
      title={SegVol: Universal and Interactive Volumetric Medical Image Segmentation}, 
      author={Yuxin Du and Fan Bai and Tiejun Huang and Bo Zhao},
      year={2024},
      eprint={2311.13385},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```