Column schema — bibtex_url: null; proceedings: string (58 characters); bibtext: string (511 to 974 characters); abstract: string (92 to 2k characters); title: string (30 to 207 characters); authors: sequence (1 to 22 items); id: string (1 class); arxiv_id: string (0 to 10 characters); GitHub: sequence (1 item); paper_page: string (14 classes); n_linked_authors: int64 (-1 to 1); upvotes: int64 (-1 to 1); num_comments: int64 (-1 to 0); n_authors: int64 (-1 to 10); Models: sequence (0 to 4 items); Datasets: sequence (0 to 1 items); Spaces: sequence (0 items); old_Models: sequence (0 to 4 items); old_Datasets: sequence (0 to 1 items); old_Spaces: sequence (0 items); paper_page_exists_pre_conf: int64 (0 to 1); type: string (2 classes); unique_id: int64 (0 to 855).
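To make the schema concrete, below is a minimal sketch of how the rows in the table might be loaded and inspected with the Hugging Face `datasets` library. The repository id `miccai/miccai-2024-papers` is a hypothetical placeholder (the dataset's actual location is not stated here); the field names follow the column schema above.

```python
# Minimal sketch: loading and inspecting rows of this table with the Hugging Face
# `datasets` library. The repo id below is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("miccai/miccai-2024-papers", split="train")  # hypothetical repo id

for row in ds.select(range(3)):
    # `authors` is a sequence of 1-22 "Last, First" strings; `GitHub` holds a single
    # URL that may be empty; -1 / [] values are sentinels for "no paper-page data".
    code_urls = [u for u in row["GitHub"] if u]
    print(f'#{row["unique_id"]:<4} [{row["type"]}] {row["title"]}')
    print(f'    authors : {", ".join(row["authors"])}')
    print(f'    arXiv   : {row["arxiv_id"] or "n/a"}')
    print(f'    code    : {code_urls[0] if code_urls else "none listed"}')
```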
bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type | unique_id |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://papers.miccai.org/miccai-2024/paper/1365_paper.pdf | @InProceedings{ Lia_IterMask2_MICCAI2024,
author = { Liang, Ziyun and Guo, Xiaoqing and Noble, J. Alison and Kamnitsas, Konstantinos },
title = { { IterMask2: Iterative Unsupervised Anomaly Segmentation via Spatial and Frequency Masking for Brain Lesions in MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Unsupervised anomaly segmentation approaches to pathology segmentation train a model on images of healthy subjects, which they define as the ‘normal’ data distribution. At inference, they aim to segment any pathologies in new images as ‘anomalies’, as they exhibit patterns that deviate from those in ‘normal’ training data. Prevailing methods follow the ‘corrupt-and-reconstruct’ paradigm. They intentionally corrupt an input image, reconstruct it to follow the learned ‘normal’ distribution, and subsequently segment anomalies based on reconstruction error. Corrupting an input image, however, inevitably leads to suboptimal reconstruction even of normal regions, causing false positives. To alleviate this, we propose a novel iterative spatial mask-refining strategy IterMask2. We iteratively mask areas of the image, reconstruct them, and update the mask based on reconstruction error. This iterative process progressively adds information about areas that are confidently normal as per the model. The increasing content guides reconstruction of nearby masked areas, improving reconstruction of normal tissue under these areas, reducing false positives. We also use high-frequency image content as an auxiliary input to provide additional structural information for masked areas. This further improves reconstruction error of normal areas in comparison to anomalous areas, facilitating segmentation of the latter. We conduct experiments on several brain lesion datasets and demonstrate the effectiveness of our method. Code will be published at: https://github.com/ZiyunLiang/IterMask2 | IterMask2: Iterative Unsupervised Anomaly Segmentation via Spatial and Frequency Masking for Brain Lesions in MRI | [
"Liang, Ziyun",
"Guo, Xiaoqing",
"Noble, J. Alison",
"Kamnitsas, Konstantinos"
] | Conference | 2406.02422 | [
"https://github.com/ZiyunLiang/IterMask2"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 500 |
|
null | https://papers.miccai.org/miccai-2024/paper/3284_paper.pdf | @InProceedings{ Hua_TopologicalGCN_MICCAI2024,
author = { Huang, Tianxiang and Shi, Jing and Jin, Ge and Li, Juncheng and Wang, Jun and Du, Jun and Shi, Jun },
title = { { Topological GCN for Improving Detection of Hip Landmarks from B-Mode Ultrasound Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | The B-mode ultrasound-based computer-aided diagnosis (CAD) has demonstrated its effectiveness for diagnosis of Developmental Dysplasia of the Hip (DDH) in infants. However, due to the effect of speckle noise in ultrasound images, it is still a challenging task to accurately detect hip landmarks. In this work, we propose a novel hip landmark detection model by integrating the Topological GCN (TGCN) with an Improved Conformer (TGCN-ICF) into a unified framework to improve detection performance. The TGCN-ICF includes two subnetworks: an Improved Conformer (ICF) subnetwork to generate heatmaps and a TGCN subnetwork to additionally refine landmark detection. This TGCN can effectively improve detection accuracy with the guidance of class labels. Moreover, a Mutual Modulation Fusion (MMF) module is developed for deeply exchanging and fusing the features extracted from the U-Net and Transformer branches in ICF. The experimental results on the real DDH dataset demonstrate that the proposed TGCN-ICF outperforms all the compared algorithms. | Topological GCN for Improving Detection of Hip Landmarks from B-Mode Ultrasound Images | [
"Huang, Tianxiang",
"Shi, Jing",
"Jin, Ge",
"Li, Juncheng",
"Wang, Jun",
"Du, Jun",
"Shi, Jun"
] | Conference | 2408.13495 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 501 |
|
null | https://papers.miccai.org/miccai-2024/paper/0712_paper.pdf | @InProceedings{ Zhu_SelfRegUNet_MICCAI2024,
author = { Zhu, Wenhui and Chen, Xiwen and Qiu, Peijie and Farazi, Mohammad and Sotiras, Aristeidis and Razi, Abolfazl and Wang, Yalin },
title = { { SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Since its introduction, UNet has been leading a variety of medical image segmentation tasks. Although numerous follow-up studies have also been dedicated to improving the performance of standard UNet, few have conducted in-depth analyses of the underlying interest pattern of UNet in medical image segmentation. In this paper, we explore the patterns learned in a UNet and observe two important factors that potentially affect its performance: (i) irrelative feature learned caused by asymmetric supervision; (ii) feature redundancy in the feature map. To this end, we propose to balance the supervision between encoder and decoder and reduce the redundant information in the UNet. Specifically, we use the feature map that contains the most semantic information (i.e., the last layer of the decoder) to provide additional supervision to other blocks to provide additional supervision and reduce feature redundancy by leveraging feature distillation. The proposed method can be easily integrated into existing UNet architecture in a plug-and-play fashion with negligible computational cost. The experimental results suggest that the proposed method consistently improves the performance of standard UNets on four medical image segmentation datasets. | SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation | [
"Zhu, Wenhui",
"Chen, Xiwen",
"Qiu, Peijie",
"Farazi, Mohammad",
"Sotiras, Aristeidis",
"Razi, Abolfazl",
"Wang, Yalin"
] | Conference | 2406.14896 | [
"https://github.com/ChongQingNoSubway/SelfReg-UNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 502 |
|
null | https://papers.miccai.org/miccai-2024/paper/1197_paper.pdf | @InProceedings{ Li_Dynamic_MICCAI2024,
author = { Li, Xiao-Xin and Zhu, Fang-Zheng and Yang, Junwei and Chen, Yong and Shen, Dinggang },
title = { { Dynamic Hybrid Unrolled Multi-Scale Network for Accelerated MRI Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In accelerated magnetic resonance imaging (MRI) reconstruction, the anatomy of a patient is recovered from a set of under-sampled measurements. Currently, unrolled hybrid architectures, incorporating both the beneficial bias of convolutions with the power of Transformers have been proven to be successful in solving this ill-posed inverse problem. The multi-scale strategy of the intra-cascades and that of the inter-cascades are used to decrease the high compute cost of Transformers and to rectify the spectral bias of Transformers, respectively. In this work, we proposed a dynamic Hybrid Unrolled Multi-Scale Network (dHUMUS-Net) by incorporating the two multi-scale strategies. A novel Optimal Scale Estimation Network is presented to dynamically create or choose the multi-scale Transformer-based modules in all cascades of dHUMUS-Net. Our dHUMUS-Net achieves significant improvements over the state-of-the-art methods on the publicly available fastMRI dataset. | Dynamic Hybrid Unrolled Multi-Scale Network for Accelerated MRI Reconstruction | [
"Li, Xiao-Xin",
"Zhu, Fang-Zheng",
"Yang, Junwei",
"Chen, Yong",
"Shen, Dinggang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 503 |
||
null | https://papers.miccai.org/miccai-2024/paper/2541_paper.pdf | @InProceedings{ Lin_ClassBalancing_MICCAI2024,
author = { Lin, Hongxin and Zhang, Chu and Wang, Mingyu and Huang, Bin and Shao, Jingjing and Zhang, Jinxiang and Gao, Zhenhua and Diao, Xianfen and Huang, Bingsheng },
title = { { Class-Balancing Deep Active Learning with Auto-Feature Mixing and Minority Push-Pull Sampling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Deep neural networks demand large-scale labeled datasets for optimal performance, yet the cost of annotation remains high. Deep active learning (DAL) offers a promising approach to reduce annotation cost while maintaining performance. However, traditional DAL methods often fail to balance performance and computational efficiency, and overlook the challenge posed by class imbalance. To address these challenges, we propose a novel framework, named Class-Balancing Deep Active Learning (CB-DAL), comprising two key modules: auto-mode feature mixing (Auto-FM) and minority push-pull sampling (MPPS). Auto-FM identifies informative samples by simply detecting inconsistencies in predicted labels after feature mixing, while MPPS mitigates the class imbalance within the selected training pool by selecting candidates whose features are close to the minority class centroid while distant from features of the labelled majority class. Evaluated across varying class imbalance ratios and dataset scales, CB-DAL outperforms traditional DAL methods and the counterparts designed for imbalanced datasets. Our method provides a simple yet effective solution to the class imbalance problem in DAL, with broad potential applications. | Class-Balancing Deep Active Learning with Auto-Feature Mixing and Minority Push-Pull Sampling | [
"Lin, Hongxin",
"Zhang, Chu",
"Wang, Mingyu",
"Huang, Bin",
"Shao, Jingjing",
"Zhang, Jinxiang",
"Gao, Zhenhua",
"Diao, Xianfen",
"Huang, Bingsheng"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 504 |
||
null | https://papers.miccai.org/miccai-2024/paper/4059_paper.pdf | @InProceedings{ Fen_Diversified_MICCAI2024,
author = { Feng, Xiaoyi and Zhang, Minqing and He, Mengxian and Gao, Mengdi and Wei, Hao and Yuan, Wu },
title = { { Diversified and Structure-realistic Fundus Image Synthesis for Diabetic Retinopathy Lesion Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Automated diabetic retinopathy (DR) lesion segmentation aids in improving the efficiency of DR detection. However, obtaining lesion annotations for model training heavily relies on domain expertise and is a labor-intensive process. In addition to classical methods for alleviating label scarcity issues, such as self-supervised and semi-supervised learning, with the rapid development of generative models, several studies have indicated that utilizing synthetic image-mask pairs as data augmentation is promising. Due to the insufficient labeled data available to train powerful generative models, however, the synthetic fundus data suffers from two drawbacks: 1) unrealistic anatomical structures, 2) limited lesion diversity. In this paper, we propose a novel framework to synthesize fundus with DR lesion masks under limited labels. To increase lesion variation, we designed a learnable module to generate anatomically plausible masks as the condition, rather than directly using lesion masks from the limited dataset. To reduce the difficulty of learning intricate structures, we avoid directly generating images solely from lesion mask conditions. Instead, we developed an inpainting strategy that enables the model to generate lesions only within the mask area based on easily accessible healthy fundus images. Subjective evaluations indicate that our approach can generate more realistic fundus images with lesions compared to other generative methods. The downstream lesion segmentation experiments demonstrate that our synthetic data resulted in the most improvement across multiple network architectures, surpassing state-of-the-art methods. | Diversified and Structure-realistic Fundus Image Synthesis for Diabetic Retinopathy Lesion Segmentation | [
"Feng, Xiaoyi",
"Zhang, Minqing",
"He, Mengxian",
"Gao, Mengdi",
"Wei, Hao",
"Yuan, Wu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 505 |
||
null | https://papers.miccai.org/miccai-2024/paper/2001_paper.pdf | @InProceedings{ Zho_Robust_MICCAI2024,
author = { Zhou, Xiaogen and Sun, Yiyou and Deng, Min and Chu, Winnie Chiu Wing and Dou, Qi },
title = { { Robust Semi-supervised Multimodal Medical Image Segmentation via Cross Modality Collaboration } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Multimodal learning leverages complementary information derived from different modalities, thereby enhancing performance in medical image segmentation. However, prevailing multimodal learning methods heavily rely on extensive well-annotated data from various modalities to achieve accurate segmentation performance. This dependence often poses a challenge in clinical settings due to limited availability of such data. Moreover, the inherent anatomical misalignment between different imaging modalities further complicates the endeavor to enhance segmentation performance. To address this problem, we propose a novel semi-supervised multimodal segmentation framework that is robust to scarce labeled data and misaligned modalities. Our framework employs a novel cross modality collaboration strategy to distill modality-independent knowledge, which is inherently associated with each modality, and integrates this information into a unified fusion layer for feature amalgamation. With a channel-wise semantic consistency loss, our framework ensures alignment of modality-independent information from a feature-wise perspective across modalities, thereby fortifying it against misalignments in multimodal scenarios. Furthermore, our framework effectively integrates contrastive consistent learning to regulate anatomical structures, facilitating anatomical-wise prediction alignment on unlabeled data in semi-supervised segmentation tasks. Our method achieves competitive performance compared to other multimodal methods across three tasks: cardiac, abdominal multi-organ, and thyroid-associated orbitopathy segmentations. It also demonstrates outstanding robustness in scenarios involving scarce labeled data and misaligned modalities. | Robust Semi-supervised Multimodal Medical Image Segmentation via Cross Modality Collaboration | [
"Zhou, Xiaogen",
"Sun, Yiyou",
"Deng, Min",
"Chu, Winnie Chiu Wing",
"Dou, Qi"
] | Conference | 2408.07341 | [
"https://github.com/med-air/CMC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 506 |
|
null | https://papers.miccai.org/miccai-2024/paper/0484_paper.pdf | @InProceedings{ Li_Image_MICCAI2024,
author = { Li, Zhe and Kainz, Bernhard },
title = { { Image Distillation for Safe Data Sharing in Histopathology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Histopathology can help clinicians make accurate diagnoses, determine disease prognosis, and plan appropriate treatment strategies. As deep learning techniques prove successful in the medical domain, the primary challenges become limited data availability and concerns about data sharing and privacy. Federated learning has addressed this challenge by training models locally and updating parameters on a server. However, issues, such as domain shift and bias, persist and impact overall performance. Dataset distillation presents an alternative approach to overcoming these challenges. It involves creating a small synthetic dataset that encapsulates essential information, which can be shared without constraints. At present, this paradigm is not practicable as current distillation approaches only generate non human readable representations and exhibit insufficient performance for downstream learning tasks. We train a latent diffusion model and construct a new distilled synthetic dataset with a small number of human readable synthetic images. Selection of maximally informative synthetic images is done via graph community analysis of the representation space. We compare downstream classification models trained on our synthetic distillation data to models trained on real data and reach performances suitable for practical application. | Image Distillation for Safe Data Sharing in Histopathology | [
"Li, Zhe",
"Kainz, Bernhard"
] | Conference | 2406.13536 | [
"https://github.com/ZheLi2020/InfoDist"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 507 |
|
null | https://papers.miccai.org/miccai-2024/paper/2280_paper.pdf | @InProceedings{ Ram_Ensemble_MICCAI2024,
author = { Ramanathan, Vishwesh and Pati, Pushpak and McNeil, Matthew and Martel, Anne L. },
title = { { Ensemble of Prior-guided Expert Graph Models for Survival Prediction in Digital Pathology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Survival prediction in pathology is a dynamic research field focused on identifying predictive biomarkers to enhance cancer survival models, providing valuable guidance for clinicians in treatment decisions. Graph-based methods, especially Graph Neural Networks (GNNs) leveraging rich interactions among different biological entities, have recently successfully predicted survival. However, the inherent heterogeneity among the entities within tissue slides significantly challenges the learning of GNNs. GNNs, operating with the homophily assumption, diffuse the intricate interactions among heterogeneous tissue entities in a tissue microenvironment. Further, the convoluted downstream task relevant information is not effectively exploited by graph-based methods when working with large slide-graphs. To address these challenges, we propose a novel prior-guided edge-attributed tissue-graph construction, followed by an ensemble of expert graph-attention survival models. Our method exploits diverse prognostic factors within numerous targeted tissue subgraphs of heterogeneous large slide-graphs.
Our method achieves state-of-the-art results on four cancer types, improving overall survival prediction by 4.33% compared to the competing methods. | Ensemble of Prior-guided Expert Graph Models for Survival Prediction in Digital Pathology | [
"Ramanathan, Vishwesh",
"Pati, Pushpak",
"McNeil, Matthew",
"Martel, Anne L."
] | Conference | [
"https://github.com/Vishwesh4/DGNN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 508 |
||
null | https://papers.miccai.org/miccai-2024/paper/2346_paper.pdf | @InProceedings{ Gu_Reliable_MICCAI2024,
author = { Gu, Ang Nan and Tsang, Michael and Vaseli, Hooman and Tsang, Teresa and Abolmaesumi, Purang },
title = { { Reliable Multi-View Learning with Conformal Prediction for Aortic Stenosis Classification in Echocardiography } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | The fundamental problem with ultrasound-guided diagnosis is that the acquired images are often 2-D cross-sections of a 3-D anatomy, potentially missing important anatomical details. This limitation leads to challenges in ultrasound echocardiography, such as poor visualization of heart valves or foreshortening of ventricles. Clinicians must interpret these images with inherent uncertainty, a nuance absent in machine learning’s one-hot labels. We propose Re-Training for Uncertainty (RT4U), a data-centric method to introduce uncertainty to weakly informative inputs in the training set. This simple approach can be incorporated into existing state-of-the-art aortic stenosis classification methods to further improve their accuracy. When combined with conformal prediction techniques, RT4U can yield adaptively sized prediction sets which are guaranteed to contain the ground truth class to a high accuracy.
We validate the effectiveness of RT4U on three diverse datasets: a public (TMED-2) and a private AS dataset, along with a CIFAR-10-derived toy dataset. Results show improvement on all the datasets. Our source code is publicly available at: https://github.com/an-michaelg/RT4U | Reliable Multi-View Learning with Conformal Prediction for Aortic Stenosis Classification in Echocardiography | [
"Gu, Ang Nan",
"Tsang, Michael",
"Vaseli, Hooman",
"Tsang, Teresa",
"Abolmaesumi, Purang"
] | Conference | 2409.09680 | [
"https://github.com/an-michaelg/RT4U"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 509 |
|
null | https://papers.miccai.org/miccai-2024/paper/1276_paper.pdf | @InProceedings{ Rob_DRIM_MICCAI2024,
author = { Robinet, Lucas and Berjaoui, Ahmad and Kheil, Ziad and Cohen-Jonathan Moyal, Elizabeth },
title = { { DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Real-life medical data is often multimodal and incomplete, fueling the growing need for advanced deep learning models capable of integrating them efficiently.
The use of diverse modalities, including histopathology slides, MRI, and genetic data, offers unprecedented opportunities to improve prognosis prediction and to unveil new treatment pathways.
Contrastive learning, widely used for deriving representations from paired data in multimodal tasks, assumes that different views contain the same task-relevant information and leverages only shared information.
This assumption becomes restrictive when handling medical data since each modality also harbors specific knowledge relevant to downstream tasks.
We introduce DRIM, a new multimodal method for capturing these shared and unique representations, despite data sparsity.
More specifically, given a set of modalities, we aim to encode a representation for each one that can be divided into two components: one encapsulating patient-related information common across modalities and the other, encapsulating modality-specific details.
This is achieved by increasing the shared information among different patient modalities while minimizing the overlap between shared and unique components within each modality.
Our method outperforms state-of-the-art algorithms on glioma patients survival prediction tasks, while being robust to missing modalities. To promote reproducibility, the code is made publicly available at https://github.com/Lucas-rbnt/DRIM. | DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data | [
"Robinet, Lucas",
"Berjaoui, Ahmad",
"Kheil, Ziad",
"Cohen-Jonathan Moyal, Elizabeth"
] | Conference | 2409.17055 | [
"https://github.com/Lucas-rbnt/DRIM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 510 |
|
null | https://papers.miccai.org/miccai-2024/paper/3437_paper.pdf | @InProceedings{ Hua_Resolving_MICCAI2024,
author = { Huang, Yuliang and Eiben, Bjoern and Thielemans, Kris and McClelland, Jamie R. },
title = { { Resolving Variable Respiratory Motion From Unsorted 4D Computed Tomography } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | 4D Computed Tomography (4DCT) is widely used for many clinical applications such as radiotherapy treatment planning, PET and ventilation imaging. However, common 4DCT methods reconstruct multiple breath cycles into a single, arbitrary breath cycle which can lead to various artefacts, impacting the downstream clinical applications. Surrogate-driven motion models can estimate continuous variable motion across multiple cycles based on CT segments ‘unsorted’ from 4DCT, but they require respiration surrogate signals with strong correlation to the internal motion, which are not always available. The method proposed in this study eliminates such dependency by adapting the hyper-gradient method to the optimization of surrogate signals as hyper-parameters, while achieving better or comparable performance, as demonstrated on digital phantom simulations and real patient data. Our method produces a high-quality motion-compensated image together with estimates of the motion, including breath-to-breath variability, throughout the image acquisition. Our method has the potential to improve downstream clinical applications, and also enables retrospective analysis of open-access 4DCT datasets where no respiration signals are stored. Code is available at https://github.com/Yuliang-Huang/4DCT-irregular-motion. | Resolving Variable Respiratory Motion From Unsorted 4D Computed Tomography | [
"Huang, Yuliang",
"Eiben, Bjoern",
"Thielemans, Kris",
"McClelland, Jamie R."
] | Conference | 2407.00665 | [
"https://github.com/Yuliang-Huang/4DCT-irregular-motion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 511 |
|
null | https://papers.miccai.org/miccai-2024/paper/1378_paper.pdf | @InProceedings{ Li_Epileptic_MICCAI2024,
author = { Li, Zhuoyi and Li, Wenjun and Zhu, Ning and Han, Junwei and Liu, Tianming and Chen, Beibei and Yan, Zhiqiang and Zhang, Tuo },
title = { { Epileptic Seizure Detection in SEEG Signals using a Unified Multi-scale Temporal-Spatial-Spectral Transformer Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | High-performance methods for automated detection of epileptic stereo-electroencephalography (SEEG) have important clinical research implications, improving the diagnostic efficiency and reducing physician burden. However, few studies have been able to consider the process of seizure propagation, thus failing to fully capture the deep representations and variations of SEEG in the temporal, spatial, and spectral domains. In this paper, we construct a novel long-term SEEG seizure dataset (LTSZ dataset), and propose channel embedding temporal-spatial-spectral transformer (CE-TSS-Transformer) framework. Firstly, we design channel embedding module to reduce feature dimensions and adaptively construct optimal representation for subsequent analysis. Secondly, we integrate unified multi-scale temporal-spatial-spectral analysis to capture multi-level, multi-domain deep features. Finally, we utilize the transformer encoder to learn the global relevance of features, enhancing the network’s ability to express SEEG features. Experimental results demonstrate state-of-the-art detection performance on the LTSZ dataset, achieving sensitivity, specificity, and accuracy of 99.48%, 99.80%, and 99.48%, respectively. Furthermore, we validate the scalability of the proposed framework on two public datasets of different signal sources, demonstrating the power of the CE-TSS-Transformer framework for capturing diverse temporal-spatial-spectral patterns in seizure detection. The code
is available at https://github.com/lizhuoyi-eve/CE-TSS-Transformer. | Epileptic Seizure Detection in SEEG Signals using a Unified Multi-scale Temporal-Spatial-Spectral Transformer Model | [
"Li, Zhuoyi",
"Li, Wenjun",
"Zhu, Ning",
"Han, Junwei",
"Liu, Tianming",
"Chen, Beibei",
"Yan, Zhiqiang",
"Zhang, Tuo"
] | Conference | [
"https://github.com/lizhuoyi-eve/CE-TSS-Transformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 512 |
||
null | https://papers.miccai.org/miccai-2024/paper/1453_paper.pdf | @InProceedings{ Zho_HeartBeat_MICCAI2024,
author = { Zhou, Xinrui and Huang, Yuhao and Xue, Wufeng and Dou, Haoran and Cheng, Jun and Zhou, Han and Ni, Dong },
title = { { HeartBeat: Towards Controllable Echocardiography Video Synthesis with Multimodal Conditions-Guided Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Echocardiography (ECHO) video is widely used for cardiac examination. In clinical practice, this procedure heavily relies on operator experience, which needs years of training and possibly the assistance of deep learning-based systems for enhanced accuracy and efficiency. However, it is challenging since acquiring sufficient customized data (e.g., abnormal cases) for novice training and deep model development is clinically unrealistic. Hence, controllable ECHO video synthesis is highly desirable. In this paper, we propose a novel diffusion-based framework named HeartBeat towards controllable and high-fidelity ECHO video synthesis. Our highlight is three-fold. First, HeartBeat serves as a unified framework that enables perceiving multimodal conditions simultaneously to guide controllable generation. Second, we factorize the multimodal conditions into local and global ones, with two insertion strategies separately providing fine- and coarse-grained controls in a composable and flexible manner. In this way, users can synthesize ECHO videos that conform to their mental imagery by combining multimodal control signals. Third, we propose to decouple the visual concepts and temporal dynamics learning using a two-stage training scheme for simplifying the model training. One more interesting thing is that HeartBeat can easily generalize to mask-guided cardiac MRI synthesis in a few shots, showcasing its scalability to broader applications. Extensive experiments on two public datasets show the efficacy of the proposed HeartBeat. | HeartBeat: Towards Controllable Echocardiography Video Synthesis with Multimodal Conditions-Guided Diffusion Models | [
"Zhou, Xinrui",
"Huang, Yuhao",
"Xue, Wufeng",
"Dou, Haoran",
"Cheng, Jun",
"Zhou, Han",
"Ni, Dong"
] | Conference | 2406.14098 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 513 |
|
null | https://papers.miccai.org/miccai-2024/paper/1410_paper.pdf | @InProceedings{ Wan_Contextguided_MICCAI2024,
author = { Wan, Kaiwen and Wang, Bomin and Wu, Fuping and Gong, Haiyu and Zhuang, Xiahai },
title = { { Context-guided Continual Reinforcement Learning for Landmark Detection with Incomplete Data } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Existing landmark detection methods are primarily designed for centralized learning scenarios where all training data and labels are complete and available throughout the entire training phase. In real-world scenarios, training data may be collected sequentially, covering only part of the region of interest or providing incomplete landmark labels.
In this work, we propose a novel continual reinforcement learning framework to tackle this complex situation in landmark detection.
To handle the increasing number of landmark targets during training, we introduce a Q-learning network that takes both observations and prompts as input. The prompts are stored in a buffer and utilized to guide the prediction for each landmark, enabling our method to adapt to the intricacies of the data collection process.
We validate our approach on two datasets: the RSNA-PBA dataset, representing scenarios with complete images and incomplete labels, and the WB-DXA dataset, representing situations where both images and labels are incomplete. The results demonstrate the effectiveness of the proposed method in landmark detection tasks with complex data structures. The source code will be available from https://github.com/kevinwolcano/CgCRL. | Context-guided Continual Reinforcement Learning for Landmark Detection with Incomplete Data | [
"Wan, Kaiwen",
"Wang, Bomin",
"Wu, Fuping",
"Gong, Haiyu",
"Zhuang, Xiahai"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 514 |
||
null | https://papers.miccai.org/miccai-2024/paper/2090_paper.pdf | @InProceedings{ Lia_3DSAutoMed_MICCAI2024,
author = { Liang, Junjie and Cao, Peng and Yang, Wenju and Yang, Jinzhu and Zaiane, Osmar R. },
title = { { 3D-SAutoMed: Automatic Segment Anything Model for 3D Medical Image Segmentation from Local-Global Perspective } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | 3D medical image segmentation is critical for clinical diagnosis and treatment planning. Recently, owing to its powerful generalization, the foundational segmentation model SAM has been widely used in medical imaging. However, the existing SAM variants still have many limitations, including the lack of 3D awareness and automatic prompts. To address these limitations, we present a novel SAM-based segmentation framework for 3D medical images, namely 3D-SAutoMed. We respectively propose the Inter- and Intra-slice Attention and Historical slice Information Sharing strategy to share local and global information, so as to enable SAM to be 3D-aware. Meanwhile, we propose a Box Prompt Generator to automatically generate prompt embeddings, enabling full automation in SAM. Our results demonstrate that 3D-SAutoMed outperforms advanced universal methods and SAM variants on both metrics across the BTCV, CHAOS and SegTHOR datasets. Particularly, a large improvement in HD score is achieved, e.g. 44% and 20.7% improvement compared with the best result among the other SAM variants on the BTCV and SegTHOR datasets, respectively. | 3D-SAutoMed: Automatic Segment Anything Model for 3D Medical Image Segmentation from Local-Global Perspective | [
"Liang, Junjie",
"Cao, Peng",
"Yang, Wenju",
"Yang, Jinzhu",
"Zaiane, Osmar R."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 515 |
||
null | https://papers.miccai.org/miccai-2024/paper/2888_paper.pdf | @InProceedings{ Yan_Brain_MICCAI2024,
author = { Yang, Li and He, Zhibin and Zhong, Tianyang and Li, Changhe and Zhu, Dajiang and Han, Junwei and Liu, Tianming and Zhang, Tuo },
title = { { Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | A close relation between brain function and cortical folding has been demonstrated by macro-/micro- imaging, computational modeling, and genetics. Since gyri and sulci, two basic anatomical building blocks of cortical folding patterns, were suggested to bear different functional roles, a precise mapping from brain function to gyro-sulcal patterns can provide profound insights into both biological and artificial neural networks. However, there lacks a generic theory and effective computational model so far, due to the highly nonlinear relation between them, huge inter-individual variabilities and a sophisticated description of brain function regions/networks distribution as mosaics, such that spatial patterning of them has not been considered. To this end, as a preliminary effort, we adopted brain functional gradients derived from resting-state fMRI to embed the “gradual” change of functional connectivity patterns, and developed a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains. The convolution on mesh considers the spatial organization of functional gradients and folding patterns on a cortical sheet and the newly designed channel attention block enhances the interpretability of the
contribution of different functional gradients to cortical folding prediction. Experiments show that the prediction performance via our model outperforms other state-of-the-art models. In addition, we found that the dominant functional gradients contribute less to folding prediction. On the activation maps of the last layer, some well-studied cortical landmarks are found on the borders of, rather than within, the highly activated regions. These results and findings suggest that a specifically designed artificial neural network can improve the precision of the mapping between brain functions and cortical folding patterns, and can provide valuable insight of brain anatomy-function relation for neuroscience. | Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution | [
"Yang, Li",
"He, Zhibin",
"Zhong, Tianyang",
"Li, Changhe",
"Zhu, Dajiang",
"Han, Junwei",
"Liu, Tianming",
"Zhang, Tuo"
] | Conference | 2205.10605 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 516 |
|
null | https://papers.miccai.org/miccai-2024/paper/1554_paper.pdf | @InProceedings{ Wan_Ordinal_MICCAI2024,
author = { Wang, Xin and Tan, Tao and Gao, Yuan and Marcus, Eric and Han, Luyi and Portaluri, Antonio and Zhang, Tianyu and Lu, Chunyao and Liang, Xinglong and Beets-Tan, Regina and Teuwen, Jonas and Mann, Ritse },
title = { { Ordinal Learning: Longitudinal Attention Alignment Model for Predicting Time to Future Breast Cancer Events from Mammograms } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Precision breast cancer (BC) risk assessment is crucial for developing individualized screening and prevention. Despite the promising potential of recent mammogram (MG) based deep learning models in predicting BC risk, they mostly overlook the “time-to-future-event” ordering among patients and exhibit limited explorations into how they track history changes in breast tissue, thereby limiting their clinical application. In this work, we propose a novel method, named OA-BreaCR, to precisely model the ordinal relationship of the time to and between BC events while incorporating longitudinal breast tissue changes in a more explainable manner. We validate our method on public EMBED and inhouse datasets, comparing with existing BC risk prediction and time prediction methods. Our ordinal learning method OA-BreaCR outperforms existing methods in both BC risk and time-to-future-event prediction tasks. Additionally, ordinal heatmap visualizations show the model’s attention over time. Our findings underscore the importance of interpretable and precise risk assessment for enhancing BC screening and prevention efforts. The code will be accessible to the public. | Ordinal Learning: Longitudinal Attention Alignment Model for Predicting Time to Future Breast Cancer Events from Mammograms | [
"Wang, Xin",
"Tan, Tao",
"Gao, Yuan",
"Marcus, Eric",
"Han, Luyi",
"Portaluri, Antonio",
"Zhang, Tianyu",
"Lu, Chunyao",
"Liang, Xinglong",
"Beets-Tan, Regina",
"Teuwen, Jonas",
"Mann, Ritse"
] | Conference | 2409.06887 | [
"https://github.com/xinwangxinwang/OA-BreaCR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 517 |
|
null | https://papers.miccai.org/miccai-2024/paper/3076_paper.pdf | @InProceedings{ Van_Privacy_MICCAI2024,
author = { Van der Goten, Lennart A. and Smith, Kevin },
title = { { Privacy Protection in MRI Scans Using 3D Masked Autoencoders } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | MRI scans provide valuable medical information, however they also contain sensitive and personally identifiable information that needs to be protected. Whereas MRI metadata is easily sanitized, MRI image data is a privacy risk because it contains information to render highly-realistic 3D visualizations of a patient’s head, enabling malicious actors to possibly identify the subject by cross-referencing a database. Data anonymization and de-identification is concerned with ensuring the privacy and confidentiality of individuals’ personal information. Traditional MRI de-identification methods remove privacy-sensitive parts (e.g. eyes, nose etc.) from a given scan. This comes at the expense of introducing a domain shift that can throw off downstream analyses. In this work, we propose CP-MAE, a model that de-identifies the face by remodeling it (e.g. changing the face) rather than by removing parts using masked autoencoders.
CP-MAE outperforms all previous approaches in terms of downstream task performance as well as de-identification.
With our method we are able to synthesize high-fidelity scans of resolution up to 256^3 on the ADNI and OASIS-3 datasets – compared to 128^3 with previous approaches – which constitutes an eight-fold increase in the number of voxels. | Privacy Protection in MRI Scans Using 3D Masked Autoencoders | [
"Van der Goten, Lennart A.",
"Smith, Kevin"
] | Conference | 2310.15778 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 518 |
|
null | https://papers.miccai.org/miccai-2024/paper/2268_paper.pdf | @InProceedings{ Qiu_Leveraging_MICCAI2024,
author = { Qiu, Jingna and Aubreville, Marc and Wilm, Frauke and Öttl, Mathias and Utz, Jonas and Schlereth, Maja and Breininger, Katharina },
title = { { Leveraging Image Captions for Selective Whole Slide Image Annotation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Acquiring annotations for whole slide images (WSIs)-based deep learning tasks, such as creating tissue segmentation masks or detecting mitotic figures, is a laborious process due to the extensive image size and the significant manual work involved in the annotation. This paper focuses on identifying and annotating specific image regions that optimize model training, given a limited annotation budget. While random sampling helps capture data variance by collecting annotation regions throughout the WSIs, insufficient data curation may result in an inadequate representation of minority classes. Recent studies proposed diversity sampling to select a set of regions that maximally represent unique characteristics of the WSIs. This is done by pretraining on unlabeled data through self-supervised learning and then clustering all regions in the latent space. However, establishing the optimal number of clusters can be difficult and not all clusters are task-relevant. This paper presents prototype sampling, a new method for annotation region selection. It discovers regions exhibiting typical characteristics of each task-specific class. The process entails recognizing class prototypes from extensive histopathology image-caption databases and detecting unlabeled image regions that resemble these prototypes. Our results show that prototype sampling is more effective than random and diversity sampling in identifying annotation regions with valuable training information, resulting in improved model performance in semantic segmentation and mitotic figure detection tasks. Code is available at https://github.com/DeepMicroscopy/Prototype-sampling. | Leveraging Image Captions for Selective Whole Slide Image Annotation | [
"Qiu, Jingna",
"Aubreville, Marc",
"Wilm, Frauke",
"Öttl, Mathias",
"Utz, Jonas",
"Schlereth, Maja",
"Breininger, Katharina"
] | Conference | 2407.06363 | [
"https://github.com/DeepMicroscopy/Prototype-sampling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 519 |
|
null | https://papers.miccai.org/miccai-2024/paper/0380_paper.pdf | @InProceedings{ Wan_Latent_MICCAI2024,
author = { Wang, Edward and Au, Ryan and Lang, Pencilla and Mattonen, Sarah A. },
title = { { Latent Spaces Enable Transformer-Based Dose Prediction in Complex Radiotherapy Plans } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Evidence is accumulating in favour of using stereotactic ablative body radiotherapy (SABR) to treat multiple cancer lesions in the lung. Multi-lesion lung SABR plans are complex and require significant resources to create. In this work, we propose a novel two-stage latent transformer framework (LDFormer) for dose prediction of lung SABR plans with varying numbers of lesions. In the first stage, patient anatomical information and the dose distribution are encoded into a latent space. In the second stage, a transformer learns to predict the dose latent from the anatomical latents. Causal attention is modified to adapt to different numbers of lesions. LDFormer outperforms a state-of-the-art generative adversarial network on dose conformality in and around lesions, and the performance gap widens when considering overlapping lesions. LDFormer generates predictions of 3-D dose distributions in under 30s on consumer hardware, and has the potential to assist physicians with clinical decision making, reduce resource costs, and accelerate treatment planning. | Latent Spaces Enable Transformer-Based Dose Prediction in Complex Radiotherapy Plans | [
"Wang, Edward",
"Au, Ryan",
"Lang, Pencilla",
"Mattonen, Sarah A."
] | Conference | 2407.08650 | [
"https://github.com/edwardwang1/LDFormer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 520 |
|
null | https://papers.miccai.org/miccai-2024/paper/3643_paper.pdf | @InProceedings{ Buj_Seeing_MICCAI2024,
author = { Bujny, Mariusz and Jesionek, Katarzyna and Nalepa, Jakub and Bartczak, Tomasz and Miszalski-Jamka, Karol and Kostur, Marcin },
title = { { Seeing the Invisible: On Aortic Valve Reconstruction in Non-Contrast CT } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Accurate segmentation of the aortic valve (AV) in computed tomography (CT) scans is crucial for assessing AV disease severity and identifying patients who may benefit from interventional treatments, such as surgical and percutaneous procedures. Evaluation of AV calcium score on non-contrast CT scans emphasizes the importance of identifying AV from these scans. However, it is not a trivial task due to the extremely low visibility of AV in this type of medical images. In this paper, we propose a method for semi-automatic generation of Ground Truth (GT) data for this problem based on image registration. In a weakly-supervised learning process, we train neural network models capable of accurate segmentation of AV based exclusively on non-contrast CT scans. We also present a novel approach for the evaluation of segmentation accuracy, based on per-patient, rigid registration of masks segmented in contrast and non-contrast images. Evaluation on an open-source dataset demonstrates that our model can identify AV with a mean error of less than 1 mm, suggesting significant potential for clinical application. In particular, the model can be used to enhance end-to-end deep learning approaches for AV calcium scoring by offering substantial accuracy improvements and increasing the explainability. Furthermore, it contributes to lowering the rate of false positives in coronary artery calcium scoring through the meticulous exclusion of aortic root calcifications. | Seeing the Invisible: On Aortic Valve Reconstruction in Non-Contrast CT | [
"Bujny, Mariusz",
"Jesionek, Katarzyna",
"Nalepa, Jakub",
"Bartczak, Tomasz",
"Miszalski-Jamka, Karol",
"Kostur, Marcin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 521 |
||
null | https://papers.miccai.org/miccai-2024/paper/3852_paper.pdf | @InProceedings{ Cho_Misjudging_MICCAI2024,
author = { Cho, Sue Min and Taylor, Russell H. and Unberath, Mathias },
title = { { Misjudging the Machine: Gaze May Forecast Human-Machine Team Performance in Surgery } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | In human-centered assurance, an emerging field in technology-assisted surgery, humans assess algorithmic outputs by interpreting the provided information. Focusing on image-based registration, we investigate whether gaze patterns can predict the efficacy of human-machine collaboration. Gaze data is collected during a user study to assess 2D/3D registration results with different visualization paradigms. We then comprehensively examine gaze metrics (fixation count, fixation duration, stationary gaze entropy, and gaze transition entropy) and their relationship with assessment error. We also test the effect of visualization paradigms on different gaze metrics. There is a significant negative correlation between assessment error and both fixation count and fixation duration; increased fixation counts or duration are associated with lower assessment errors. Neither stationary gaze entropy nor gaze transition entropy displays a significant relationship with assessment error. Notably, visualization paradigms demonstrate a significant impact on all four gaze metrics. Gaze metrics hold potential as predictors for human-machine performance. The importance and impact of various gaze metrics require further task-specific exploration. Our analyses emphasize that the presentation of visual information crucially influences user perception. | Misjudging the Machine: Gaze May Forecast Human-Machine Team Performance in Surgery | [
"Cho, Sue Min",
"Taylor, Russell H.",
"Unberath, Mathias"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 522 |
||
null | https://papers.miccai.org/miccai-2024/paper/2349_paper.pdf | @InProceedings{ Zen_Reliable_MICCAI2024,
author = { Zeng, Hongye and Zou, Ke and Chen, Zhihao and Zheng, Rui and Fu, Huazhu },
title = { { Reliable Source Approximation: Source-Free Unsupervised Domain Adaptation for Vestibular Schwannoma MRI Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Source-Free Unsupervised Domain Adaptation (SFUDA) has recently become a focus in medical image domain adaptation, as it only utilizes the source model and does not require annotated target data. However, current SFUDA approaches cannot tackle the complex segmentation task across different MRI sequences, such as vestibular schwannoma segmentation. To address this problem, we propose Reliable Source Approximation (RSA), which can generate source-like and structure-preserved images from the target domain for updating model parameters and adapting domain shifts. Specifically, RSA deploys a conditional diffusion model to generate multiple source-like images under the guidance of varying edges of one target image. An uncertainty estimation module is then introduced to predict and refine reliable pseudo labels of generated images, and the prediction consistency is developed to select the most reliable generations. Subsequently, all reliable generated images and their pseudo labels are utilized to update the model. Our RSA is validated on vestibular schwannoma segmentation across multi-modality MRI. The experimental results demonstrate that RSA consistently improves domain adaptation performance over other state-of-the-art SFUDA methods. We will release all code for reproduction after acceptance. | Reliable Source Approximation: Source-Free Unsupervised Domain Adaptation for Vestibular Schwannoma MRI Segmentation | [
"Zeng, Hongye",
"Zou, Ke",
"Chen, Zhihao",
"Zheng, Rui",
"Fu, Huazhu"
] | Conference | 2405.16102 | [
"https://github.com/zenghy96/Reliable-Source-Approximation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 523 |
|
null | https://papers.miccai.org/miccai-2024/paper/0827_paper.pdf | @InProceedings{ Liu_LGS_MICCAI2024,
author = { Liu, Hengyu and Liu, Yifan and Li, Chenxin and Li, Wuyang and Yuan, Yixuan },
title = { { LGS: A Light-weight 4D Gaussian Splatting for Efficient Surgical Scene Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | The advent of 3D Gaussian Splatting (3D-GS) techniques and their dynamic scene modeling variants, 4D-GS, offers promising prospects for real-time rendering of dynamic surgical scenarios. However, the prerequisites for modeling dynamic scenes, namely a large number of Gaussian units, high-dimensional Gaussian attributes and high-resolution deformation fields, all lead to severe storage issues that hinder real-time rendering in resource-limited surgical equipment. To surmount these limitations, we introduce a Lightweight 4D Gaussian Splatting framework (LGS) that can liberate the efficiency bottlenecks of both rendering and storage for dynamic endoscopic reconstruction. Specifically, to minimize the redundancy of Gaussian quantities, we propose Deformation-Aware Pruning by gauging the impact of each Gaussian on deformation. Concurrently, to reduce the redundancy of Gaussian attributes, we simplify the representation of textures and lighting in non-crucial areas by pruning the dimensions of Gaussian attributes. We further resolve the feature field redundancy caused by the high resolution of the 4D neural spatiotemporal encoder for modeling dynamic scenes via a 4D feature field condensation. Experiments on public benchmarks demonstrate the efficacy of LGS in terms of a compression rate exceeding 9 times while maintaining pleasing visual quality and real-time rendering efficiency. LGS confirms a substantial step towards its application in robotic surgical services. Project page: https://lgs-endo.github.io/. | LGS: A Light-weight 4D Gaussian Splatting for Efficient Surgical Scene Reconstruction | [
"Liu, Hengyu",
"Liu, Yifan",
"Li, Chenxin",
"Li, Wuyang",
"Yuan, Yixuan"
] | Conference | 2406.16073 | [
"https://github.com/CUHK-AIM-Group/LGS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 524 |
|
null | https://papers.miccai.org/miccai-2024/paper/1486_paper.pdf | @InProceedings{ Thi_Differentiable_MICCAI2024,
author = { Thies, Mareike and Maul, Noah and Mei, Siyuan and Pfaff, Laura and Vysotskaya, Nastassia and Gu, Mingxuan and Utz, Jonas and Possart, Dennis and Folle, Lukas and Wagner, Fabian and Maier, Andreas },
title = { { Differentiable Score-Based Likelihoods: Learning CT Motion Compensation From Clean Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Motion artifacts can compromise the diagnostic value of computed tomography (CT) images. Motion correction approaches require a per-scan estimation of patient-specific motion patterns. In this work, we train a score-based model to act as a probability density estimator for clean head CT images. Given the trained model, we quantify the deviation of a given motion-affected CT image from the ideal distribution through likelihood computation. We demonstrate that the likelihood can be utilized as a surrogate metric for motion artifact severity in the CT image facilitating the application of an iterative, gradient-based motion compensation algorithm. By optimizing the underlying motion parameters to maximize likelihood, our method effectively reduces motion artifacts, bringing the image closer to the distribution of motion-free scans. Our approach achieves comparable performance to state-of-the-art methods while eliminating the need for a representative data set of motion-affected samples. This is particularly advantageous in real-world applications, where patient motion patterns may exhibit unforeseen variability, ensuring robustness without implicit assumptions about recoverable motion types. | Differentiable Score-Based Likelihoods: Learning CT Motion Compensation From Clean Images | [
"Thies, Mareike",
"Maul, Noah",
"Mei, Siyuan",
"Pfaff, Laura",
"Vysotskaya, Nastassia",
"Gu, Mingxuan",
"Utz, Jonas",
"Possart, Dennis",
"Folle, Lukas",
"Wagner, Fabian",
"Maier, Andreas"
] | Conference | 2404.14747 | [
"https://github.com/mareikethies/moco_diff_likelihood"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 525 |
|
null | https://papers.miccai.org/miccai-2024/paper/0399_paper.pdf | @InProceedings{ Guo_MMSummary_MICCAI2024,
author = { Guo, Xiaoqing and Men, Qianhui and Noble, J. Alison },
title = { { MMSummary: Multimodal Summary Generation for Fetal Ultrasound Video } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | We present the first automated multimodal summary generation system, MMSummary, for medical imaging video, particularly with a focus on fetal ultrasound analysis. Imitating the examination process performed by a human sonographer, MMSummary is designed as a three-stage pipeline, progressing from keyframe detection to keyframe captioning and finally anatomy segmentation and measurement. In the keyframe detection stage, an innovative automated workflow is proposed to progressively select a concise set of keyframes, preserving sufficient video information without redundancy. Subsequently, we adapt a large language model to generate meaningful captions for fetal ultrasound keyframes in the keyframe captioning stage. If a keyframe is captioned as fetal biometry, the segmentation and measurement stage estimates biometric parameters by segmenting the region of interest according to the textual prior. The MMSummary system provides comprehensive summaries for fetal ultrasound examinations and based on reported experiments is estimated to reduce scanning time by approximately 31.5%, thereby suggesting the potential to enhance clinical workflow efficiency. | MMSummary: Multimodal Summary Generation for Fetal Ultrasound Video | [
"Guo, Xiaoqing",
"Men, Qianhui",
"Noble, J. Alison"
] | Conference | 2408.03761 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 526 |
|
null | https://papers.miccai.org/miccai-2024/paper/0310_paper.pdf | @InProceedings{ Pei_DepthDriven_MICCAI2024,
author = { Pei, Jialun and Cui, Ruize and Li, Yaoqian and Si, Weixin and Qin, Jing and Heng, Pheng-Ann },
title = { { Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Laparoscopic liver surgery presents a complex, dynamic intraoperative environment for surgeons, in which it remains a significant challenge to distinguish critical or even hidden structures inside the liver.
Liver anatomical landmarks, e.g., ridge and ligament, serve as important markers for 2D-3D alignment, which can significantly enhance the spatial perception of surgeons for precise surgery. To facilitate the detection of laparoscopic liver landmarks, we collect a novel dataset called L3D, which comprises 1,152 frames with elaborated landmark annotations from surgical videos of 39 patients across two medical sites. For benchmarking purposes, 12 mainstream detection methods are selected and comprehensively evaluated on L3D. Further, we propose a depth-driven geometric prompt learning network, namely D2GPLand. Specifically, we design a Depth-aware Prompt Embedding (DPE) module that is guided by self-supervised prompts and generates semantically relevant geometric information with the benefit of global depth cues extracted from SAM-based features. Additionally, a Semantic-specific Geometric Augmentation (SGA) scheme is introduced to efficiently merge RGB-D spatial and geometric information through reverse anatomic perception. The experimental results indicate that D2GPLand obtains state-of-the-art performance on L3D, with 63.52% DICE and 48.68% IoU scores. Together with 2D-3D fusion technology, our method can directly provide the surgeon with intuitive guidance information in laparoscopic scenarios. Our code and dataset are available at https://github.com/PJLallen/D2GPLand. | Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection | [
"Pei, Jialun",
"Cui, Ruize",
"Li, Yaoqian",
"Si, Weixin",
"Qin, Jing",
"Heng, Pheng-Ann"
] | Conference | 2406.17858 | [
"https://github.com/PJLallen/D2GPLand"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 527 |
|
null | https://papers.miccai.org/miccai-2024/paper/2723_paper.pdf | @InProceedings{ He_mQSM_MICCAI2024,
author = { He, Junjie and Fu, Bangkang and Xiong, Zhenliang and Peng, Yunsong and Wang, Rongpin },
title = { { mQSM: Multitask Learning-based Quantitative Susceptibility Mapping for Iron Analysis in Brain } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Quantitative analysis of brain iron is widely utilized in neurodegenerative diseases, typically accomplished through the utilization of quantitative susceptibility mapping (QSM) and medical image registration. However, this approach heavily relies on registration accuracy, and image registration can alter QSM values, leading to distorted quantitative analysis results. This paper proposes a multi-modal multitask QSM reconstruction algorithm (mQSM) and introduces a mutual Transformer mechanism (mTrans) to efficiently fuse multi-modal information for QSM reconstruction and brain region segmentation tasks. mTrans leverages Transformer computations on Query and Value feature matrices for mutual attention calculation, eliminating the need for additional computational modules and ensuring high efficiency in multi-modal data fusion. Experimental results demonstrate an average dice coefficient of 0.92 for segmentation, and QSM reconstruction achieves an SSIM evaluation of 0.9854 compared to the gold standard. Moreover, segmentation-based (mQSM) brain iron quantitative analysis shows no significant difference from the ground truth, whereas the registration-based approach exhibits notable differences in brain cortical regions compared to the ground truth. Our code is available at https://github.com/TyrionJ/mQSM. | mQSM: Multitask Learning-based Quantitative Susceptibility Mapping for Iron Analysis in Brain | [
"He, Junjie",
"Fu, Bangkang",
"Xiong, Zhenliang",
"Peng, Yunsong",
"Wang, Rongpin"
] | Conference | [
"https://github.com/TyrionJ/mQSM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 528 |
||
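The mutual Transformer (mTrans) fusion described in the mQSM record above can be gestured at with a generic two-stream cross-attention block, where each modality's queries attend to the other modality's features. This is only a rough sketch under that assumption; the paper's specific reuse of Query and Value matrices may be wired differently.

```python
import torch
import torch.nn as nn

class MutualCrossAttention(nn.Module):
    """Generic sketch: each modality stream queries the other one."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, tokens, dim) features from two input modalities
        a2b, _ = self.attn_a(query=feat_a, key=feat_b, value=feat_b)
        b2a, _ = self.attn_b(query=feat_b, key=feat_a, value=feat_a)
        return feat_a + a2b, feat_b + b2a  # residual fusion of both streams


x = torch.randn(2, 128, 64)
y = torch.randn(2, 128, 64)
fused_x, fused_y = MutualCrossAttention()(x, y)
print(fused_x.shape, fused_y.shape)
```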
null | https://papers.miccai.org/miccai-2024/paper/2514_paper.pdf | @InProceedings{ Zha_ACurvatureGuided_MICCAI2024,
author = { Zhao, Fenqiang and Tang, Yuxing and Lu, Le and Zhang, Ling },
title = { { A Curvature-Guided Coarse-to-Fine Framework for Enhanced Whole Brain Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Whole brain segmentation, which divides the entire brain volume into anatomically labeled regions of interest (ROIs), is a crucial step in brain image analysis. Traditional methods often rely on intricate pipelines that, while accurate, are time-consuming and require expertise due to their complexity. Alternatively, end-to-end deep learning methods offer rapid whole brain segmentation but often sacrifice accuracy due to neglect of geometric features. In this paper, we propose a novel framework that integrates the key curvature feature, previously utilized by complex surface-based pipelines but overlooked by volume-based methods, into deep neural networks, thereby achieving both high accuracy and efficiency. Specifically, we first train a coarse anatomical segmentation model focusing on high-contrast tissue types, i.e., white matter (WM), gray matter (GM), and subcortical regions. Next, we reconstruct the cortical surfaces using the WM/GM interface and compute curvature features for each vertex on the surfaces. These curvature features are then mapped back to the image space, where they are combined with intensity features to train a finer cortical parcellation model. We also simplify the process of cortical surface reconstruction and curvature computation, thereby enhancing the overall efficiency of the framework. Additionally, our framework is flexible and can incorporate any neural network as its backbone. It can serve as a plug-and-play component to enhance the whole brain segmentation results of any segmentation network. Experimental results on the public Mindboggle-101 dataset demonstrate improved segmentation performance with comparable speed compared to various deep learning methods. | A Curvature-Guided Coarse-to-Fine Framework for Enhanced Whole Brain Segmentation | [
"Zhao, Fenqiang",
"Tang, Yuxing",
"Lu, Le",
"Zhang, Ling"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 529 |
||
null | https://papers.miccai.org/miccai-2024/paper/2088_paper.pdf | @InProceedings{ Ran_DeepRepViz_MICCAI2024,
author = { Rane, Roshan Prakash and Kim, JiHoon and Umesha, Arjun and Stark, Didem and Schulz, Marc-André and Ritter, Kerstin },
title = { { DeepRepViz: Identifying potential confounders in deep learning model predictions } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Deep Learning (DL) has emerged as a powerful tool in neuroimaging research. DL models predicting brain pathologies, psychological behaviors, and cognitive traits from neuroimaging data have the potential to discover the neurobiological basis of these phenotypes. However, these models can be biased by information related to age, sex, or spurious imaging artifacts encoded in the neuroimaging data.
In this study, we introduce a lightweight and easy-to-use framework called ‘DeepRepViz’ designed to detect such potential confounders in DL model predictions and enhance the transparency of predictive DL models. DeepRepViz comprises two components - an online visualization tool (available at https://deep-rep-viz.vercel.app/) and a metric called the ‘Con-score’. The tool enables researchers to visualize the final latent representation of their DL model and qualitatively inspect it for biases. The Con-score, or the ‘concept encoding’ score, quantifies the extent to which potential confounders like sex or age are encoded in the final latent representation and influence the model predictions. We illustrate the rationale of the Con-score formulation using a simulation experiment.
Next, we demonstrate the utility of the DeepRepViz framework by applying it to three typical neuroimaging-based prediction tasks (n=12000). These include (a) distinguishing chronic alcohol users from controls, (b) classifying sex, and (c) predicting the speed of completing a cognitive task known as ‘trail making’.
In the DL model predicting chronic alcohol users, DeepRepViz uncovers a strong influence of sex on the predictions (Con-score=0.35). In the model predicting cognitive task performance, DeepRepViz reveals that age plays a major role (Con-score=0.3). Thus, the DeepRepViz framework enables neuroimaging researchers to systematically examine their model and identify potential biases, thereby improving the transparency of predictive DL models in neuroimaging studies. | DeepRepViz: Identifying potential confounders in deep learning model predictions | [
"Rane, Roshan Prakash",
"Kim, JiHoon",
"Umesha, Arjun",
"Stark, Didem",
"Schulz, Marc-André",
"Ritter, Kerstin"
] | Conference | [
"https://github.com/ritterlab/DeepRepViz"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 530 |
||
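To make the idea behind the Con-score in the DeepRepViz record above concrete, the sketch below implements a rough, hypothetical proxy rather than the paper's actual formulation: a linear probe checks how well a confounder (e.g., sex) can be read out of the final latent representation, and how strongly that read-out tracks the model's predictions. All names and the scoring recipe are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def confounder_probe_score(latents, confounder, predictions):
    """Rough proxy (not the paper's Con-score): how well a linear probe on the
    final latent representation recovers the confounder, and how strongly the
    probe's output correlates with the model's predictions."""
    probe = LogisticRegression(max_iter=1000).fit(latents, confounder)
    probe_acc = probe.score(latents, confounder)          # is the confounder encoded?
    probe_out = probe.predict_proba(latents)[:, 1]
    corr = np.corrcoef(probe_out, predictions)[0, 1]      # does it track predictions?
    return probe_acc, corr


# Toy example: 500 subjects, 32-dim latent space, binary confounder (e.g. sex)
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 32))
sex = (z[:, 0] > 0).astype(int)        # confounder leaked into latent dimension 0
preds = 1 / (1 + np.exp(-z[:, 0]))     # model predictions driven by the same dimension
print(confounder_probe_score(z, sex, preds))
```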
null | https://papers.miccai.org/miccai-2024/paper/1527_paper.pdf | @InProceedings{ Mur_Class_MICCAI2024,
author = { Murugesan, Balamurali and Silva-Rodriguez, Julio and Ben Ayed, Ismail and Dolz, Jose },
title = { { Class and Region-Adaptive Constraints for Network Calibration } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | In this work, we present a novel approach to calibrate segmentation networks that considers the inherent challenges posed by different categories and object regions. In particular, we present a formulation that integrates class and region-wise constraints into the learning objective, with multiple penalty weights to account for class and region differences. Finding the optimal penalty weights manually, however, might be unfeasible, and potentially hinder the optimization process. To overcome this limitation, we propose an approach based on Class and Region-Adaptive constraints (CRaC), which allows to learn the class and region-wise penalty weights during training. CRaC is based on a general Augmented Lagrangian method, a well-established technique in constrained optimization. Experimental results on two popular segmentation benchmarks, and two well-known segmentation networks, demonstrate the superiority of CRaC compared to existing approaches. The code is available at: https://github.com/Bala93/CRac/ | Class and Region-Adaptive Constraints for Network Calibration | [
"Murugesan, Balamurali",
"Silva-Rodriguez, Julio",
"Ben Ayed, Ismail",
"Dolz, Jose"
] | Conference | 2403.12364 | [
"https://github.com/Bala93/CRac/"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 531 |
|
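The class-adaptive constraint idea in the CRaC record above builds on a general Augmented Lagrangian method. Below is a minimal, generic sketch of one such update with per-class penalty multipliers learned via dual ascent; the constraint definition, penalty form, and hyperparameters are placeholders, not the paper's exact formulation.

```python
import torch

def augmented_lagrangian_step(violations, lam, rho=1.0, lr_lam=0.01):
    """One generic augmented-Lagrangian update with per-class multipliers.

    violations : (K,) per-class constraint violations g_k (<= 0 when satisfied)
    lam        : (K,) current non-negative penalty multipliers
    Returns the penalty term to add to the task loss and the updated multipliers.
    """
    penalty = (lam * violations + 0.5 * rho * violations.clamp(min=0) ** 2).sum()
    with torch.no_grad():
        new_lam = (lam + lr_lam * violations).clamp(min=0.0)  # dual ascent on lambda
    return penalty, new_lam


# Toy usage: 4 classes, some constraints violated (positive values)
lam = torch.zeros(4)
g = torch.tensor([0.3, -0.1, 0.05, -0.2])
penalty, lam = augmented_lagrangian_step(g, lam)
print(penalty.item(), lam)
```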
null | https://papers.miccai.org/miccai-2024/paper/2376_paper.pdf | @InProceedings{ Zha_Fuzzy_MICCAI2024,
author = { Zhang, Sheng and Nan, Yang and Fang, Yingying and Wang, Shiyi and Xing, Xiaodan and Gao, Zhifan and Yang, Guang },
title = { { Fuzzy Attention-based Border Rendering Network for Lung Organ Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Automatic lung organ segmentation on CT images is crucial for lung disease diagnosis. However, the unlimited voxel values and class imbalance of lung organs can lead to false-negative/positive and leakage issues in advanced methods. Additionally, some slender lung organs are easily lost during the recycled down/up-sample procedure, e.g., bronchioles & arterioles, causing a severe discontinuity issue. Inspired by these, this paper introduces an effective lung organ segmentation method called the Fuzzy Attention-based Border Rendering (FABR) network. Since fuzzy logic can handle the uncertainty in feature extraction, the fusion of deep networks and fuzzy sets should be a viable solution for better performance. Meanwhile, unlike prior top-tier methods that operate on all regular dense points, our FABR depicts lung organ regions as cube-trees, focusing only on recycle-sampled border vulnerable points, rendering the severely discontinuous, false-negative/positive organ regions with a novel Global-Local Cube-tree Fusion (GLCF) module. All experimental results, on four challenging datasets of airway & artery, demonstrate that our method achieves favorable performance by a significant margin. | Fuzzy Attention-based Border Rendering Network for Lung Organ Segmentation | [
"Zhang, Sheng",
"Nan, Yang",
"Fang, Yingying",
"Wang, Shiyi",
"Xing, Xiaodan",
"Gao, Zhifan",
"Yang, Guang"
] | Conference | 2406.16189 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 532 |
|
null | https://papers.miccai.org/miccai-2024/paper/2558_paper.pdf | @InProceedings{ Hou_QualityAware_MICCAI2024,
author = { Hou, Tao and Huang, Jiashuang and Jiang, Shu and Ding, Weiping },
title = { { Quality-Aware Fuzzy Min-Max Neural Networks for Dynamic Brain Network Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Dynamic functional connections (dFCs) have been widely used for the diagnosis of brain diseases. However, current dynamic brain network analysis methods ignore the fuzzy information of the brain network and the uncertainty arising from the inconsistent data quality of different windows, providing unreliable integration for multiple windows. In this paper, we propose a dynamic brain network analysis method based on quality-aware fuzzy min-max neural networks (QFMMNet). The individual window of dFCs is treated as a view, and we define three convolution filters to extract features from the brain network under the multi-view learning framework, thereby obtaining multi-view evidence for dFCs. We design multi-view fuzzy min-max neural networks (MFMM) based on fuzzy sets to deal with the fuzzy information of the brain network, which takes evidence as input patterns to generate hyperboxes and serves as the classification layer of each view. A quality-aware ensemble module is introduced to deal with uncertainty, which employs D-S theory to directly model the uncertainty and evaluate the dynamic quality-aware weighting of each view. Experiments on two real schizophrenia datasets demonstrate the effectiveness and advantages of our proposed method. Our codes are available at https://github.com/scurrytao/QFMMNet. | Quality-Aware Fuzzy Min-Max Neural Networks for Dynamic Brain Network Analysis | [
"Hou, Tao",
"Huang, Jiashuang",
"Jiang, Shu",
"Ding, Weiping"
] | Conference | [
"https://github.com/scurrytao/QFMMNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 533 |
||
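The fuzzy min-max neural network in the QFMMNet record above classifies patterns via hyperbox membership. As an orientation aid, the snippet below shows the classic Simpson-style membership function for a single hyperbox; the paper's multi-view variant (MFMM) may use a different membership or aggregation rule.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Classic fuzzy min-max membership of point x in a hyperbox [v, w]
    (Simpson-style; QFMMNet's variant may differ).

    x : (n,) input pattern in [0, 1]
    v : (n,) hyperbox min point
    w : (n,) hyperbox max point
    """
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    return (above + below).sum() / (2 * len(x))


v = np.array([0.2, 0.3])
w = np.array([0.5, 0.6])
print(hyperbox_membership(np.array([0.4, 0.5]), v, w))  # inside the box -> 1.0
print(hyperbox_membership(np.array([0.9, 0.9]), v, w))  # outside -> < 1.0
```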
null | https://papers.miccai.org/miccai-2024/paper/2184_paper.pdf | @InProceedings{ Wan_TriPlane_MICCAI2024,
author = { Wang, Hualiang and Lin, Yiqun and Ding, Xinpeng and Li, Xiaomeng },
title = { { Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | General networks for 3D medical image segmentation have recently undergone extensive exploration. Behind the exceptional performance of these networks lies a significant demand for a large volume of pixel-level annotated data, which is time-consuming and labor-intensive. The emergence of the Segment Anything Model (SAM) has enabled this model to achieve superior performance in 2D medical image segmentation tasks via parameter- and data-efficient feature adaptation. However, the introduction of additional depth channels in 3D medical images not only prevents the sharing of 2D pre-trained features but also results in a quadratic increase in the computational cost for adapting SAM.
To overcome these challenges, we present the Tri-Plane Mamba (TP-Mamba) adapters tailored for SAM, featuring two major innovations: 1) multi-scale 3D convolutional adapters, optimized for efficiently processing local depth-level information, and 2) a tri-plane mamba module, engineered to capture long-range depth-level representation without significantly increasing computational costs.
This approach achieves state-of-the-art performance in 3D CT organ segmentation tasks. Remarkably, this superior performance is maintained even with scarce training data. Specifically, using only three CT training samples from the BTCV dataset, it surpasses conventional 3D segmentation networks, attaining a Dice score that is up to 12% higher. | Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images | [
"Wang, Hualiang",
"Lin, Yiqun",
"Ding, Xinpeng",
"Li, Xiaomeng"
] | Conference | 2409.08492 | [
"https://github.com/xmed-lab/TP-Mamba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 534 |
|
null | https://papers.miccai.org/miccai-2024/paper/2758_paper.pdf | @InProceedings{ Hwa_Multiorder_MICCAI2024,
author = { Hwang, Yechan and Hwang, Soojin and Wu, Guorong and Kim, Won Hwa },
title = { { Multi-order Simplex-based Graph Neural Network for Brain Network Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | A brain network is defined by wiring anatomical regions in the brain with structural and functional relationships. It has an intricate topology with a handful of early features/biomarkers of neurodegenerative diseases, which emphasizes the importance of analyzing connectomic features alongside region-wise assessments. Various graph neural network (GNN) approaches have been developed for brain network analysis, however, they have mainly focused on node-centric analyses, often treating edge features as auxiliary information (i.e., adjacency matrix) to enhance node representations. In response, we propose a method that explicitly learns node and edge embeddings for brain network analysis. Introducing a dual aggregation framework, our model incorporates a novel spatial graph convolution layer with an incidence matrix. Enabling concurrent node-wise and edge-wise information aggregation for both nodes and edges, this framework captures the intricate node-edge relationships within the brain. Demonstrating superior performance on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, our model effectively handles the complex topology of brain networks. Furthermore, our model yields interpretable results with Grad-CAM, selectively identifying brain Regions of Interest (ROIs) and connectivities associated with AD, aligning with prior AD literature. | Multi-order Simplex-based Graph Neural Network for Brain Network Analysis | [
"Hwang, Yechan",
"Hwang, Soojin",
"Wu, Guorong",
"Kim, Won Hwa"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 535 |
||
null | https://papers.miccai.org/miccai-2024/paper/2173_paper.pdf | @InProceedings{ Kim_LLMguided_MICCAI2024,
author = { Kim, Kyungwon and Lee, Yongmoon and Park, Doohyun and Eo, Taejoon and Youn, Daemyung and Lee, Hyesang and Hwang, Dosik },
title = { { LLM-guided Multi-modal Multiple Instance Learning for 5-year Overall Survival Prediction of Lung Cancer } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Accurately predicting the 5-year prognosis of lung cancer patients is crucial for guiding treatment planning and providing optimal patient care. Traditional methods relying on CT image-based cancer stage assessment and morphological analysis of cancer cells in pathology images have encountered challenges in terms of reliability and accuracy due to the complexity and diversity of information within these images.
Recent rapid advancements in deep learning have shown promising performance in prognosis prediction; however, utilizing CT and pathology images independently is limited by their differing imaging characteristics and the unique prognostic information each modality carries.
To effectively address these challenges, this study proposes a novel framework that integrates prognostic capabilities of both CT and pathology images with clinical information, employing a multi-modal integration approach via multiple instance learning, leveraging large language models (LLMs) to analyze clinical notes and align them with image modalities.
The proposed approach was rigorously validated using external datasets from different hospitals, demonstrating superior performance over models reliant on vision or clinical data alone. This highlights the adaptability and strength of LLMs in managing complex multi-modal medical datasets for lung cancer prognosis, marking a significant advance towards more accurate and comprehensive patient care strategies. | LLM-guided Multi-modal Multiple Instance Learning for 5-year Overall Survival Prediction of Lung Cancer | [
"Kim, Kyungwon",
"Lee, Yongmoon",
"Park, Doohyun",
"Eo, Taejoon",
"Youn, Daemyung",
"Lee, Hyesang",
"Hwang, Dosik"
] | Conference | [
"https://github.com/KyleKWKim/LLM-guided-Multimodal-MIL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 536 |
||
null | https://papers.miccai.org/miccai-2024/paper/0560_paper.pdf | @InProceedings{ Kim_Parameter_MICCAI2024,
author = { Kim, Yumin and Choi, Gayoon and Hwang, Seong Jae },
title = { { Parameter Efficient Fine Tuning for Multi-scanner PET to PET Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Reducing scan time in Positron Emission Tomography (PET) imaging while maintaining high-quality images is crucial for minimizing patient discomfort and radiation exposure. Due to the limited size of datasets and distribution discrepancy across scanners in medical imaging, fine-tuning in a parameter-efficient and effective manner is on the rise. Motivated by the potential of Parameter Efficient Fine-Tuning (PEFT), we aim to address these issues by effectively leveraging PEFT to improve limited data and GPU resource issues in multi-scanner setups. In this paper, we introduce PETITE, Parameter Efficient Fine-Tuning for MultI-scanner PET to PET REconstruction, which represents the optimal PEFT combination when independently applying encoder-decoder components to each model architecture. To the best of our knowledge, this study is the first to systematically explore the efficacy of diverse PEFT techniques in medical imaging reconstruction tasks via prevalent encoder-decoder models. This investigation, in particular, brings intriguing insights into PETITE as we show further improvements by treating the encoder and decoder separately and mixing different PEFT methods, namely, Mix-PEFT. Using multi-scanner PET datasets comprised of five different scanners, we extensively test the cross-scanner PET scan time reduction performances (i.e., a model pre-trained on one scanner is fine-tuned on a different scanner) of 21 feasible Mix-PEFT combinations to derive optimal PETITE. We show that training with less than 1% parameters using PETITE performs on par with full fine-tuning (i.e., 100% parameter). Code is available at: https://github.com/MICV-yonsei/PETITE | Parameter Efficient Fine Tuning for Multi-scanner PET to PET Reconstruction | [
"Kim, Yumin",
"Choi, Gayoon",
"Hwang, Seong Jae"
] | Conference | 2407.07517 | [
"https://github.com/MICV-yonsei/PETITE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 537 |
|
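The PETITE record above hinges on parameter-efficient fine-tuning (PEFT) with different methods mixed between encoder and decoder. As a minimal illustration of the general mechanism (not the paper's Mix-PEFT recipe), the sketch below hand-rolls a LoRA-style low-rank update on top of frozen linear layers and reports the trainable-parameter fraction; the toy "encoder" and rank settings are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (hand-rolled sketch)."""
    def __init__(self, base: nn.Linear, rank=4, alpha=8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Toy "encoder": freeze everything, then wrap its linear layers with LoRA.
encoder = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
for p in encoder.parameters():
    p.requires_grad_(False)
encoder[0] = LoRALinear(encoder[0])
encoder[2] = LoRALinear(encoder[2])

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable fraction: {trainable / total:.2%}")
```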
null | https://papers.miccai.org/miccai-2024/paper/3161_paper.pdf | @InProceedings{ Gen_Force_MICCAI2024,
author = { Geng, Yimeng and Meng, Gaofeng and Chen, Mingcong and Cao, Guanglin and Zhao, Mingyang and Zhao, Jianbo and Liu, Hongbin },
title = { { Force Sensing Guided Artery-Vein Segmentation via Sequential Ultrasound Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Accurate identification of arteries and veins in ultrasound images is crucial for vascular examinations and interventions in robotics-assisted surgeries. However, current methods for ultrasound vessel segmentation face challenges in distinguishing between arteries and veins due to their morphological similarities. To address this challenge, this study introduces a novel force sensing guided segmentation approach to enhance artery-vein segmentation accuracy by leveraging their distinct deformability. Our proposed method utilizes force magnitude to identify key frames with the most significant vascular deformation in a sequence of ultrasound images. These key frames are then integrated with the current frame through attention mechanisms, with weights assigned in accordance with force magnitude. Our proposed force sensing guided framework can be seamlessly integrated into various segmentation networks and achieves significant performance improvements in multiple U-shaped networks such as U-Net, Swin-unet and Transunet. Furthermore, we contribute the first multimodal ultrasound artery-vein segmentation dataset, Mus-V, which encompasses both force and image data simultaneously. The dataset comprises 3114 ultrasound images of carotid and femoral vessels extracted from 105 videos, with corresponding force data recorded by the force sensor mounted on the US probe. The code and dataset can be available at https://www.kaggle.com/datasets/among22/multimodal-ultrasound-vascular-segmentation | Force Sensing Guided Artery-Vein Segmentation via Sequential Ultrasound Images | [
"Geng, Yimeng",
"Meng, Gaofeng",
"Chen, Mingcong",
"Cao, Guanglin",
"Zhao, Mingyang",
"Zhao, Jianbo",
"Liu, Hongbin"
] | Conference | [
"https://github.com/evelynskip/artery-vein-segmentation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 538 |
||
null | https://papers.miccai.org/miccai-2024/paper/0661_paper.pdf | @InProceedings{ Tan_OSALND_MICCAI2024,
author = { Tang, Jiao and Yue, Yagao and Wan, Peng and Wang, Mingliang and Zhang, Daoqiang and Shao, Wei },
title = { { OSAL-ND: Open-set Active Learning for Nucleus Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | The recent advance of deep learning has shown promising power for nucleus detection that plays an important role in histopathological examination. However, such accurate and reliable deep learning models need enough labeled data for training, which makes active learning (AL) an attractive learning paradigm for reducing the annotation efforts by pathologists. In open-set environments, AL encounters the challenge that the unlabeled data usually contains non-target samples from the unknown classes, resulting in the failure of most AL methods. Although AL has been explored in many open-set classification tasks, research on AL for nucleus detection in the open-set environment remains unexplored. To address the above issues, we propose a two-stage AL framework designed for nucleus detection in an open-set environment (i.e., OSAL-ND). In the first stage, we propose a prototype-based query strategy based on the auxiliary detector to select a candidate set from known classes as pure as possible. In the second stage, we further query the most uncertain samples from the candidate set for the nucleus detection task relying on the target detector. We evaluate the performance of our method on the NuCLS dataset, and the experimental results indicate that our method can not only improve the selection quality on the known classes, but also achieve higher detection accuracy with lower annotation burden in comparison with the existing studies. | OSAL-ND: Open-set Active Learning for Nucleus Detection | [
"Tang, Jiao",
"Yue, Yagao",
"Wan, Peng",
"Wang, Mingliang",
"Zhang, Daoqiang",
"Shao, Wei"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 539 |
||
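The two-stage query strategy in the OSAL-ND record above can be illustrated with a simple sketch: unlabeled samples closest to known-class prototypes form a candidate set (filtering out likely unknown-class nuclei), and the most uncertain candidates by predictive entropy are queried. Feature shapes, the distance metric, and the uncertainty measure here are assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def select_queries(unlabeled_feats, unlabeled_logits, prototypes,
                   n_candidates=200, n_queries=50):
    """Two-stage selection sketch: prototype-based filtering, then entropy-based query."""
    # Stage 1: distance to the nearest known-class prototype.
    dists = torch.cdist(unlabeled_feats, prototypes)        # (N, K)
    nearest = dists.min(dim=1).values                        # (N,)
    candidates = torch.topk(-nearest, n_candidates).indices  # smallest distances

    # Stage 2: predictive entropy on the candidate set.
    probs = F.softmax(unlabeled_logits[candidates], dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    picked = candidates[torch.topk(entropy, n_queries).indices]
    return picked


feats = torch.randn(1000, 32)
logits = torch.randn(1000, 5)
protos = torch.randn(5, 32)
print(select_queries(feats, logits, protos).shape)  # torch.Size([50])
```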
null | https://papers.miccai.org/miccai-2024/paper/2329_paper.pdf | @InProceedings{ Grz_TabMixer_MICCAI2024,
author = { Grzeszczyk, Michal K. and Korzeniowski, Przemysław and Alabed, Samer and Swift, Andrew J. and Trzciński, Tomasz and Sitek, Arkadiusz },
title = { { TabMixer: Noninvasive Estimation of the Mean Pulmonary Artery Pressure via Imaging and Tabular Data Mixing } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Right Heart Catheterization is a gold standard procedure for diagnosing Pulmonary Hypertension by measuring mean Pulmonary Artery Pressure (mPAP). It is invasive, costly, time-consuming and carries risks. In this paper, for the first time, we explore the estimation of mPAP from videos of noninvasive Cardiac Magnetic Resonance Imaging. To enhance the predictive capabilities of Deep Learning models used for this task, we introduce an additional modality in the form of demographic features and clinical measurements. Inspired by all-Multilayer Perceptron architectures, we present TabMixer, a novel module enabling the integration of imaging and tabular data through spatial, temporal and channel mixing. Specifically, we present the first approach that utilizes Multilayer Perceptrons to interchange tabular information with imaging features in vision models. We test TabMixer for mPAP estimation and show that it enhances the performance of Convolutional Neural Networks, 3D-MLP and Vision Transformers while being competitive with previous modules for imaging and tabular data. Our approach has the potential to improve clinical processes involving both modalities, particularly in noninvasive mPAP estimation, thus, significantly enhancing the quality of life for individuals affected by Pulmonary Hypertension. We provide a source code for using TabMixer at https://github.com/SanoScience/TabMixer. | TabMixer: Noninvasive Estimation of the Mean Pulmonary Artery Pressure via Imaging and Tabular Data Mixing | [
"Grzeszczyk, Michal K.",
"Korzeniowski, Przemysław",
"Alabed, Samer",
"Swift, Andrew J.",
"Trzciński, Tomasz",
"Sitek, Arkadiusz"
] | Conference | 2409.07564 | [
"https://github.com/SanoScience/TabMixer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 540 |
|
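The TabMixer record above mixes imaging and tabular features with MLPs along spatial, temporal, and channel dimensions. The module below is a heavily simplified, hypothetical sketch of that idea (token- and channel-mixing only, with the tabular vector injected as an extra token); it is not the published TabMixer architecture.

```python
import torch
import torch.nn as nn

class SimpleTabMix(nn.Module):
    """Greatly simplified MLP-mixing block: tabular features are projected,
    concatenated to the imaging tokens, and mixed along token and channel
    dimensions (the real TabMixer also mixes along time and space separately)."""
    def __init__(self, img_dim=64, tab_dim=8, tokens=49):
        super().__init__()
        self.tab_proj = nn.Linear(tab_dim, img_dim)
        self.token_mlp = nn.Sequential(nn.Linear(tokens + 1, tokens + 1), nn.GELU(),
                                       nn.Linear(tokens + 1, tokens + 1))
        self.channel_mlp = nn.Sequential(nn.Linear(img_dim, img_dim), nn.GELU(),
                                         nn.Linear(img_dim, img_dim))

    def forward(self, img_tokens, tabular):
        # img_tokens: (B, tokens, img_dim), tabular: (B, tab_dim)
        tab_token = self.tab_proj(tabular).unsqueeze(1)             # (B, 1, img_dim)
        x = torch.cat([img_tokens, tab_token], dim=1)               # (B, tokens+1, C)
        x = x + self.token_mlp(x.transpose(1, 2)).transpose(1, 2)   # mix across tokens
        x = x + self.channel_mlp(x)                                 # mix across channels
        return x[:, :-1]                                            # drop the tabular token


out = SimpleTabMix()(torch.randn(2, 49, 64), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 49, 64])
```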
null | https://papers.miccai.org/miccai-2024/paper/2644_paper.pdf | @InProceedings{ Wil_ProstNFound_MICCAI2024,
author = { Wilson, Paul F. R. and To, Minh Nguyen Nhat and Jamzad, Amoon and Gilany, Mahdi and Harmanani, Mohamed and Elghareb, Tarek and Fooladgar, Fahimeh and Wodlinger, Brian and Abolmaesumi, Purang and Mousavi, Parvin },
title = { { ProstNFound: Integrating Foundation Models with Ultrasound Domain Knowledge and Clinical Context for Robust Prostate Cancer Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Analysis of high-resolution micro-ultrasound data using deep learning presents a promising avenue for the accurate detection of prostate cancer (PCa). While previous efforts have focused on designing specialized architectures and training them from scratch, they are challenged by limited data availability. Medical foundation models, pre-trained on large and diverse datasets, offer a robust knowledge base that can be adapted to downstream tasks, reducing the need for large task-specific datasets. However, their lack of specialized domain knowledge hinders their success: our initial research indicates that even with extensive fine-tuning, existing foundation models fall short of surpassing specialist models’ performance for PCa detection. To address this gap, we propose ProstNFound, a method that empowers foundation models with domain-specific knowledge pertinent to ultrasound imaging and PCa. In this approach, while ultrasound images are fed to a foundation model, specialized auxiliary networks embed high-resolution textural features and clinical markers, which are then presented to the network as prompts. Using a multi-center micro-ultrasound dataset with 693 patients, we demonstrate significant improvements over the state-of-the-art in PCa detection. ProstNFound achieves 90% sensitivity at 40% specificity, performance that is competitive with that of expert radiologists reading multi-parametric MRI or micro-ultrasound images, suggesting significant promise for clinical application. Our code will be made available at github.com. | ProstNFound: Integrating Foundation Models with Ultrasound Domain Knowledge and Clinical Context for Robust Prostate Cancer Detection | [
"Wilson, Paul F. R.",
"To, Minh Nguyen Nhat",
"Jamzad, Amoon",
"Gilany, Mahdi",
"Harmanani, Mohamed",
"Elghareb, Tarek",
"Fooladgar, Fahimeh",
"Wodlinger, Brian",
"Abolmaesumi, Purang",
"Mousavi, Parvin"
] | Conference | [
"https://github.com/pfrwilson/prostNfound"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 541 |
||
null | https://papers.miccai.org/miccai-2024/paper/1932_paper.pdf | @InProceedings{ Wan_Automated_MICCAI2024,
author = { Wang, Jinge and Chen, Guilin and Wang, Xuefeng and Wu, Nan and Zhang, Terry Jianguo },
title = { { Automated Robust Muscle Segmentation in Multi-level Contexts using a Probabilistic Inference Framework } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The paraspinal muscles are crucial for spinal stability, which can be quantitatively analyzed through image segmentation. However, unclear muscle boundaries, severe deformations, and limited training data impose great challenges for existing automatic segmentation methods. This study proposes an automated probabilistic inference framework to reconstruct 3D muscle shapes from thick-slice MRI robustly. Leveraging Fourier basis functions and Gaussian processes, we construct anatomically interpretable shape models. Multi-level contextual observations such as global poses of muscle centroids and local edges are then integrated into posterior estimation to enhance shape model initialization and optimization. The proposed framework is characterized by its intuitive representations and smooth generation capabilities, demonstrating higher accuracy in validation on both public and clinical datasets compared to state-of-the-art methods. The outcomes can aid clinicians and researchers in understanding muscle changes in various conditions, potentially enhancing diagnoses and treatments. | Automated Robust Muscle Segmentation in Multi-level Contexts using a Probabilistic Inference Framework | [
"Wang, Jinge",
"Chen, Guilin",
"Wang, Xuefeng",
"Wu, Nan",
"Zhang, Terry Jianguo"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 542 |
||
null | https://papers.miccai.org/miccai-2024/paper/1948_paper.pdf | @InProceedings{ Bou_PhenDiff_MICCAI2024,
author = { Bourou, Anis and Boyer, Thomas and Gheisari, Marzieh and Daupin, Kévin and Dubreuil, Véronique and De Thonel, Aurélie and Mezger, Valérie and Genovesio, Auguste },
title = { { PhenDiff: Revealing Subtle Phenotypes with Diffusion Models in Real Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | For the past few years, deep generative models have increasingly been used in biological research for a variety of tasks. Recently, they have proven to be valuable for uncovering subtle cell phenotypic differences that are not directly discernible to the human eye. However, current methods employed to achieve this goal mainly rely on Generative Adversarial Networks (GANs). While effective, GANs encompass issues such as training instability and mode collapse, and they do not accurately map images back to the model’s latent space, which is necessary to synthesize, manipulate, and thus interpret outputs based on real images. In this work, we introduce PhenDiff: a multi-class conditional method leveraging Diffusion Models (DMs) designed to identify shifts in cellular phenotypes by translating a real image from one condition to another. We qualitatively and quantitatively validate this method on cases where the phenotypic changes are visible or invisible, such as in low concentrations of drug treatments. Overall, PhenDiff represents a valuable tool for identifying cellular variations in real microscopy images. We anticipate that it could facilitate the understanding of diseases and advance drug discovery through the identification of novel biomarkers. | PhenDiff: Revealing Subtle Phenotypes with Diffusion Models in Real Images | [
"Bourou, Anis",
"Boyer, Thomas",
"Gheisari, Marzieh",
"Daupin, Kévin",
"Dubreuil, Véronique",
"De Thonel, Aurélie",
"Mezger, Valérie",
"Genovesio, Auguste"
] | Conference | 2312.08290 | [
"https://github.com/WarmongeringBeaver/PhenDiff"
] | https://huggingface.co/papers/2312.08290 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | Poster | 543 |
null | https://papers.miccai.org/miccai-2024/paper/0223_paper.pdf | @InProceedings{ Hua_Endo4DGS_MICCAI2024,
author = { Huang, Yiming and Cui, Beilei and Bai, Long and Guo, Ziqi and Xu, Mengya and Islam, Mobarakol and Ren, Hongliang },
title = { { Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | In the realm of robot-assisted minimally invasive surgery, dynamic scene reconstruction can significantly enhance downstream tasks and improve surgical outcomes. Neural Radiance Fields (NeRF)-based methods have recently risen to prominence for their exceptional ability to reconstruct scenes but are hampered by slow inference speed, prolonged training, and inconsistent depth estimation. Some previous work utilizes ground-truth depth for optimization, but such depth is hard to acquire in the surgical domain. To overcome these obstacles, we present Endo-4DGS, a real-time endoscopic dynamic reconstruction approach that utilizes 3D Gaussian Splatting (GS) for 3D representation. Specifically, we propose lightweight MLPs to capture temporal dynamics with Gaussian deformation fields. To obtain a satisfactory Gaussian initialization, we exploit a powerful depth estimation foundation model, Depth-Anything, to generate pseudo-depth maps as a geometry prior. We additionally propose confidence-guided learning to tackle the ill-posed problems in monocular depth estimation and enhance the depth-guided reconstruction with surface normal constraints and depth regularization. Our approach has been validated on two surgical datasets, where it can effectively render in real-time, compute efficiently, and reconstruct with remarkable accuracy. Our code is available at https://github.com/lastbasket/Endo-4DGS. | Endo-4DGS: Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting | [
"Huang, Yiming",
"Cui, Beilei",
"Bai, Long",
"Guo, Ziqi",
"Xu, Mengya",
"Islam, Mobarakol",
"Ren, Hongliang"
] | Conference | 2401.16416 | [
"https://github.com/lastbasket/Endo-4DGS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 544 |
|
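The confidence-guided use of pseudo-depth in the Endo-4DGS record above can be sketched as a weighted depth loss: pixels where the monocular pseudo-depth is unreliable contribute less, with a regularizer that keeps the confidence map from collapsing to zero. The weighting and regularizer below are illustrative assumptions, not the paper's exact losses.

```python
import torch

def confidence_guided_depth_loss(pred_depth, pseudo_depth, confidence, lam_reg=0.1):
    """Confidence-weighted depth supervision sketch.

    pred_depth, pseudo_depth, confidence : (B, H, W) tensors, confidence in (0, 1)
    """
    # Data term: L1 depth error, down-weighted where the pseudo-depth is unreliable.
    data_term = (confidence * (pred_depth - pseudo_depth).abs()).mean()
    # Regularizer: discourage declaring every pixel unreliable.
    reg_term = -torch.log(confidence.clamp_min(1e-6)).mean()
    return data_term + lam_reg * reg_term


d_hat = torch.rand(1, 64, 64)
d_pseudo = torch.rand(1, 64, 64)
conf = torch.sigmoid(torch.randn(1, 64, 64))
print(confidence_guided_depth_loss(d_hat, d_pseudo, conf).item())
```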
null | https://papers.miccai.org/miccai-2024/paper/2284_paper.pdf | @InProceedings{ Jud_Domain_MICCAI2024,
author = { Judge, Arnaud and Judge, Thierry and Duchateau, Nicolas and Sandler, Roman A. and Sokol, Joseph Z. and Bernard, Olivier and Jodoin, Pierre-Marc },
title = { { Domain Adaptation of Echocardiography Segmentation Via Reinforcement Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Performance of deep learning segmentation models is significantly challenged in its transferability across different medical imaging domains, particularly when aiming to adapt these models to a target domain with insufficient annotated data for effective fine-tuning. While existing domain adaptation (DA) methods propose strategies to alleviate this problem, these methods do not explicitly incorporate human-verified segmentation priors, compromising the potential of a model to produce anatomically plausible segmentations. We introduce RL4Seg, an innovative reinforcement learning framework that reduces the need to otherwise incorporate large expertly annotated datasets in the target domain, and eliminates the need for lengthy manual human review. Using a target dataset of 10,000 unannotated 2D echocardiographic images, RL4Seg not only outperforms existing state-of-the-art DA methods in accuracy but also achieves 99% anatomical validity on a subset of 220 expert-validated subjects from the target domain. Furthermore, our framework’s reward network offers uncertainty estimates comparable with dedicated state-of-the-art uncertainty methods, demonstrating the utility and effectiveness of RL4Seg in overcoming DA challenges in medical image segmentation. | Domain Adaptation of Echocardiography Segmentation Via Reinforcement Learning | [
"Judge, Arnaud",
"Judge, Thierry",
"Duchateau, Nicolas",
"Sandler, Roman A.",
"Sokol, Joseph Z.",
"Bernard, Olivier",
"Jodoin, Pierre-Marc"
] | Conference | 2406.17902 | [
"https://github.com/arnaudjudge/RL4Seg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 545 |
|
null | https://papers.miccai.org/miccai-2024/paper/1156_paper.pdf | @InProceedings{ Fen_Mining_MICCAI2024,
author = { Feng, Siyang and Chen, Jiale and Liu, Zhenbing and Liu, Wentao and Wang, Zimin and Lan, Rushi and Pan, Xipeng },
title = { { Mining Gold from the Sand: Weakly Supervised Histological Tissue Segmentation with Activation Relocalization and Mutual Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Class activation map- (CAM-) based image-level weakly supervised tissue segmentation has become a popular research topic due to its low annotation cost. However, two challenges still exist in this task: (1) low-quality pseudo mask generation, and (2) training with noisy label supervision. To address these issues, we propose a novel weakly supervised segmentation framework with Activation Relocalization and Mutual Learning (ARML). First, we integrate an Activation Relocalization Scheme (ARS) into the classification phase to more accurately cover the useful areas in initial CAMs. Second, to deal with the inevitably noisy annotations in pseudo masks generated by ARS, we propose a noise-robust mutual learning segmentation model. The model promotes peer networks to capture different characteristics of the outputs, and two noise suppression strategies, namely samples weighted voting (SWV) and samples relation mining (SRM), are introduced to excavate the potential credible information from noisy annotations. Extensive experiments on BCSS and LUAD-HistoSeg datasets demonstrate that our proposed ARML exceeds many state-of-the-art weakly supervised semantic segmentation methods, offering new insight into tissue segmentation tasks. The code is available at: https://github.com/director87/ARML. | Mining Gold from the Sand: Weakly Supervised Histological Tissue Segmentation with Activation Relocalization and Mutual Learning | [
"Feng, Siyang",
"Chen, Jiale",
"Liu, Zhenbing",
"Liu, Wentao",
"Wang, Zimin",
"Lan, Rushi",
"Pan, Xipeng"
] | Conference | [
"https://github.com/director87/ARML"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 546 |
||
null | https://papers.miccai.org/miccai-2024/paper/3434_paper.pdf | @InProceedings{ Osu_Towards_MICCAI2024,
author = { Osuala, Richard and Lang, Daniel M. and Verma, Preeti and Joshi, Smriti and Tsirikoglou, Apostolia and Skorupko, Grzegorz and Kushibar, Kaisar and Garrucho, Lidia and Pinaya, Walter H. L. and Diaz, Oliver and Schnabel, Julia A. and Lekadir, Karim },
title = { { Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Contrast agents in dynamic contrast enhanced magnetic resonance imaging allow localization of tumors and observation of their contrast kinetics, which is essential for cancer characterization and respective treatment decision-making. However, contrast agent administration is not only associated with adverse health risks, but is also restricted for patients during pregnancy, and for those with kidney malfunction, or other adverse reactions. With contrast uptake as a key biomarker for lesion malignancy, cancer recurrence risk, and treatment response, it becomes pivotal to reduce the dependency on intravenous contrast agent administration. To this end, we propose a multi-conditional latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences. To evaluate medical image synthesis, we additionally propose and validate the Fréchet radiomics distance as an image quality measure based on biomarker variability between synthetic and real imaging data. Our results demonstrate our method’s ability to generate realistic multi-sequence fat-saturated breast DCE-MRI and uncover the emerging potential of deep learning based contrast kinetics simulation. We publicly share our accessible codebase at https://github.com/RichardObi/ccnet and provide a user-friendly library for Fréchet radiomics distance calculation at https://pypi.org/project/frd-score. | Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models | [
"Osuala, Richard",
"Lang, Daniel M.",
"Verma, Preeti",
"Joshi, Smriti",
"Tsirikoglou, Apostolia",
"Skorupko, Grzegorz",
"Kushibar, Kaisar",
"Garrucho, Lidia",
"Pinaya, Walter H. L.",
"Diaz, Oliver",
"Schnabel, Julia A.",
"Lekadir, Karim"
] | Conference | 2403.13890 | [
"https://github.com/RichardObi/frd-score"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 547 |
|
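The Fréchet radiomics distance proposed in the record above follows the standard Fréchet (FID-style) distance between Gaussians fitted to feature sets, with radiomics vectors in place of Inception activations. The snippet below computes that standard formula; extracting the radiomics features themselves is assumed to have happened upstream, and the toy arrays are placeholders.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_synth):
    """Fréchet distance between Gaussians fitted to two feature sets
    (the standard FID-style formula, here applied to per-image feature vectors).

    feats_real, feats_synth : (N, D) arrays
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_synth, rowvar=False)

    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):       # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))


rng = np.random.default_rng(0)
real = rng.normal(size=(200, 16))
fake = rng.normal(loc=0.3, size=(200, 16))
print(frechet_distance(real, fake))
```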
null | https://papers.miccai.org/miccai-2024/paper/2613_paper.pdf | @InProceedings{ Li_Spatial_MICCAI2024,
author = { Li, Chen and Hu, Xiaoling and Abousamra, Shahira and Xu, Meilong and Chen, Chao },
title = { { Spatial Diffusion for Cell Layout Generation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performance in different tasks. We focus on generative models for cell detection instead, i.e., locating and classifying cells in given pathology images. One important piece of information that has been largely overlooked is the spatial pattern of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, we propose a novel diffusion model that is guided by spatial features and generates realistic cell layouts. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at https://github.com/superlc1995/Diffusion-cell. | Spatial Diffusion for Cell Layout Generation | [
"Li, Chen",
"Hu, Xiaoling",
"Abousamra, Shahira",
"Xu, Meilong",
"Chen, Chao"
] | Conference | 2409.03106 | [
"https://github.com/superlc1995/Diffusion-cell"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 548 |
|
null | https://papers.miccai.org/miccai-2024/paper/2491_paper.pdf | @InProceedings{ Yan_Simplify_MICCAI2024,
author = { Yang, Xinquan and Li, Xuguang and Luo, Xiaoling and Zeng, Leilei and Zhang, Yudi and Shen, Linlin and Deng, Yongqiang },
title = { { Simplify Implant Depth Prediction as Video Grounding: A Texture Perceive Implant Depth Prediction Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | The surgical guide plate is an important tool for dental implant surgery. However, the design process heavily relies on the dentist to manually simulate the implant angle and depth. While deep neural networks have been applied to assist the dentist in quickly locating the implant position, most of them are not able to determine the implant depth. Inspired by the video grounding task, which localizes the starting and ending time of the target video segment, in this paper, we simplify the implant depth prediction as video grounding and develop a Texture Perceiver Implant Depth Prediction Network (TPNet), which enables us to directly output the implant depth without complex measurements of oral bone. TPNet consists of an implant region detector (IRD) and an implant depth prediction network (IDPNet). IRD is an object detector designed to crop the candidate implant volume from the CBCT, which greatly saves computational resources. IDPNet takes the cropped CBCT data to predict the implant depth. A Texture Perceive Loss (TPL) is devised to enable the encoder of IDPNet to perceive the texture variation among slices. Extensive experiments on a large dental implant dataset demonstrated that the proposed TPNet achieves superior performance compared to existing methods. | Simplify Implant Depth Prediction as Video Grounding: A Texture Perceive Implant Depth Prediction Network | [
"Yang, Xinquan",
"Li, Xuguang",
"Luo, Xiaoling",
"Zeng, Leilei",
"Zhang, Yudi",
"Shen, Linlin",
"Deng, Yongqiang"
] | Conference | 2406.04603 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 549 |
|
null | https://papers.miccai.org/miccai-2024/paper/0290_paper.pdf | @InProceedings{ Gow_Masks_MICCAI2024,
author = { Gowda, Shreyank N. and Clifton, David A. },
title = { { Masks and Manuscripts: Advancing Medical Pre-training with End-to-End Masking and Narrative Structuring } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Contemporary medical contrastive learning faces challenges from inconsistent semantics and sample pair morphology, leading to dispersed and converging semantic shifts. The variability in text reports, due to multiple authors, complicates semantic consistency. To tackle these issues, we propose a two-step approach. Initially, text reports are converted into a standardized triplet format, laying the groundwork for our novel concept of “observations” and “verdicts.” This approach refines the {Entity, Position, Exist} triplet into binary questions, guiding towards a clear “verdict.” We also innovate in visual pre-training with a Meijering-based masking, focusing on features representative of medical images’ local context. By integrating this with our text conversion method, our model advances cross-modal representation in a multimodal contrastive learning framework, setting new benchmarks in medical image analysis. | Masks and Manuscripts: Advancing Medical Pre-training with End-to-End Masking and Narrative Structuring | [
"Gowda, Shreyank N.",
"Clifton, David A."
] | Conference | 2407.16264 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 550 |
|
null | https://papers.miccai.org/miccai-2024/paper/2509_paper.pdf | @InProceedings{ Nov_Ataskconditional_MICCAI2024,
author = { Novosad, Philip and Carano, Richard A. D. and Krishnan, Anitha Priya },
title = { { A task-conditional mixture-of-experts model for missing modality segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Accurate quantification of multiple sclerosis (MS) lesions using multi-contrast magnetic resonance imaging (MRI) plays a crucial role in disease assessment. While many methods for automatic MS lesion segmentation in MRI are available, these methods typically require a fixed set of MRI modalities as inputs. Such full multi-contrast inputs are not always acquired, limiting their utility in practice. To address this issue, a training strategy known as modality dropout (MD) has been widely adopted in the literature. However, models trained via MD still under-perform compared to dedicated models trained for particular modality configurations. In this work, we hypothesize that the poor performance of MD is the result of an overly constrained multi-task optimization problem. First, to reduce harmful task interference, we propose to incorporate task-conditional mixture-of-expert layers into our segmentation model, allowing different tasks to leverage different parameter subsets. Second, we propose a novel online self-distillation loss to help regularize the model and to explicitly promote model invariance to input modality configuration. Compared to standard MD training, our method demonstrates improved results on a large proprietary clinical trial dataset as well as on a small publicly available dataset of T2 lesions. | A task-conditional mixture-of-experts model for missing modality segmentation | [
"Novosad, Philip",
"Carano, Richard A. D.",
"Krishnan, Anitha Priya"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 551 |
||
null | https://papers.miccai.org/miccai-2024/paper/1593_paper.pdf | @InProceedings{ Gu_Unsupervised_MICCAI2024,
author = { Gu, Mingxuan and Thies, Mareike and Mei, Siyuan and Wagner, Fabian and Fan, Mingcheng and Sun, Yipeng and Pan, Zhaoya and Vesal, Sulaiman and Kosti, Ronak and Possart, Dennis and Utz, Jonas and Maier, Andreas },
title = { { Unsupervised Domain Adaptation using Soft-Labeled Contrastive Learning with Reversed Monte Carlo Method for Cardiac Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Recent unsupervised domain adaptation methods in medical image segmentation adopt centroid/prototypical contrastive learning (CL) to match the source and target features for their excellent ability of representation learning and semantic feature alignment. Of these CL methods, most works extract features with a binary mask generated by similarity measure or thresholding the prediction. However, this hard-threshold (HT) strategy may induce sparse features and incorrect label assignments. Conversely, while the soft-labeling technique has proven effective in addressing the limitations of the HT strategy by assigning importance factors to pixel features, it remains unexplored in CL algorithms. Thus, in this work, we present a novel CL approach leveraging soft pseudo labels for category-wise target centroid generation, complemented by a reversed Monte Carlo method to achieve a more compact target feature space. Additionally, we propose a centroid norm regularizer as an extra magnitude constraint to bolster the model’s robustness. Extensive experiments and ablation studies on two cardiac data sets underscore the effectiveness of each component and reveal a significant enhancement in segmentation results in Dice Similarity Score and Hausdorff Distance 95 compared with a wide range of state-of-the-art methods. | Unsupervised Domain Adaptation using Soft-Labeled Contrastive Learning with Reversed Monte Carlo Method for Cardiac Image Segmentation | [
"Gu, Mingxuan",
"Thies, Mareike",
"Mei, Siyuan",
"Wagner, Fabian",
"Fan, Mingcheng",
"Sun, Yipeng",
"Pan, Zhaoya",
"Vesal, Sulaiman",
"Kosti, Ronak",
"Possart, Dennis",
"Utz, Jonas",
"Maier, Andreas"
] | Conference | [
"https://github.com/MingxuanGu/Soft-Labeled-Contrastive-Learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 552 |
||
null | https://papers.miccai.org/miccai-2024/paper/1045_paper.pdf | @InProceedings{ Dai_Advancing_MICCAI2024,
author = { Dai, Ling and Zhao, Kaitao and Li, Zhongyu and Zhu, Jihua and Liang, Libin },
title = { { Advancing Sensorless Freehand 3D Ultrasound Reconstruction with a Novel Coupling Pad } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Sensorless freehand 3D ultrasound (US) reconstruction poses a significant challenge, yet it holds considerable importance in improving the accessibility of 3D US applications in clinics. Current mainstream solutions, relying on inertial measurement units or deep learning, encounter issues like cumulative drift. To overcome these limitations, we present a novel sensorless 3D US solution with two key contributions. Firstly, we develop a novel coupling pad for 3D US, which can be seamlessly integrated into the conventional 2D US scanning process. This pad, featuring 3 N-shaped lines, provides 3D spatial information without relying on external tracking devices. Secondly, we introduce a coarse-to-fine optimization method for calculating poses of sequential 2D US images. The optimization begins with a rough estimation of poses and undergoes refinement using a distance-topology discrepancy reduction strategy. The proposed method is validated by both simulation and practical phantom studies, demonstrating its superior performance compared to state-of-the-art methods and good accuracy in 3D US reconstruction. | Advancing Sensorless Freehand 3D Ultrasound Reconstruction with a Novel Coupling Pad | [
"Dai, Ling",
"Zhao, Kaitao",
"Li, Zhongyu",
"Zhu, Jihua",
"Liang, Libin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 553 |
||
null | https://papers.miccai.org/miccai-2024/paper/1914_paper.pdf | @InProceedings{ Eic_PhysicsInformed_MICCAI2024,
author = { Eichhorn, Hannah and Spieker, Veronika and Hammernik, Kerstin and Saks, Elisa and Weiss, Kilian and Preibisch, Christine and Schnabel, Julia A. },
title = { { Physics-Informed Deep Learning for Motion-Corrected Reconstruction of Quantitative Brain MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | We propose PHIMO, a physics-informed learning-based motion correction method tailored to quantitative MRI. PHIMO leverages information from the signal evolution to exclude motion-corrupted k-space lines from a data-consistent reconstruction. We demonstrate the potential of PHIMO for the application of T2* quantification from gradient echo MRI, which is particularly sensitive to motion due to its sensitivity to magnetic field inhomogeneities. A state-of-the-art technique for motion correction requires redundant acquisition of the k-space center, prolonging the acquisition.
We show that PHIMO can detect and exclude intra-scan motion events and, thus, correct for severe motion artifacts. PHIMO approaches the performance of the state-of-the-art motion correction method, while substantially reducing the acquisition time by over 40%, facilitating clinical applicability. Our code is available at https://github.com/compai-lab/2024-miccai-eichhorn. | Physics-Informed Deep Learning for Motion-Corrected Reconstruction of Quantitative Brain MRI | [
"Eichhorn, Hannah",
"Spieker, Veronika",
"Hammernik, Kerstin",
"Saks, Elisa",
"Weiss, Kilian",
"Preibisch, Christine",
"Schnabel, Julia A."
] | Conference | 2403.08298 | [
"https://github.com/compai-lab/2024-miccai-eichhorn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 554 |
|
null | https://papers.miccai.org/miccai-2024/paper/2762_paper.pdf | @InProceedings{ Wan_SIXNet_MICCAI2024,
author = { Wang, Xinyi and Xu, Zikang and Zhu, Heqin and Yao, Qingsong and Sun, Yiyong and Zhou, S. Kevin },
title = { { SIX-Net: Spatial-context Information miX-up for Electrode Landmark Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Catheter ablation is a prevalent procedure for treating atrial fibrillation, primarily utilizing catheters equipped with electrodes to gather electrophysiological signals. However, the localization of catheters in fluoroscopy images presents a challenge for clinicians due to the complexity of the intervention processes. In this paper, we propose SIX-Net, a novel algorithm intending to localize landmarks of electrodes in fluoroscopy images precisely, by mixing up spatial-context information from three aspects:
First, we propose a new network architecture specially designed for global-local spatial feature aggregation; Then, we mix up spatial correlations between segmentation and landmark detection, by sequential connections between the two tasks with the help of the Segment Anything Model; Finally, a weighted loss function is carefully designed considering the relative spatial-arrangement information among electrodes in the same image.
Experimental results on the test set and two clinically challenging subsets reveal that our method outperforms several state-of-the-art landmark detection methods (~50% improvement for RF and ~25% improvement for CS). | SIX-Net: Spatial-context Information miX-up for Electrode Landmark Detection | [
"Wang, Xinyi",
"Xu, Zikang",
"Zhu, Heqin",
"Yao, Qingsong",
"Sun, Yiyong",
"Zhou, S. Kevin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 555 |
||
null | https://papers.miccai.org/miccai-2024/paper/1678_paper.pdf | @InProceedings{ Li_MoCoDiff_MICCAI2024,
author = { Li, Feng and Zhou, Zijian and Fang, Yu and Cai, Jiangdong and Wang, Qian },
title = { { MoCo-Diff: Adaptive Conditional Prior on Diffusion Network for MRI Motion Correction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Magnetic Resonance Image (MRI) is a powerful medical imaging modality with non-ionizing radiation. However, due to its long scanning time, patient movement is prone to occur during acquisition. Severe motions can significantly degrade the image quality and make the images non-diagnostic. This paper introduces MoCo-Diff, a novel two-stage deep learning framework designed to correct the motion artifacts in 3D MRI volumes. In the first stage, we exploit a novel attention mechanism using shift window-based transformers in both the in-slice and through-slice directions to effectively remove the motion artifacts. In the second stage, the initially-corrected image serves as the prior for realistic MR image restoration. This stage incorporates the pre-trained Stable Diffusion to leverage its robust generative capability and the ControlUNet to fine-tune the diffusion model with the assistance of the prior. Moreover, we introduce an uncertainty predictor to assess the reliability of the motion-corrected images, which not only visually hints the motion correction errors but also enhances motion correction quality by trimming the prior with dynamic weights. Our experiments illustrate MoCo-Diff’s superiority over state-of-the-art approaches in removing motion artifacts and retaining anatomical details across different levels of motion severity. The code is available at https://github.com/fengza/MoCo-Diff. | MoCo-Diff: Adaptive Conditional Prior on Diffusion Network for MRI Motion Correction | [
"Li, Feng",
"Zhou, Zijian",
"Fang, Yu",
"Cai, Jiangdong",
"Wang, Qian"
] | Conference | [
"https://github.com/fengza/MoCo-Diff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 556 |
||
null | https://papers.miccai.org/miccai-2024/paper/2540_paper.pdf | @InProceedings{ Zha_Mixed_MICCAI2024,
author = { Zhang, Si-Miao and Wang, Jing and Wang, Yi-Xuan and Liu, Tao and Zhu, Haogang and Zhang, Han and Cheng, Jian },
title = { { Mixed Integer Linear Programming for Discrete Sampling Scheme Design in Diffusion MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | In diffusion MRI (dMRI), a uniform single or multiple shell sampling scheme is typically required for data acquisition in q-space, because uniform spherical sampling offers the advantage of capturing more information using fewer samples, leading to superior reconstruction results. Uniform sampling problems can be categorized into continuous and discrete types. While most existing sampling methods focus on the continuous problem, which is to design spherical samples continuously from single or multiple shells, this paper primarily investigates two discrete optimization problems, i.e., 1) optimizing the polarity of an existing scheme (P-P), and 2) optimizing the ordering of an existing scheme (P-O). Existing approaches for these two problems mainly rely on greedy algorithms, simulated annealing, and exhaustive search, which fail to obtain global optima within a reasonable timeframe. We propose several Mixed Integer Linear Programming (MILP) based methods to address these problems. To the best of our knowledge, this is the first work that solves these two discrete problems using MILP to obtain globally optimal or sufficiently good solutions in 10 minutes. Experiments performed on single and multiple shells demonstrate that our MILP methods can achieve larger separation angles and lower electrostatic energy, resulting in better reconstruction results compared with existing approaches in commonly used software (i.e., CAMINO and MRtrix). | Mixed Integer Linear Programming for Discrete Sampling Scheme Design in Diffusion MRI | [
"Zhang, Si-Miao",
"Wang, Jing",
"Wang, Yi-Xuan",
"Liu, Tao",
"Zhu, Haogang",
"Zhang, Han",
"Cheng, Jian"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 557 |
||
null | https://papers.miccai.org/miccai-2024/paper/3581_paper.pdf | @InProceedings{ Feh_Intraoperative_MICCAI2024,
author = { Fehrentz, Maximilian and Azampour, Mohammad Farid and Dorent, Reuben and Rasheed, Hassan and Galvin, Colin and Golby, Alexandra and Wells III, William M. and Frisken, Sarah and Navab, Nassir and Haouchine, Nazim },
title = { { Intraoperative Registration by Cross-Modal Inverse Neural Rendering } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | We present in this paper a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering. Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively. This disentanglement is achieved by controlling a Neural Radiance Field’s appearance with a multi-style hypernetwork. Once trained, the implicit neural representation serves as a differentiable rendering engine, which can be used to estimate the surgical camera pose by minimizing the dissimilarity between its rendered images and the target intraoperative image. We tested our method on retrospective patients’ data from clinical cases, showing that our method outperforms state-of-the-art while meeting current clinical standards for registration. | Intraoperative Registration by Cross-Modal Inverse Neural Rendering | [
"Fehrentz, Maximilian",
"Azampour, Mohammad Farid",
"Dorent, Reuben",
"Rasheed, Hassan",
"Galvin, Colin",
"Golby, Alexandra",
"Wells III, William M.",
"Frisken, Sarah",
"Navab, Nassir",
"Haouchine, Nazim"
] | Conference | 2409.11983 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 558 |
|
null | https://papers.miccai.org/miccai-2024/paper/0410_paper.pdf | @InProceedings{ Rof_Feature_MICCAI2024,
author = { Roffo, Giorgio and Biffi, Carlo and Salvagnini, Pietro and Cherubini, Andrea },
title = { { Feature Selection Gates with Gradient Routing for Endoscopic Image Computing } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | To address overfitting and enhance model generalization in gastroenterological polyp size assessment, our study introduces Feature Selection Gates (FSG) alongside Gradient Routing (GR) for dynamic feature selection. This technique aims to boost Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) by promoting sparse connectivity, thereby reducing overfitting and enhancing generalization. FSG achieves this through sparsification with learnable weights, serving as a regularization strategy. GR further refines this process by optimizing FSG parameters via dual forward passes, independently from the main model, to improve feature re-weighting. Our evaluation spanned multiple datasets, including CIFAR-100 for a broad impact assessment and specialized endoscopic datasets (REAL-Colon [12], Misawa [9], and SUN [13]) focusing on polyp size estimation, covering over 200 polyps in more than 370K frames. The findings indicate that our FSG-enhanced networks substantially enhance performance in both binary and triclass classification tasks related to polyp sizing. Specifically, CNNs experienced an F1 Score improvement to 87.8% in binary classification, while in triclass classification, the ViT-T model reached an F1 Score of 76.5%, outperforming traditional CNNs and ViT-T models. To facilitate further research, we are releasing our codebase, which includes implementations for CNNs, multistream CNNs, ViT, and FSG-augmented variants. This resource aims to standardize the use of endoscopic datasets, providing public training-validation-testing splits for reliable and comparable research in gastroenterological polyp size estimation. The codebase is available at github.com/cosmoimd/feature-selection-gates. | Feature Selection Gates with Gradient Routing for Endoscopic Image Computing | [
"Roffo, Giorgio",
"Biffi, Carlo",
"Salvagnini, Pietro",
"Cherubini, Andrea"
] | Conference | [
"https://github.com/cosmoimd/feature-selection-gates"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 559 |
||
null | https://papers.miccai.org/miccai-2024/paper/0391_paper.pdf | @InProceedings{ Pou_CARMFL_MICCAI2024,
author = { Poudel, Pranav and Shrestha, Prashant and Amgain, Sanskar and Shrestha, Yash Raj and Gyawali, Prashnna and Bhattarai, Binod },
title = { { CAR-MFL: Cross-Modal Augmentation by Retrieval for Multimodal Federated Learning with Missing Modalities } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Multimodal AI has demonstrated superior performance over unimodal approaches by leveraging diverse data sources for more comprehensive analysis. However, applying this effectiveness in healthcare is challenging due to the limited availability of public datasets. Federated learning presents an exciting solution, allowing the use of extensive databases from hospitals and health centers without centralizing sensitive data, thus maintaining privacy and security. Yet, research in multimodal federated learning, particularly in scenarios with missing modalities—a common issue in healthcare datasets—remains scarce, highlighting a critical area for future exploration. Toward this, we propose a novel method for multimodal federated learning with missing modalities. Our contribution lies in a novel cross-modal data augmentation by retrieval, leveraging the small publicly available dataset to fill the missing modalities in the clients. Our method learns the parameters in a federated manner, ensuring privacy protection and improving performance in multiple challenging multimodal benchmarks in the medical domain, surpassing several competitive baselines. | CAR-MFL: Cross-Modal Augmentation by Retrieval for Multimodal Federated Learning with Missing Modalities | [
"Poudel, Pranav",
"Shrestha, Prashant",
"Amgain, Sanskar",
"Shrestha, Yash Raj",
"Gyawali, Prashnna",
"Bhattarai, Binod"
] | Conference | 2407.08648 | [
"https://github.com/bhattarailab/CAR-MFL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 560 |
|
null | https://papers.miccai.org/miccai-2024/paper/3820_paper.pdf | @InProceedings{ Kar_An_MICCAI2024,
author = { Karimi, Davood },
title = { { An approach to building foundation models for brain image analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Existing machine learning methods for brain image analysis are mostly based on supervised training. They require large labeled datasets, which can be costly or impossible to obtain. Moreover, the trained models are useful only for the narrow task defined by the labels. In this work, we developed a new method, based on the concept of foundation models, to overcome these limitations. Our model is an attention-based neural network that is trained using a novel self-supervised approach. Specifically, the model is trained to generate brain images in a patch-wise manner, thereby learning the brain structure. To facilitate learning of image details, we propose a new method that encodes high-frequency information using convolutional kernels with random weights. We trained our model on a pool of 10 public datasets. We then applied the model on five independent datasets to perform segmentation, lesion detection, denoising, and brain age estimation. Results showed that the foundation model achieved competitive or better results on all tasks, while significantly reducing the required amount of labeled training data. Our method enables leveraging large unlabeled neuroimaging datasets to effectively address diverse brain image analysis tasks and reduce the time and cost requirements of acquiring labels. | An approach to building foundation models for brain image analysis | [
"Karimi, Davood"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 561 |
||
null | https://papers.miccai.org/miccai-2024/paper/0298_paper.pdf | @InProceedings{ Yeh_Insight_MICCAI2024,
author = { Yeh, Chun-Hsiao and Wang, Jiayun and Graham, Andrew D. and Liu, Andrea J. and Tan, Bo and Chen, Yubei and Ma, Yi and Lin, Meng C. },
title = { { Insight: A Multi-Modal Diagnostic Pipeline using LLMs for Ocular Surface Disease Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Accurate diagnosis of ocular surface diseases is critical in optometry and ophthalmology, which hinge on integrating clinical data sources (e.g., meibography imaging and clinical metadata). Traditional human assessments lack precision in quantifying clinical observations, while current machine-based methods often treat diagnoses as multi-class classification problems, limiting the diagnoses to a predefined closed-set of curated answers without reasoning the clinical relevance of each variable to the diagnosis. To tackle these challenges, we introduce an innovative multi-modal diagnostic pipeline (MDPipe) by employing large language models (LLMs) for ocular surface disease diagnosis.
We first employ a visual translator to interpret meibography images by converting them into quantifiable morphology data, facilitating their integration with clinical metadata and enabling the communication of nuanced medical insight to LLMs. To further advance this communication, we introduce a LLM-based summarizer to contextualize the insight from the combined morphology and clinical metadata, and generate clinical report summaries. Finally, we refine the LLMs’ reasoning ability with domain-specific insight from real-life clinician diagnoses. Our evaluation across diverse ocular surface disease diagnosis benchmarks demonstrates that MDPipe outperforms existing standards, including GPT-4, and provides clinically sound rationales for diagnoses. The project is available at \url{https://danielchyeh.github.io/MDPipe/}. | Insight: A Multi-Modal Diagnostic Pipeline using LLMs for Ocular Surface Disease Diagnosis | [
"Yeh, Chun-Hsiao",
"Wang, Jiayun",
"Graham, Andrew D.",
"Liu, Andrea J.",
"Tan, Bo",
"Chen, Yubei",
"Ma, Yi",
"Lin, Meng C."
] | Conference | 2410.00292 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 562 |
|
null | https://papers.miccai.org/miccai-2024/paper/2548_paper.pdf | @InProceedings{ Zhu_Efficient_MICCAI2024,
author = { Zhu, Yuanzhuo and Lian, Chunfeng and Li, Xianjun and Wang, Fan and Ma, Jianhua },
title = { { Efficient Cortical Surface Parcellation via Full-Band Diffusion Learning at Individual Space } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Cortical parcellation delineates the cerebral cortex into distinct regions based on anatomical and/or functional criteria, a process crucial for neuroscientific research and clinical applications. Conventional methods for cortical parcellation involve spherical mapping and complex feature computation, which are time-consuming and prone to error. Recent geometric learning approaches offer some improvements but may still depend on spherical mapping and could be sensitive to mesh variations. In this work, we present Cortex-Diffusion, a fully automatic framework for cortical parcellation on native cortical surfaces without spherical mapping or morphological feature extraction. Leveraging the DiffusionNet as its backbone, Cortex-Diffusion integrates a newly designed module for full-band spectral-accelerated spatial diffusion learning to adaptively aggregate information across highly convoluted meshes, allowing high-resolution geometric representation and accurate vertex-wise delineation. Using only raw 3D vertex coordinates, the model is compact, with merely 0.49 MB of learnable parameters. Extensive experiments on adult and infant datasets demonstrate that Cortex-Diffusion achieves superior accuracy and robustness in cortical parcellation. | Efficient Cortical Surface Parcellation via Full-Band Diffusion Learning at Individual Space | [
"Zhu, Yuanzhuo",
"Lian, Chunfeng",
"Li, Xianjun",
"Wang, Fan",
"Ma, Jianhua"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 563 |
||
null | https://papers.miccai.org/miccai-2024/paper/3351_paper.pdf | @InProceedings{ Wan_LIBR_MICCAI2024,
author = { Wang, Dingrong and Azadvar, Soheil and Heiselman, Jon and Jiang, Xiajun and Miga, Michael and Wang, Linwei },
title = { { LIBR+: Improving Intraoperative Liver Registration by Learning the Residual of Biomechanics-Based Deformable Registration } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | The surgical environment imposes unique challenges on the intraoperative registration of organ shapes to their preoperatively-imaged geometry. Biomechanical model-based registration remains popular, while deep learning solutions remain limited due to the sparsity and variability of intraoperative measurements and the limited ground-truth deformation of an organ that can be obtained during the surgery. In this paper, we propose a novel hybrid registration approach that leverages a linearized iterative boundary reconstruction (LIBR) method based on linear elastic biomechanics, and uses deep neural networks to learn its residual to the ground-truth deformation (LIBR+). We further formulate a dual-branch spline-residual graph convolutional neural network (SR-GCN) to assimilate information from sparse and variable intraoperative measurements and effectively propagate it through the geometry of the 3D organ. Experiments on a large intraoperative liver registration dataset demonstrated the consistent improvements achieved by LIBR+ in comparison to existing rigid, biomechanical model-based non-rigid, and deep-learning based non-rigid approaches to intraoperative liver registration. | LIBR+: Improving Intraoperative Liver Registration by Learning the Residual of Biomechanics-Based Deformable Registration | [
"Wang, Dingrong",
"Azadvar, Soheil",
"Heiselman, Jon",
"Jiang, Xiajun",
"Miga, Michael",
"Wang, Linwei"
] | Conference | 2403.06901 | [
"https://github.com/wdr123/splineCNN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 564 |
|
null | https://papers.miccai.org/miccai-2024/paper/1214_paper.pdf | @InProceedings{ Ily_AHybrid_MICCAI2024,
author = { Ilyas, Zaid and Saleem, Afsah and Suter, David and Schousboe, John T. and Leslie, William D. and Lewis, Joshua R. and Gilani, Syed Zulqarnain },
title = { { A Hybrid CNN-Transformer Feature Pyramid Network for Granular Abdominal Aortic Calcification Detection from DXA Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Cardiovascular Diseases (CVDs) stand as the primary global cause of mortality, with Abdominal Aortic Calcification (AAC) being a stable marker of these conditions. AAC can be observed in Dual Energy X-ray absorptiometry (DXA) lateral view Vertebral Fracture Assessment (VFA) scans, usually performed for the detection of vertebral fractures. Early detection of AAC can help reduce the risk of developing clinical CVD by encouraging preventive measures. Recent efforts to automate DXA VFA image analysis for AAC detection are restricted to either predicting an overall AAC score, or they lack performance in granular AAC score prediction. The latter is important in helping clinicians predict CVD associated with the diminished Windkessel effect in the aorta. In this regard, we propose a hybrid Feature Pyramid Network (FPN) based CNN-Transformer architecture (Hybrid-FPN-AACNet) that employs a novel Dual Resolution Self-Attention (DRSA) mechanism to enhance context for self-attention by working on two different resolutions of the input feature map. Moreover, the proposed architecture also employs a novel Efficient Feature Fusion Module (EFFM) that efficiently combines the features from different hierarchies of Hybrid-FPN-AACNet for regression tasks. The proposed architecture has achieved State-Of-The-Art (SOTA) performance at a granular level compared to previous work. | A Hybrid CNN-Transformer Feature Pyramid Network for Granular Abdominal Aortic Calcification Detection from DXA Images | [
"Ilyas, Zaid",
"Saleem, Afsah",
"Suter, David",
"Schousboe, John T.",
"Leslie, William D.",
"Lewis, Joshua R.",
"Gilani, Syed Zulqarnain"
] | Conference | [
"https://github.com/zaidilyas89/Hybrid-FPN-AACNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 565 |
||
null | https://papers.miccai.org/miccai-2024/paper/3846_paper.pdf | @InProceedings{ Bis_Adaptive_MICCAI2024,
author = { Biswas, Koushik and Jha, Debesh and Tomar, Nikhil Kumar and Karri, Meghana and Reza, Amit and Durak, Gorkem and Medetalibeyoglu, Alpay and Antalek, Matthew and Velichko, Yury and Ladner, Daniela and Borhani, Amir and Bagci, Ulas },
title = { { Adaptive Smooth Activation Function for Improved Organ Segmentation and Disease Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The design of activation functions constitutes a cornerstone for deep learning (DL) applications, exerting a profound influence on the performance and capabilities of neural networks. This influence stems from their ability to introduce non-linearity into the network architecture. By doing so, activation functions empower the network to learn and model intricate data patterns and relationships, surpassing the limitations of linear models. In this study, we propose a new activation function, called {Adaptive Smooth Activation Unit \textit{(\textbf{ASAU})}}, tailored for optimized gradient propagation, thereby enhancing the proficiency of deep networks in medical image analysis. We apply this new activation function to two important and commonly used general tasks in medical image analysis: automatic disease diagnosis and organ segmentation in CT and MRI scans. Our rigorous evaluation on the \textit{RadImageNet} abdominal/pelvis (CT and MRI) demonstrates that our ASAU-integrated classification frameworks achieve a substantial improvement of 4.80\% over ReLU based frameworks in classification accuracy for disease detection. Also, the proposed framework on Liver Tumor Segmentation (LiTS) 2017 Benchmarks obtains 1\%-to-3\% improvement in dice coefficient compared to widely used activations for segmentation tasks. The superior performance and adaptability of ASAU highlight its potential for integration into a wide range of image classification and segmentation tasks. The code is available at \href{https://github.com/koushik313/ASAU}{https://github.com/koushik313/ASAU}. | Adaptive Smooth Activation Function for Improved Organ Segmentation and Disease Diagnosis | [
"Biswas, Koushik",
"Jha, Debesh",
"Tomar, Nikhil Kumar",
"Karri, Meghana",
"Reza, Amit",
"Durak, Gorkem",
"Medetalibeyoglu, Alpay",
"Antalek, Matthew",
"Velichko, Yury",
"Ladner, Daniela",
"Borhani, Amir",
"Bagci, Ulas"
] | Conference | [
"https://github.com/koushik313/ASAU"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 566 |
||
null | https://papers.miccai.org/miccai-2024/paper/0073_paper.pdf | @InProceedings{ Jia_Towards_MICCAI2024,
author = { Jiang, Yuncheng and Hu, Yiwen and Zhang, Zixun and Wei, Jun and Feng, Chun-Mei and Tang, Xuemei and Wan, Xiang and Liu, Yong and Cui, Shuguang and Li, Zhen },
title = { { Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Endorectal ultrasound (ERUS) is an important imaging modality that provides high reliability for diagnosing the depth and boundary of invasion in colorectal cancer. However, the lack of a large-scale ERUS dataset with high-quality annotations hinders the development of automatic ultrasound diagnostics. In this paper, we collected and annotated the first benchmark dataset that covers diverse ERUS scenarios, \textit{i.e.} colorectal cancer segmentation, detection, and infiltration depth staging. Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames. Based on this dataset, we further introduce a benchmark model for colorectal cancer segmentation, named the \textbf{A}daptive \textbf{S}parse-context \textbf{TR}ansformer (\textbf{ASTR}). ASTR is designed based on three considerations: scanning mode discrepancy, temporal information, and low computational complexity. For generalizing to different scanning modes, the adaptive scanning-mode augmentation is proposed to convert between raw sector images and linear scan ones. For mining temporal information, the sparse-context transformer is incorporated to integrate inter-frame local and global features. For reducing computational complexity, the sparse-context block is introduced to extract contextual features from auxiliary frames. Finally, on the benchmark dataset, the proposed ASTR model achieves a $77.6\%$ Dice score in rectal cancer segmentation, largely outperforming previous state-of-the-art methods. | Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development | [
"Jiang, Yuncheng",
"Hu, Yiwen",
"Zhang, Zixun",
"Wei, Jun",
"Feng, Chun-Mei",
"Tang, Xuemei",
"Wan, Xiang",
"Liu, Yong",
"Cui, Shuguang",
"Li, Zhen"
] | Conference | 2408.10067 | [
""
] | https://huggingface.co/papers/2408.10067 | 0 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | Poster | 567 |
null | https://papers.miccai.org/miccai-2024/paper/1723_paper.pdf | @InProceedings{ Chu_RetMIL_MICCAI2024,
author = { Chu, Hongbo and Sun, Qiehe and Li, Jiawen and Chen, Yuxuan and Zhang, Lizhong and Guan, Tian and Han, Anjia and He, Yonghong },
title = { { RetMIL: Retentive Multiple Instance Learning for Histopathological Whole Slide Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Histopathological whole slide image (WSI) analysis using deep learning has become a research focus in computational pathology. The current basic paradigm is the multiple instance learning (MIL) method, which uses a WSI as a bag and the cropped patches as instances. As Transformer has become the mainstream framework of neural networks, many MIL methods based on Transformer have been widely studied. They regard the patches as a sequence and complete tasks based on sequence analysis. However, the long sequences brought by the high heterogeneity and gigapixel nature of WSIs pose challenges to Transformer-based MIL, such as high memory consumption, low inference speed, and even low inference performance. To this end, we propose a hierarchical retentive-based MIL method called RetMIL, which operates at local and global levels. At the local level, patches are divided into multiple subsequences, and each subsequence is updated through a parallel linear retention mechanism and aggregated by each patch embedding. At the global level, a slide-level subsequence is obtained by a serial retention mechanism and attention pooling. Finally, a fully connected layer is used to predict the category score. We conduct experiments on two public CAMELYON and BRACS datasets and an internal TCGASYS-LUNG dataset, confirming that RetMIL not only has state-of-the-art performance but also significantly reduces computational overhead. | RetMIL: Retentive Multiple Instance Learning for Histopathological Whole Slide Image Classification | [
"Chu, Hongbo",
"Sun, Qiehe",
"Li, Jiawen",
"Chen, Yuxuan",
"Zhang, Lizhong",
"Guan, Tian",
"Han, Anjia",
"He, Yonghong"
] | Conference | 2403.10858 | [
"https://github.com/Hongbo-Chu/RetMIL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 568 |
|
null | https://papers.miccai.org/miccai-2024/paper/1280_paper.pdf | @InProceedings{ Jia_Multimodal_MICCAI2024,
author = { Jiang, Songhan and Gan, Zhengyu and Cai, Linghan and Wang, Yifeng and Zhang, Yongbing },
title = { { Multimodal Cross-Task Interaction for Survival Analysis in Whole Slide Pathological Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Survival prediction, utilizing pathological images and genomic profiles, is increasingly important in cancer analysis and prognosis. Despite significant progress, precise survival analysis still faces two main challenges:
(1) The massive number of pixels contained in whole slide images (WSIs) complicates the processing of pathological images, making it difficult to generate an effective representation of the tumor microenvironment (TME).
(2) Existing multimodal methods often rely on alignment strategies to integrate complementary information, which may lead to information loss due to the inherent heterogeneity between pathology and genes.
In this paper, we propose a Multimodal Cross-Task Interaction (MCTI) framework to explore the intrinsic correlations between subtype classification and survival analysis tasks. Specifically, to capture TME-related features in WSIs, we leverage the subtype classification task to mine tumor regions. Simultaneously, multi-head attention mechanisms are applied in genomic feature extraction, adaptively performing genes grouping to obtain task-related genomic embedding. With the joint representation of pathological images and genomic data, we further introduce a Transport-Guided Attention (TGA) module that uses optimal transport theory to model the correlation between subtype classification and survival analysis tasks, effectively transferring potential information. Extensive experiments demonstrate the superiority of our approaches, with MCTI outperforming state-of-the-art frameworks on three public benchmarks. | Multimodal Cross-Task Interaction for Survival Analysis in Whole Slide Pathological Images | [
"Jiang, Songhan",
"Gan, Zhengyu",
"Cai, Linghan",
"Wang, Yifeng",
"Zhang, Yongbing"
] | Conference | 2406.17225 | [
"https://github.com/jsh0792/MCTI"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 569 |
|
null | https://papers.miccai.org/miccai-2024/paper/0584_paper.pdf | @InProceedings{ Kon_AnatomicallyControllable_MICCAI2024,
author = { Konz, Nicholas and Chen, Yuwen and Dong, Haoyu and Mazurowski, Maciej A. },
title = { { Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Diffusion models have enabled remarkably high-quality medical image generation, yet it is challenging to enforce anatomical constraints in generated images. To this end, we propose a diffusion model-based method that supports anatomically-controllable medical image generation, by following a multi-class anatomical segmentation mask at each sampling step. We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints while allowing flexibility in other anatomical areas. We compare our method (“SegGuidedDiff”) to existing methods on breast MRI and abdominal/neck-to-pelvis CT datasets with a wide range of anatomical objects. Results show that our method reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks on both datasets, and is on par for general anatomical realism. Finally, our model also enjoys the extra benefit of being able to adjust the anatomical similarity of generated images to real images of choice through interpolation in its latent space. SegGuidedDiff has many applications, including cross-modality translation, and the generation of paired or counterfactual data. Our code is available at https://github.com/mazurowski-lab/segmentation-guided-diffusion. | Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models | [
"Konz, Nicholas",
"Chen, Yuwen",
"Dong, Haoyu",
"Mazurowski, Maciej A."
] | Conference | 2402.05210 | [
"https://github.com/mazurowski-lab/segmentation-guided-diffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 570 |
|
null | https://papers.miccai.org/miccai-2024/paper/2030_paper.pdf | @InProceedings{ Wan_Advancing_MICCAI2024,
author = { Wang, Hongqiu and Luo, Xiangde and Chen, Wu and Tang, Qingqing and Xin, Mei and Wang, Qiong and Zhu, Lei },
title = { { Advancing UWF-SLO Vessel Segmentation with Source-Free Active Domain Adaptation and a Novel Multi-Center Dataset } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Accurate vessel segmentation in Ultra-Wide-Field Scanning Laser Ophthalmoscopy (UWF-SLO) images is crucial for diagnosing retinal diseases. Although recent techniques have shown encouraging outcomes in vessel segmentation, models trained on one medical dataset often underperform on others due to domain shifts. Meanwhile, manually labeling high-resolution UWF-SLO images is an extremely challenging, time-consuming and expensive task. In response, this study introduces a pioneering framework that leverages a patch-based active domain adaptation approach. By actively recommending a few valuable image patches by the devised Cascade Uncertainty-Predominance (CUP) selection strategy for labeling and model-finetuning, our method significantly improves the accuracy of UWF-SLO vessel segmentation across diverse medical centers. In addition, we annotate and construct the first Multi-center UWF-SLO Vessel Segmentation (MU-VS) dataset to promote this topic research, comprising data from multiple institutions. This dataset serves as a valuable resource for cross-center evaluation, verifying the effectiveness and robustness of our approach. Experimental results demonstrate that our approach surpasses existing domain adaptation and active learning methods, considerably reducing the gap between the Upper and Lower bounds with minimal annotations, highlighting our method’s practical clinical value. We will release our dataset and code to facilitate relevant research (https://github.com/whq-xxh/SFADA-UWF-SLO). | Advancing UWF-SLO Vessel Segmentation with Source-Free Active Domain Adaptation and a Novel Multi-Center Dataset | [
"Wang, Hongqiu",
"Luo, Xiangde",
"Chen, Wu",
"Tang, Qingqing",
"Xin, Mei",
"Wang, Qiong",
"Zhu, Lei"
] | Conference | 2406.13645 | [
"https://github.com/whq-xxh/SFADA-UWF-SLO"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 571 |
|
null | https://papers.miccai.org/miccai-2024/paper/3051_paper.pdf | @InProceedings{ Lam_Robust_MICCAI2024,
author = { Lambert, Benjamin and Forbes, Florence and Doyle, Senan and Dojat, Michel },
title = { { Robust Conformal Volume Estimation in 3D Medical Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Volumetry is one of the principal downstream applications of 3D medical image segmentation, for example, to detect abnormal tissue growth or for surgery planning. Conformal Prediction is a promising framework for uncertainty quantification, providing calibrated predictive intervals associated with automatic volume measurements. However, this methodology is based on the hypothesis that calibration and test samples are exchangeable, an assumption that is in practice often violated in medical image applications. A weighted formulation of Conformal Prediction can be framed to mitigate this issue, but its empirical investigation in the medical domain is still lacking. A potential reason is that it relies on the estimation of the density ratio between the calibration and test distributions, which is likely to be intractable in scenarios involving high-dimensional data. To circumvent this, we propose an efficient approach for density ratio estimation relying on the compressed latent representations generated by the segmentation model. Our experiments demonstrate the efficiency of our approach to reduce the coverage error in the presence of covariate shifts, in both synthetic and real-world settings. | Robust Conformal Volume Estimation in 3D Medical Images | [
"Lambert, Benjamin",
"Forbes, Florence",
"Doyle, Senan",
"Dojat, Michel"
] | Conference | 2407.19938 | [
"https://github.com/benolmbrt/wcp_miccai"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 572 |
|
null | https://papers.miccai.org/miccai-2024/paper/1502_paper.pdf | @InProceedings{ Beh_Leveraging_MICCAI2024,
author = { Behrendt, Finn and Bhattacharya, Debayan and Mieling, Robin and Maack, Lennart and Krüger, Julia and Opfer, Roland and Schlaefer, Alexander },
title = { { Leveraging the Mahalanobis Distance to enhance Unsupervised Brain MRI Anomaly Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Unsupervised Anomaly Detection (UAD) methods rely on healthy data distributions to identify anomalies as outliers. In brain MRI, a common approach is reconstruction-based UAD, where generative models reconstruct healthy brain MRIs, and anomalies are detected as deviations between input and reconstruction. However, this method is sensitive to imperfect reconstructions, leading to false positives that impede the segmentation. To address this limitation, we construct multiple reconstructions with probabilistic diffusion models. We then analyze the resulting distribution of these reconstructions using the Mahalanobis distance (MHD) to identify anomalies as outliers. By leveraging information about normal variations and covariance of individual pixels within this distribution, we effectively refine anomaly scoring, leading to improved segmentation.
Our experimental results demonstrate substantial performance improvements across various data sets. Specifically, compared to relying solely on single reconstructions, our approach achieves relative improvements of 15.9%, 35.4%, 48.0%, and 4.7% in terms of AUPRC for the BRATS21, ATLAS, MSLUB and WMH data sets, respectively. | Leveraging the Mahalanobis Distance to enhance Unsupervised Brain MRI Anomaly Detection | [
"Behrendt, Finn",
"Bhattacharya, Debayan",
"Mieling, Robin",
"Maack, Lennart",
"Krüger, Julia",
"Opfer, Roland",
"Schlaefer, Alexander"
] | Conference | 2407.12474 | [
"https://github.com/FinnBehrendt/Mahalanobis-Unsupervised-Anomaly-Detection"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 573 |
|
null | https://papers.miccai.org/miccai-2024/paper/2052_paper.pdf | @InProceedings{ Du_The_MICCAI2024,
author = { Du, Yuning and Dharmakumar, Rohan and Tsaftaris, Sotirios A. },
title = { { The MRI Scanner as a Diagnostic: Image-less Active Sampling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Despite the high diagnostic accuracy of Magnetic Resonance Imaging (MRI), using MRI as a Point-of-Care (POC) disease identification tool poses significant accessibility challenges due to the use of high magnetic field strength and lengthy acquisition times.
We ask a simple question: Can we dynamically optimise acquired samples, at the patient level, according to an (automated) downstream decision task, while discounting image reconstruction?
We propose an ML-based framework that learns an active sampling strategy, via reinforcement learning, at a patient-level to directly infer disease from undersampled k-space. We validate our approach by inferring Meniscus Tear in undersampled knee MRI data, where we achieve diagnostic performance comparable with ML-based diagnosis, using fully sampled k-space data. We analyse task-specific sampling policies, showcasing the adaptability of our active sampling approach. The introduced frugal sampling strategies have the potential to reduce high field strength requirements that in turn strengthen the viability of MRI-based POC disease identification and associated preliminary screening tools. | The MRI Scanner as a Diagnostic: Image-less Active Sampling | [
"Du, Yuning",
"Dharmakumar, Rohan",
"Tsaftaris, Sotirios A."
] | Conference | 2406.16754 | [
"https://github.com/vios-s/MRI_Active_Sampling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 574 |
|
null | https://papers.miccai.org/miccai-2024/paper/1861_paper.pdf | @InProceedings{ Bou_3D_MICCAI2024,
author = { Bourigault, Emmanuelle and Jamaludin, Amir and Zisserman, Andrew },
title = { { 3D Spine Shape Estimation from Single 2D DXA } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Scoliosis is currently assessed solely on 2D lateral deviations, but recent studies have also revealed the importance of other imaging planes in understanding the deformation of the spine. Consequently, extracting the spinal geometry in 3D would help quantify these spinal deformations and aid diagnosis.
In this study, we propose an automated general framework to estimate the {\em 3D }spine shape from {\em 2D} DXA scans. We achieve this by explicitly predicting the sagittal view of the spine from the DXA scan. Using these two orthogonal projections of the spine (coronal in DXA, and sagittal from the prediction), we are able to describe the 3D shape of the spine.
The prediction is learnt from over 30k paired images of DXA and MRI scans. We assess the performance of the method on a held out test set, and achieve high accuracy. Our code is available at \href{https://github.com/EmmanuelleB985/DXA_to_3D}{https://github.com/EmmanuelleB985/DXA-to-3D.} | 3D Spine Shape Estimation from Single 2D DXA | [
"Bourigault, Emmanuelle",
"Jamaludin, Amir",
"Zisserman, Andrew"
] | Conference | [
"https://github.com/EmmanuelleB985/DXA-to-3D"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 575 |
||
null | https://papers.miccai.org/miccai-2024/paper/0640_paper.pdf | @InProceedings{ Che_Can_MICCAI2024,
author = { Chen, Jiawei and Jiang, Yue and Yang, Dingkang and Li, Mingcheng and Wei, Jinjie and Qian, Ziyun and Zhang, Lihua },
title = { { Can LLMs’ Tuning Methods Work in Medical Multimodal Domain? } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | While Large Language Models (LLMs) excel in world knowledge understanding, adapting them to specific subfields requires precise adjustments. Due to the model’s vast scale, traditional global fine-tuning methods for large models can be computationally expensive and impact generalization. To address this challenge, a range of innovative Parameters-Efficient Fine-Tuning (PEFT) methods have emerged and achieved remarkable success in both LLMs and Large Vision-Language Models (LVLMs). In the medical domain, fine-tuning a medical Vision-Language Pretrained (VLP) model is essential for adapting it to specific tasks. Can the fine-tuning methods for large models be transferred to the medical field to enhance transfer learning efficiency? In this paper, we delve into the fine-tuning methods of LLMs and conduct extensive experiments to investigate the impact of fine-tuning methods for large models on the existing multimodal model in the medical domain from the training data level and the model structure level. We show the different impacts of fine-tuning methods for large models on medical VLMs and develop the most efficient ways to fine-tune medical VLP models. We hope this research can guide medical domain researchers in optimizing VLMs’ training costs, fostering the broader application of VLMs in healthcare fields. The code and dataset have been released at https://github.com/TIMMY-CHAN/MILE. | Can LLMs’ Tuning Methods Work in Medical Multimodal Domain? | [
"Chen, Jiawei",
"Jiang, Yue",
"Yang, Dingkang",
"Li, Mingcheng",
"Wei, Jinjie",
"Qian, Ziyun",
"Zhang, Lihua"
] | Conference | [
"https://github.com/TIMMY-CHAN/MILE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 576 |
||
null | https://papers.miccai.org/miccai-2024/paper/1025_paper.pdf | @InProceedings{ Yua_HecVL_MICCAI2024,
author = { Yuan, Kun and Srivastav, Vinkle and Navab, Nassir and Padoy, Nicolas },
title = { { HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical Phase Recognition } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Natural language could play an important role in developing generalist surgical models by providing a broad source of supervision from raw texts. This flexible form of supervision can enable the model’s transferability across datasets and tasks as natural language can be used to reference learned visual concepts or describe new ones. In this work, we present HecVL, a novel hierarchical video-language pretraining approach for building a generalist surgical model. Specifically, we construct a hierarchical video-text paired dataset by pairing the surgical lecture video with three hierarchical levels of texts: at clip-level, atomic actions using transcribed audio texts; at phase-level, conceptual text summaries; and at video-level, overall abstract text of the surgical procedure. Then, we propose a novel fine-to-coarse contrastive learning framework that learns separate embedding spaces for the three video-text hierarchies using a single model. By disentangling embedding spaces of different hierarchical levels, the learned multi-modal representations encode short-term and long-term surgical concepts in the same model. Thanks to the injected textual semantics, we demonstrate that the HecVL approach can enable zero-shot surgical phase recognition without any human annotation. Furthermore, we show that the same HecVL model for surgical phase recognition can be transferred across different surgical procedures and medical centers. The source code will be made available at https://github.com/CAMMA-public/HecVL | HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical Phase Recognition | [
"Yuan, Kun",
"Srivastav, Vinkle",
"Navab, Nassir",
"Padoy, Nicolas"
] | Conference | 2405.10075 | [
"https://github.com/CAMMA-public/HecVL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 577 |
|
null | https://papers.miccai.org/miccai-2024/paper/2133_paper.pdf | @InProceedings{ Kon_Achieving_MICCAI2024,
author = { Kong, Qingpeng and Chiu, Ching-Hao and Zeng, Dewen and Chen, Yu-Jen and Ho, Tsung-Yi and Hu, Jingtong and Shi, Yiyu },
title = { { Achieving Fairness Through Channel Pruning for Dermatological Disease Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Numerous studies have revealed that deep learning-based medical image classification models may exhibit bias towards specific demographic attributes, such as race, gender, and age. Existing bias mitigation methods often achieve a high level of fairness at the cost of significant accuracy degradation. In response to this challenge, we propose an innovative and adaptable Soft Nearest Neighbor Loss-based channel pruning framework, which achieves fairness through channel pruning. Traditionally, channel pruning is utilized to accelerate neural network inference. However, our work demonstrates that pruning can also be a potent tool for achieving fairness. Our key insight is that different channels in a layer contribute differently to the accuracy of different groups. By selectively pruning critical channels that lead to the accuracy difference between the privileged and unprivileged groups, we can effectively improve fairness without sacrificing accuracy significantly. Experiments conducted on two skin lesion diagnosis datasets across multiple sensitive attributes validate the effectiveness of our method in achieving a state-of-the-art trade-off between accuracy and fairness. Our code is available at https://github.com/Kqp1227/Sensitive-Channel-Pruning. | Achieving Fairness Through Channel Pruning for Dermatological Disease Diagnosis | [
"Kong, Qingpeng",
"Chiu, Ching-Hao",
"Zeng, Dewen",
"Chen, Yu-Jen",
"Ho, Tsung-Yi",
"Hu, Jingtong",
"Shi, Yiyu"
] | Conference | 2405.08681 | [
"https://github.com/Kqp1227/Sensitive-Channel-Pruning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 578 |
|
null | https://papers.miccai.org/miccai-2024/paper/0761_paper.pdf | @InProceedings{ Che_WsiCaption_MICCAI2024,
author = { Chen, Pingyi and Li, Honglin and Zhu, Chenglu and Zheng, Sunyi and Shui, Zhongyi and Yang, Lin },
title = { { WsiCaption: Multiple Instance Generation of Pathology Reports for Gigapixel Whole-Slide Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Whole slide images are the foundation of digital pathology for the diagnosis and treatment of carcinomas. Writing pathology reports is laborious and error-prone for inexperienced pathologists. To reduce the workload and improve clinical automation, we investigate how to generate pathology reports given whole slide images. On the data end, we curated the largest WSI-text dataset (PathText). Specifically, we collected nearly 10000 high-quality WSI-text pairs for visual-language models by recognizing and cleaning pathology reports which narrate diagnostic slides in TCGA. On the model end, we propose the multiple instance generative model (MI-Gen) which can produce pathology reports for gigapixel WSIs. We benchmark our model on the largest subset of PathText. Experimental results show our model can generate pathology reports which contain multiple clinical clues and achieve competitive performance on certain slide-level tasks. We observe that simple semantic extraction from the pathology reports can achieve the best performance (F1 score of 0.838) on BRCA subtyping, surpassing previous state-of-the-art approaches. Our collected dataset and related code are available at https://github.com/cpystan/Wsi-Caption. | WsiCaption: Multiple Instance Generation of Pathology Reports for Gigapixel Whole-Slide Images | [
"Chen, Pingyi",
"Li, Honglin",
"Zhu, Chenglu",
"Zheng, Sunyi",
"Shui, Zhongyi",
"Yang, Lin"
] | Conference | 2311.16480 | [
"https://github.com/cpystan/Wsi-Caption/tree/master"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 579 |
|
null | https://papers.miccai.org/miccai-2024/paper/1423_paper.pdf | @InProceedings{ Ma_Adaptive_MICCAI2024,
author = { Ma, Siteng and Du, Honghui and Curran, Kathleen M. and Lawlor, Aonghus and Dong, Ruihai },
title = { { Adaptive Curriculum Query Strategy for Active Learning in Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Deep active learning (AL) is commonly used to reduce labeling costs in medical image analysis. Deep learning (DL) models typically exhibit a preference for learning from easy data and simple patterns before they learn from complex ones. However, existing AL methods often employ a fixed query strategy for sample selection, which may cause the model to focus too closely on challenging-to-classify data. The result is a deceleration of the convergence of DL models and an increase in the amount of labeled data required to train them. To address this issue, we propose a novel Adaptive Curriculum Query Strategy for AL in Medical Image Classification. During the training phase, our strategy leverages Curriculum Learning principles to initially prioritize the selection of a diverse range of samples to cover various difficulty levels, facilitating rapid model convergence. Once the distribution of the selected samples closely matches that of the entire dataset, the query strategy shifts its focus towards difficult-to-classify data based on uncertainty. This novel approach enables the model to achieve superior performance with fewer labeled samples. We perform extensive experiments demonstrating that our model not only requires fewer labeled samples but outperforms state-of-the-art models in terms of efficiency and effectiveness. The code is publicly available at https://github.com/HelenMa9998/Easy_hard_AL. | Adaptive Curriculum Query Strategy for Active Learning in Medical Image Classification | [
"Ma, Siteng",
"Du, Honghui",
"Curran, Kathleen M.",
"Lawlor, Aonghus",
"Dong, Ruihai"
] | Conference | [
"https://github.com/HelenMa9998/Easy_hard_AL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 580 |
||
null | https://papers.miccai.org/miccai-2024/paper/1733_paper.pdf | @InProceedings{ Xia_GMoD_MICCAI2024,
author = { Xiang, ZhiPeng and Cui, ShaoGuo and Shang, CaoZhi and Jiang, Jingfeng and Zhang, Liqiang },
title = { { GMoD: Graph-driven Momentum Distillation Framework with Active Perception of Disease Severity for Radiology Report Generation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Automatic radiology report generation is a challenging task that seeks to produce comprehensive and semantically consistent detailed descriptions from radiography (e.g., X-ray), alleviating the heavy workload of radiologists. Previous work explored the introduction of diagnostic information through multi-label classification. However, such methods can only provide a binary positive or negative classification result, leading to the omission of critical information regarding disease severity. We propose a Graph-driven Momentum Distillation (GMoD) approach to guide the model in actively perceiving the apparent disease severity implicitly conveyed in each radiograph. The proposed GMoD introduces two novel modules: Graph-based Topic Classifier (GTC) and Momentum Topic-Signal Distiller (MTD). Specifically, GTC combines symptoms and lung diseases to build topic maps and focuses on potential connections between them. MTD constrains the GTC to focus on the confidence of each disease being negative or positive by constructing pseudo labels, and then uses the multi-label classification results to assist the model in perceiving joint features to generate a more accurate report. Extensive experiments and analyses on IU-Xray and MIMIC-CXR benchmark datasets demonstrate that our GMoD outperforms state-of-the-art methods. Our code is available at https://github.com/xzp9999/GMoD-mian. | GMoD: Graph-driven Momentum Distillation Framework with Active Perception of Disease Severity for Radiology Report Generation | [
"Xiang, ZhiPeng",
"Cui, ShaoGuo",
"Shang, CaoZhi",
"Jiang, Jingfeng",
"Zhang, Liqiang"
] | Conference | [
"https://github.com/xzp9999/GMoD-mian"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 581 |
||
null | https://papers.miccai.org/miccai-2024/paper/2165_paper.pdf | @InProceedings{ Shi_Integrative_MICCAI2024,
author = { Shi, Zhan and Zhang, Jingwei and Kong, Jun and Wang, Fusheng },
title = { { Integrative Graph-Transformer Framework for Histopathology Whole Slide Image Representation and Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In digital pathology, the multiple instance learning (MIL) strategy is widely used in the weakly supervised histopathology whole slide image (WSI) classification task where giga-pixel WSIs are only labeled at the slide level. However, existing attention-based MIL approaches often overlook contextual information and intrinsic spatial relationships between neighboring tissue tiles, while graph-based MIL frameworks have limited power to recognize the long-range dependencies. In this paper, we introduce the integrative graph-transformer framework that simultaneously captures the context-aware relational features and global WSI representations through a novel Graph Transformer Integration (GTI) block. Specifically, each GTI block consists of a Graph Convolutional Network (GCN) layer modeling neighboring relations at the local instance level and an efficient global attention model capturing comprehensive global information from extensive feature embeddings. Extensive experiments on three publicly available WSI datasets: TCGA-NSCLC, TCGA-RCC and BRIGHT, demonstrate the superiority of our approach over current state-of-the-art MIL methods, achieving an improvement of 1.0% to 2.6% in accuracy and 0.7%-1.6% in AUROC. | Integrative Graph-Transformer Framework for Histopathology Whole Slide Image Representation and Classification | [
"Shi, Zhan",
"Zhang, Jingwei",
"Kong, Jun",
"Wang, Fusheng"
] | Conference | 2403.18134 | [
"https://github.com/StonyBrookDB/igt-wsi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 582 |
|
null | https://papers.miccai.org/miccai-2024/paper/1637_paper.pdf | @InProceedings{ Ma_PX2Tooth_MICCAI2024,
author = { Ma, Wen and Wu, Huikai and Xiao, Zikai and Feng, Yang and Wu, Jian and Liu, Zuozhu },
title = { { PX2Tooth: Reconstructing the 3D Point Cloud Teeth from a Single Panoramic X-ray } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Reconstructing the 3D anatomical structures of the oral cavity, which originally reside in the cone-beam CT (CBCT), from a single 2D Panoramic X-ray (PX) remains a critical yet challenging task, as it can effectively reduce radiation risks and treatment costs during diagnosis in digital dentistry. However, current methods are either error-prone or only trained/evaluated on small-scale datasets (less than 50 cases), resulting in compromised trustworthiness. In this paper, we propose PX2Tooth, a novel approach to reconstruct 3D teeth using a single PX image with a two-stage framework. First, we design the PXSegNet to segment the permanent teeth from the PX images, providing clear positional, morphological, and categorical information for each tooth. Subsequently, we design a novel tooth generation network (TGNet) that learns to transform random point clouds into 3D teeth. TGNet integrates the segmented patch information and introduces a Prior Fusion Module (PFM) to enhance the generation quality, especially in the root apex region. Moreover, we construct a dataset comprising 499 pairs of CBCT and Panoramic X-rays. Extensive experiments demonstrate that PX2Tooth can achieve an Intersection over Union (IoU) of 0.793, significantly surpassing previous methods, underscoring the great potential of artificial intelligence in digital dentistry. | PX2Tooth: Reconstructing the 3D Point Cloud Teeth from a Single Panoramic X-ray | [
"Ma, Wen",
"Wu, Huikai",
"Xiao, Zikai",
"Feng, Yang",
"Wu, Jian",
"Liu, Zuozhu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 583 |
||
null | https://papers.miccai.org/miccai-2024/paper/1472_paper.pdf | @InProceedings{ Jia_M4oE_MICCAI2024,
author = { Jiang, Yufeng and Shen, Yiqing },
title = { { M4oE: A Foundation Model for Medical Multimodal Image Segmentation with Mixture of Experts } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Medical imaging data is inherently heterogeneous across different modalities and clinical centers, posing unique challenges for developing generalizable foundation models. Conventional approaches entail training distinct models per dataset or using a shared encoder with modality-specific decoders. However, these approaches incur heavy computational overheads and suffer from poor scalability. To address these limitations, we propose the Medical Multi-Modal Mixture of Experts (M4oE) framework, leveraging the SwinUNet architecture. Specifically, M4oE comprises modality-specific experts, each separately initialized to learn features encoding domain knowledge. Subsequently, a gating network is integrated during fine-tuning to dynamically modulate each expert’s contribution to the collective predictions. This enhances model interpretability as well as the generalization ability while retaining expertise specialization. Simultaneously, the M4oE architecture amplifies the model’s parallel processing capabilities, and it also ensures the model’s adaptation to new modalities with ease. Experiments across three modalities reveal that M4oE can achieve improvements of 3.45% over STU-Net-L, 5.11% over MED3D, and 11.93% over SAM-Med2D across the MICCAI FLARE22, AMOS2022, and ATLAS2023 datasets. Moreover, M4oE showcases a significant reduction in training duration, with 7 hours less, while maintaining a parameter count that is only 30% of its compared methods. The code is available at https://github.com/JefferyJiang-YF/M4oE. | M4oE: A Foundation Model for Medical Multimodal Image Segmentation with Mixture of Experts | [
"Jiang, Yufeng",
"Shen, Yiqing"
] | Conference | [
"https://github.com/JefferyJiang-YF/M4oE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 584 |
||
null | https://papers.miccai.org/miccai-2024/paper/1272_paper.pdf | @InProceedings{ Hou_ConceptAttention_MICCAI2024,
author = { Hou, Junlin and Xu, Jilan and Chen, Hao },
title = { { Concept-Attention Whitening for Interpretable Skin Lesion Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | The black-box nature of deep learning models has raised concerns about their interpretability for successful deployment in real-world clinical applications. To address the concerns, eXplainable Artificial Intelligence (XAI) aims to provide clear and understandable explanations of the decision-making process. In the medical domain, concepts such as attributes of lesions or abnormalities serve as key evidence for deriving diagnostic results. Existing concept-based models mainly depend on concepts that appear independently and require fine-grained concept annotations such as bounding boxes. However, a medical image usually contains multiple concepts, and the fine-grained concept annotations are difficult to acquire. In this paper, we aim to interpret representations in deep neural networks by aligning the axes of the latent space with known concepts of interest. We propose a novel Concept-Attention Whitening (CAW) framework for interpretable skin lesion diagnosis. CAW is comprised of a disease diagnosis branch and a concept alignment branch. In the former branch, we train a convolutional neural network (CNN) with an inserted CAW layer to perform skin lesion diagnosis. The CAW layer decorrelates features and aligns image features to conceptual meanings via an orthogonal matrix. In the latter branch, the orthogonal matrix is calculated under the guidance of the concept attention mask. We particularly introduce a weakly-supervised concept mask generator that only leverages coarse concept labels for filtering local regions that are relevant to certain concepts, improving the optimization of the orthogonal matrix. Extensive experiments on two public skin lesion diagnosis datasets demonstrated that CAW not only enhanced interpretability but also maintained a state-of-the-art diagnostic performance. | Concept-Attention Whitening for Interpretable Skin Lesion Diagnosis | [
"Hou, Junlin",
"Xu, Jilan",
"Chen, Hao"
] | Conference | 2404.05997 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 585 |
|
null | https://papers.miccai.org/miccai-2024/paper/2014_paper.pdf | @InProceedings{ Li_Anatomical_MICCAI2024,
author = { Li, Qingqiu and Yan, Xiaohan and Xu, Jilan and Yuan, Runtian and Zhang, Yuejie and Feng, Rui and Shen, Quanli and Zhang, Xiaobo and Wang, Shujun },
title = { { Anatomical Structure-Guided Medical Vision-Language Pre-training } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Learning medical visual representations through vision-language pre-training has reached remarkable progress. Despite the promising performance, it still faces challenges, i.e., local alignment lacks interpretability and clinical relevance, and the insufficient internal and external representation learning of image-report pairs. To address these issues, we propose an Anatomical Structure-Guided (ASG) framework. Specifically, we parse raw reports into triplets <anatomical region, finding, existence>, and fully utilize each element as supervision to enhance representation learning. For anatomical region, we design an automatic anatomical region-sentence alignment paradigm in collaboration with radiologists, considering them as the minimum semantic units to explore fine-grained local alignment. For finding and existence, we regard them as image tags, applying an image-tag recognition decoder to associate image features with their respective tags within each sample and constructing soft labels for contrastive learning to improve the semantic association of different image-report pairs. We evaluate the proposed ASG framework on two downstream tasks, including five public benchmarks. Experimental results demonstrate that our method outperforms the state-of-the-art methods. Our code is available at https://asgmvlp.github.io. | Anatomical Structure-Guided Medical Vision-Language Pre-training | [
"Li, Qingqiu",
"Yan, Xiaohan",
"Xu, Jilan",
"Yuan, Runtian",
"Zhang, Yuejie",
"Feng, Rui",
"Shen, Quanli",
"Zhang, Xiaobo",
"Wang, Shujun"
] | Conference | 2403.09294 | [
"https://github.com/ASGMVLP/ASGMVLP_CODE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 586 |
|
null | https://papers.miccai.org/miccai-2024/paper/1841_paper.pdf | @InProceedings{ Liu_Featureprompting_MICCAI2024,
author = { Liu, Xueyu and Shi, Guangze and Wang, Rui and Lai, Yexin and Zhang, Jianan and Sun, Lele and Yang, Quan and Wu, Yongfei and Li, Ming and Han, Weixia and Zheng, Wen },
title = { { Feature-prompting GBMSeg: One-Shot Reference Guided Training-Free Prompt Engineering for Glomerular Basement Membrane Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1841_paper.pdf | Feature-prompting GBMSeg: One-Shot Reference Guided Training-Free Prompt Engineering for Glomerular Basement Membrane Segmentation | [
"Liu, Xueyu",
"Shi, Guangze",
"Wang, Rui",
"Lai, Yexin",
"Zhang, Jianan",
"Sun, Lele",
"Yang, Quan",
"Wu, Yongfei",
"Li, Ming",
"Han, Weixia",
"Zheng, Wen"
] | Conference | 2406.16271 | [
"https://github.com/SnowRain510/GBMSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 587 |
|
null | https://papers.miccai.org/miccai-2024/paper/2585_paper.pdf | @InProceedings{ Yua_Adapting_MICCAI2024,
author = { Yuan, Zhouhang and Fang, Zhengqing and Huang, Zhengxing and Wu, Fei and Yao, Yu-Feng and Li, Yingming },
title = { { Adapting Pre-trained Generative Model to Medical Image for Data Augmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Deep learning-based medical image recognition requires a large amount of expert-annotated data. As medical image data is often scarce and class-imbalanced, many researchers have tried to synthesize medical images as training samples. However, the quality of the generated data determines the effectiveness of the method, which in turn is related to the amount of data available for training. To produce high-quality data augmentation in few-shot settings, we try to adapt large-scale pre-trained generative models to medical images. Specifically, we adapt MAGE (a masked image modeling-based generative model) as the pre-trained generative model, and then an Adapter is implemented within each layer to learn class-wise medical knowledge. In addition, to reduce the complexity caused by high-dimensional latent space, we introduce a vector quantization loss as a constraint during fine-tuning. The experiments are conducted on three different medical image datasets. The results show that our methods produce more realistic augmentation samples than existing generative models, with which the classification accuracy increased by 5.16%, 2.74% and 3.62% on the three datasets respectively. The results demonstrate that adapting pre-trained generative models for medical image synthesis is a promising approach in limited-data situations. | Adapting Pre-trained Generative Model to Medical Image for Data Augmentation | [
"Yuan, Zhouhang",
"Fang, Zhengqing",
"Huang, Zhengxing",
"Wu, Fei",
"Yao, Yu-Feng",
"Li, Yingming"
] | Conference | [
"https://github.com/YuanZhouhang/VQ-MAGE-Med"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 588 |
||
null | https://papers.miccai.org/miccai-2024/paper/1123_paper.pdf | @InProceedings{ She_DataAlgorithmArchitecture_MICCAI2024,
author = { Sheng, Yi and Yang, Junhuan and Li, Jinyang and Alaina, James and Xu, Xiaowei and Shi, Yiyu and Hu, Jingtong and Jiang, Weiwen and Yang, Lei },
title = { { Data-Algorithm-Architecture Co-Optimization for Fair Neural Networks on Skin Lesion Dataset } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | As Artificial Intelligence (AI) increasingly integrates into our daily lives, fairness has emerged as a critical concern, particularly in medical AI, where datasets often reflect inherent biases due to social factors like the underrepresentation of marginalized communities and socioeconomic barriers to data collection. Traditional approaches to mitigating these biases have focused on data augmentation and the development of fairness-aware training algorithms. However, this paper argues that the architecture of neural networks, a core component of Machine Learning (ML), plays a crucial role in ensuring fairness. We demonstrate that addressing fairness effectively requires a holistic approach that simultaneously considers data, algorithms, and architecture. Utilizing Automated ML (AutoML) technology, specifically Neural Architecture Search (NAS), we introduce a novel framework, BiaslessNAS, designed to achieve fair outcomes in analyzing skin lesion datasets. BiaslessNAS incorporates fairness considerations at every stage of the NAS process, leading to the identification of neural networks that are not only more accurate but also significantly fairer. Our experiments show that BiaslessNAS achieves a 2.55% increase in accuracy and a 65.50% improvement in fairness compared to traditional NAS methods, underscoring the importance of integrating fairness into neural network architecture for better outcomes in medical AI applications. | Data-Algorithm-Architecture Co-Optimization for Fair Neural Networks on Skin Lesion Dataset | [
"Sheng, Yi",
"Yang, Junhuan",
"Li, Jinyang",
"Alaina, James",
"Xu, Xiaowei",
"Shi, Yiyu",
"Hu, Jingtong",
"Jiang, Weiwen",
"Yang, Lei"
] | Conference | 2407.13896 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 589 |
|
null | https://papers.miccai.org/miccai-2024/paper/2390_paper.pdf | @InProceedings{ Che_AUnified_MICCAI2024,
author = { Chen, Boqi and Oliva, Junier and Niethammer, Marc },
title = { { A Unified Model for Longitudinal Multi-Modal Multi-View Prediction with Missingness } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Medical records often consist of different modalities, such as images, text, and tabular information. Integrating all modalities offers a holistic view of a patient’s condition, while analyzing them longitudinally provides a better understanding of disease progression. However, real-world longitudinal medical records present challenges: 1) patients may lack some or all of the data for a specific timepoint, and 2) certain modalities or views might be absent for all patients during a particular period. In this work, we introduce a unified model for longitudinal multi-modal multi-view prediction with missingness. Our method allows as many timepoints as desired for input, and aims to leverage all available data, regardless of their availability. We conduct extensive experiments on the knee osteoarthritis dataset from the Osteoarthritis Initiative (OAI) for pain and Kellgren-Lawrence grade (KLG) prediction at a future timepoint. We demonstrate the effectiveness of our method by comparing results from our unified model to specific models that use the same modality and view combinations during training and evaluation. We also show the benefit of having extended temporal data and provide post-hoc analysis for a deeper understanding of each modality/view’s importance for different tasks. | A Unified Model for Longitudinal Multi-Modal Multi-View Prediction with Missingness | [
"Chen, Boqi",
"Oliva, Junier",
"Niethammer, Marc"
] | Conference | 2403.12211 | [
"https://github.com/uncbiag/UniLMMV"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 590 |
|
null | https://papers.miccai.org/miccai-2024/paper/2247_paper.pdf | @InProceedings{ Das_Decoupled_MICCAI2024,
author = { Das, Ankit and Gautam, Chandan and Cholakkal, Hisham and Agrawal, Pritee and Yang, Feng and Savitha, Ramasamy and Liu, Yong },
title = { { Decoupled Training for Semi-supervised Medical Image Segmentation with Worst-Case-Aware Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | While semi-supervised learning (SSL) has demonstrated remarkable success in natural image segmentation, tackling medical image segmentation with limited annotated data remains a highly relevant and challenging research problem. Many existing approaches rely on a shared network for learning from both labeled and unlabeled data, facing difficulties in fully exploiting labeled data due to interference from unreliable pseudo-labels. Additionally, they suffer from degradation in model quality resulting from training with unreliable pseudo-labels. To address these challenges, we propose a novel training strategy that uses two distinct decoders—one for labeled data and another for unlabeled data. This decoupling enhances the model’s ability to fully leverage the knowledge embedded within the labeled data. Moreover, we introduce an additional decoder, referred to as the ``worst-case-aware decoder,'' which indirectly assesses the potential worst-case scenario that might emerge from pseudo-label training. We employ adversarial training of the encoder to learn features aimed at avoiding this worst-case scenario. Our experimental results on three medical image segmentation datasets demonstrate that our method shows improvements in the range of 5.6%-28.10% (in terms of Dice score) compared to the state-of-the-art techniques. The source code is available at \url{https://github.com/thesupermanreturns/decoupled}. | Decoupled Training for Semi-supervised Medical Image Segmentation with Worst-Case-Aware Learning | [
"Das, Ankit",
"Gautam, Chandan",
"Cholakkal, Hisham",
"Agrawal, Pritee",
"Yang, Feng",
"Savitha, Ramasamy",
"Liu, Yong"
] | Conference | [
"https://github.com/thesupermanreturns/decoupled"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 591 |
||
null | https://papers.miccai.org/miccai-2024/paper/3097_paper.pdf | @InProceedings{ Par_CAPTUREGAN_MICCAI2024,
author = { Park, Chunsu and Kim, Seonho and Lee, DongEon and Lee, SiYeoul and Kambaluru, Ashok and Park, Chankue and Kim, MinWoo },
title = { { CAPTURE-GAN: Conditional Attribute Preservation through Unveiling Realistic GAN for artifact removal in dual-energy CT imaging } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | This study addresses the challenge of detecting bone marrow edema (BME) using dual-energy CT (DECT), a task complicated by the lower contrast DECT offers compared to MRI and the presence of artifacts inherent in the image formation process. Despite the advancements in AI-based solutions for image enhancement, achieving an artifact-free outcome in DECT remains difficult due to the impracticality of obtaining paired ground-truth and artifact-containing images for supervised learning. To overcome this, we explore unsupervised techniques such as CycleGAN and AttGAN for artifact removal, which, while effective in other domains, face challenges in DECT due to the similarity between artifact and pathological patterns. Our contribution, the Conditional Attribute Preservation through Unveiling Realistic GAN (CAPTURE-GAN), innovatively combines a generative model with conditional constraints through masking and classification models to not only minimize artifacts but also preserve the pathology of BME and the anatomical integrity of bone. By incorporating bone priors into CycleGAN and adding a disease classification network, CAPTURE-GAN significantly improves the specificity and sensitivity of BME detection in DECT imaging. Our approach demonstrates a substantial enhancement in generating artifact-free images, ensuring that critical diagnostic patterns are not obscured, thereby advancing the potential for DECT in diagnosing and localizing lesions accurately. | CAPTURE-GAN: Conditional Attribute Preservation through Unveiling Realistic GAN for artifact removal in dual-energy CT imaging | [
"Park, Chunsu",
"Kim, Seonho",
"Lee, DongEon",
"Lee, SiYeoul",
"Kambaluru, Ashok",
"Park, Chankue",
"Kim, MinWoo"
] | Conference | [
"https://github.com/pnu-amilab/CAPTURE-GAN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 592 |
||
null | https://papers.miccai.org/miccai-2024/paper/2749_paper.pdf | @InProceedings{ Lyu_MetaUNETR_MICCAI2024,
author = { Lyu, Pengju and Zhang, Jie and Zhang, Lei and Liu, Wenjian and Wang, Cheng and Zhu, Jianjun },
title = { { MetaUNETR: Rethinking Token Mixer Encoding for Efficient Multi-Organ Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The Transformer architecture and versatile CNN backbones have led to advanced progress in sequence modeling and dense prediction tasks. A critical development is the incorporation of different token mixing modules such as ConvNeXt and Swin Transformer. However, findings within the MetaFormer framework suggest these token mixers have a lesser influence on representation learning than the architecture itself. Yet, their impact on 3D medical images remains unclear, motivating our investigation into different token mixers (self-attention, convolution, MLP, recurrence, global filter, and Mamba) in 3D medical image segmentation architectures, and further prompting a reevaluation of the backbone architecture’s role to achieve a trade-off between accuracy and efficiency. In this paper, we propose a unified segmentation architecture, MetaUNETR, featuring a novel TriCruci layer that decomposes the token mixing processes along each spatial direction while simultaneously preserving precise positional information on its orthogonal plane. By employing the Centered Kernel Alignment (CKA) analysis on feature learning capabilities among these token mixers, we find that the overall architecture of the model, rather than any specific token mixers, plays a more crucial role in determining the model’s performance. Our method is validated across multiple benchmarks varying in size and scale, including the BTCV, AMOS, and AbdomenCT-1K datasets, achieving the top segmentation performance while reducing the model’s parameters by about 80% compared to the state-of-the-art method. This study provides insights for future research on the design and optimization of backbone architecture, steering towards more efficient foundational segmentation models. The source code is available at https://github.com/lyupengju/MetaUNETR. | MetaUNETR: Rethinking Token Mixer Encoding for Efficient Multi-Organ Segmentation | [
"Lyu, Pengju",
"Zhang, Jie",
"Zhang, Lei",
"Liu, Wenjian",
"Wang, Cheng",
"Zhu, Jianjun"
] | Conference | [
"https://github.com/lyupengju/MetaUNETR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 593 |
||
null | https://papers.miccai.org/miccai-2024/paper/3325_paper.pdf | @InProceedings{ Hua_Robustly_MICCAI2024,
author = { Huang, Peng and Hu, Shu and Peng, Bo and Zhang, Jiashu and Wu, Xi and Wang, Xin },
title = { { Robustly Optimized Deep Feature Decoupling Network for Fatty Liver Diseases Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Current medical image classification efforts mainly aim for higher average performance, often neglecting the balance between different classes. This can lead to significant differences in recognition accuracy between classes and obvious recognition weaknesses. Without the support of massive data, deep learning faces challenges in fine-grained classification of fatty liver. In this paper, we propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training. Firstly, we employ two iteratively compressed decouplers to perform supervised decoupling of common features and specific features related to fatty liver in abdominal ultrasound images. Subsequently, the decoupled features are concatenated with the original image after transforming the color space and are fed into the classifier. During adversarial training, we adaptively adjust the perturbation and balance the adversarial strength by the accuracy of each class. The model will eliminate recognition weaknesses by correctly classifying adversarial samples, thus improving recognition robustness. Finally, the accuracy of our method improved by 4.16%, achieving 82.95%. As demonstrated by extensive experiments, our method is a generalized learning framework that can be directly used to eliminate the recognition weaknesses of any classifier while improving its average performance. Code is available at https://github.com/HP-ML/MICCAI2024. | Robustly Optimized Deep Feature Decoupling Network for Fatty Liver Diseases Detection | [
"Huang, Peng",
"Hu, Shu",
"Peng, Bo",
"Zhang, Jiashu",
"Wu, Xi",
"Wang, Xin"
] | Conference | 2406.17338 | [
"https://github.com/HP-ML/MICCAI2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 594 |
|
null | https://papers.miccai.org/miccai-2024/paper/2276_paper.pdf | @InProceedings{ Hua_Uncovering_MICCAI2024,
author = { Huang, Yanquan and Dan, Tingting and Kim, Won Hwa and Wu, Guorong },
title = { { Uncovering Cortical Pathways of Prion-like Pathology Spreading in Alzheimer’s Disease by Neural Optimal Mass Transport } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Tremendous efforts have been made to investigate stereotypical patterns of tau aggregates in Alzheimer’s disease (AD), yet current positron emission tomography (PET) technology lacks the capability to quantify the dynamic spreading flows of tau propagation in disease progression, despite the fact that AD is characterized by the propagation of tau aggregates throughout the brain in a prion-like manner. We address this challenge by formulating the search for latent cortical tau propagation pathways as a well-studied physics model, the optimal mass transport (OMT) problem, where the dynamic behavior of tau spreading across longitudinal tau-PET scans is constrained by the geometry of the brain cortex. In this context, we present a variational framework for the dynamical system of tau propagation in the brain, where the spreading flow field is essentially a Wasserstein geodesic between two density distributions of spatial tau accumulation. Meanwhile, our variational framework provides a flexible approach to model the possible increase of tau aggregates and alleviate the issue of vanishing flows by introducing a total variation (TV) regularization on the flow field. Following the spirit of physics-informed deep models, we derive the governing equation of the new TV-based unbalanced OMT model and customize an explainable generative adversarial network to (1) parameterize the population-level OMT using the generator and (2) predict tau spreading flow for the unseen subject by the trained discriminator. We have evaluated the accuracy of our proposed model using the ADNI and OASIS datasets, focusing on its ability to herald future tau accumulation. Since our deep model follows the second law of thermodynamics, we further investigate the propagation mechanism of tau aggregates as AD advances. Compared to existing methodologies, our physics-informed approach delivers superior accuracy and interpretability, showcasing promising potential for uncovering novel neurobiological mechanisms. | Uncovering Cortical Pathways of Prion-like Pathology Spreading in Alzheimer’s Disease by Neural Optimal Mass Transport | [
"Huang, Yanquan",
"Dan, Tingting",
"Kim, Won Hwa",
"Wu, Guorong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 595 |
||
null | https://papers.miccai.org/miccai-2024/paper/0703_paper.pdf | @InProceedings{ Fen_Enhancing_MICCAI2024,
author = { Feng, Chun-Mei },
title = { { Enhancing Label-efficient Medical Image Segmentation with Text-guided Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Aside from offering state-of-the-art performance in medical image generation, denoising diffusion probabilistic models (DPM) can also serve as a representation learner to capture semantic information and potentially be used as an image representation for downstream tasks, e.g., segmentation. However, these latent semantic representations rely heavily on labor-intensive pixel-level annotations as supervision, limiting the usability of DPM in medical image segmentation. To address this limitation, we propose an enhanced diffusion segmentation model, called TextDiff, that improves semantic representation through inexpensive medical text annotations, thereby explicitly establishing semantic representation and language correspondence for diffusion models. Concretely, TextDiff extracts intermediate activations of the Markov step of the reverse diffusion process in a pretrained diffusion model on large-scale natural images and learns additional expert knowledge by combining them with complementary and readily available diagnostic text information. TextDiff freezes the dual-branch multi-modal structure and mines the latent alignment of semantic features in diffusion models with diagnostic descriptions by only training the cross-attention mechanism and pixel classifier, making it possible to enhance semantic representation with inexpensive text. Extensive experiments on public QaTa-COVID19 and MoNuSeg datasets show that our TextDiff is significantly superior to the state-of-the-art multi-modal segmentation methods with only a few training samples. Our code and models will be publicly available. | Enhancing Label-efficient Medical Image Segmentation with Text-guided Diffusion Models | [
"Feng, Chun-Mei"
] | Conference | 2407.05323 | [
"https://github.com/chunmeifeng/TextDiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 596 |
|
null | https://papers.miccai.org/miccai-2024/paper/3664_paper.pdf | @InProceedings{ Yu_PET_MICCAI2024,
author = { Yu, Boxiao and Ozdemir, Savas and Dong, Yafei and Shao, Wei and Shi, Kuangyu and Gong, Kuang },
title = { { PET Image Denoising Based on 3D Denoising Diffusion Probabilistic Model: Evaluations on Total-Body Datasets } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Due to various physical degradation factors and limited photon counts detected, obtaining high-quality images from low-dose Positron emission tomography (PET) scans is challenging. The Denoising Diffusion Probabilistic Model (DDPM), an advanced distribution learning-based generative model, has shown promising performance across various computer-vision tasks. However, currently DDPM is mainly investigated in 2D mode, which has limitations for PET image denoising, as PET is usually acquired, reconstructed, and analyzed in 3D mode. In this work, we proposed a 3D DDPM method for PET image denoising, which employed a 3D convolutional network to train the score function, enabling the network to learn 3D distribution. The total-body 18F-FDG PET datasets acquired from the Siemens Biograph Vision Quadra scanner (axial field of view > 1m) were employed to evaluate the 3D DDPM method, as these total-body datasets needed 3D operations the most to leverage the rich information from different axial slices. All models were trained on 1/20 low-dose images and then evaluated on 1/4, 1/20, and 1/50 low-dose images, respectively. Experimental results indicated that 3D DDPM significantly outperformed 2D DDPM and 3D UNet in qualitative and quantitative assessments, capable of recovering finer structures and more accurate edge contours from low-quality PET images. Moreover, 3D DDPM revealed greater robustness when there were noise level mismatches between training and testing data. Finally, comparing 3D DDPM with 2D DDPM in terms of uncertainty revealed 3D DDPM’s higher confidence in reproducibility. | PET Image Denoising Based on 3D Denoising Diffusion Probabilistic Model: Evaluations on Total-Body Datasets | [
"Yu, Boxiao",
"Ozdemir, Savas",
"Dong, Yafei",
"Shao, Wei",
"Shi, Kuangyu",
"Gong, Kuang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 597 |
||
null | https://papers.miccai.org/miccai-2024/paper/0117_paper.pdf | @InProceedings{ Gao_Aligning_MICCAI2024,
author = { Gao, Yunhe and Gu, Difei and Zhou, Mu and Metaxas, Dimitris },
title = { { Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Although explainability is essential in clinical diagnosis, most deep learning models still function as black boxes without elucidating their decision-making process. In this study, we investigate explainable model development that can mimic the decision-making process of human experts by fusing the domain knowledge of explicit diagnostic criteria. We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis. Explicd initiates its process by querying domain knowledge from either large language models (LLMs) or human experts to establish diagnostic criteria across various concept axes (e.g., color, shape, texture, or specific patterns of diseases). By leveraging a pretrained vision-language model, Explicd injects these criteria into the embedding space as knowledge anchors, thereby facilitating the learning of corresponding visual concepts within medical images. The final diagnostic outcome is determined based on the similarity scores between the encoded visual concepts and the textual criteria embeddings. Through extensive evaluation on five medical image classification benchmarks, Explicd demonstrates its inherent explainability and also improves classification performance compared to traditional black-box models. | Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification | [
"Gao, Yunhe",
"Gu, Difei",
"Zhou, Mu",
"Metaxas, Dimitris"
] | Conference | 2406.05596 | [
"https://github.com/yhygao/Explicd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 598 |
|
null | https://papers.miccai.org/miccai-2024/paper/1027_paper.pdf | @InProceedings{ Tho_EchoNarrator_MICCAI2024,
author = { Thomas, Sarina and Cao, Qing and Novikova, Anna and Kulikova, Daria and Ben-Yosef, Guy },
title = { { EchoNarrator: Generating natural text explanations for ejection fraction predictions } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Ejection fraction (EF) of the left ventricle (LV) is considered one of the most important measurements for diagnosing acute heart failure and can be estimated during cardiac ultrasound acquisition. While recent deep learning methods successfully estimate EF values, the proposed models often lack an explanation for the prediction. However, providing clear and intuitive explanations for clinical measurement predictions would increase the trust of cardiologists in these models.
In this paper, we explore predicting EF measurements with Natural Language Explanation (NLE). We propose a model that in a single forward pass combines estimation of the LV contour over multiple frames, together with a set of modules and routines for computing various motion and shape attributes that are associated with ejection fraction. It then feeds the attributes into a large language model to generate text that helps to explain the network’s outcome in a human-like manner. We provide experimental evaluation of our explanatory output, as well as EF prediction, and show that our model can provide EF comparable to state-of-the-art together with meaningful and accurate natural language explanation to the prediction. The project page can be found at https://github.com/guybenyosef/EchoNarrator . | EchoNarrator: Generating natural text explanations for ejection fraction predictions | [
"Thomas, Sarina",
"Cao, Qing",
"Novikova, Anna",
"Kulikova, Daria",
"Ben-Yosef, Guy"
] | Conference | [
"https://github.com/guybenyosef/EchoNarrator"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 599 |