Datasets:

Column schema (name, dtype, observed range):

  bibtex_url                  null
  proceedings                 string, length 58
  bibtext                     string, length 511–974
  abstract                    string, length 92–2k
  title                       string, length 30–207
  authors                     sequence, 1–22 items
  id                          string, 1 distinct value
  arxiv_id                    string, length 0–10
  GitHub                      sequence, 1 item
  paper_page                  string, 14 distinct values
  n_linked_authors            int64, -1 to 1
  upvotes                     int64, -1 to 1
  num_comments                int64, -1 to 0
  n_authors                   int64, -1 to 10
  Models                      sequence, 0–4 items
  Datasets                    sequence, 0–1 items
  Spaces                      sequence, 0 items
  old_Models                  sequence, 0–4 items
  old_Datasets                sequence, 0–1 items
  old_Spaces                  sequence, 0 items
  paper_page_exists_pre_conf  int64, 0 or 1
  type                        string, 2 distinct values
  unique_id                   int64, 0–855
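Each raw record below follows the column order of the schema above, one field per line, with null marking an empty bibtex_url. As a minimal sketch, assuming the table is hosted as a Hugging Face dataset (the repository id below is a placeholder, not the real name), it can be loaded and inspected with the `datasets` library:

```python
# Minimal loading sketch. The repository id is a placeholder assumption;
# replace it with the actual dataset name.
from datasets import load_dataset

ds = load_dataset("username/miccai-2024-papers", split="train")  # hypothetical repo id

print(ds.features)   # should mirror the column schema listed above
print(len(ds))       # number of paper records

row = ds[0]
print(row["title"])        # e.g. "Large-Scale 3D Infant Face Model"
print(row["proceedings"])  # link to the MICCAI 2024 paper PDF
print(row["type"])         # presentation type, "Poster" or "Oral"
```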
null
https://papers.miccai.org/miccai-2024/paper/2260_paper.pdf
@InProceedings{ Sch_LargeScale_MICCAI2024, author = { Schnabel, Till N. and Lill, Yoriko and Benitez, Benito K. and Nalabothu, Prasad and Metzler, Philipp and Mueller, Andreas A. and Gross, Markus and Gözcü, Baran and Solenthaler, Barbara }, title = { { Large-Scale 3D Infant Face Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Learned 3-dimensional face models have emerged as valuable tools for statistically modeling facial variations, facilitating a wide range of applications in computer graphics, computer vision, and medicine. While these models have been extensively developed for adult faces, research on infant face models remains sparse, limited to a few models trained on small datasets, none of which are publicly available. We propose a novel approach to address this gap by developing a large-scale 3D INfant FACE model (INFACE) using a diverse set of face scans. By harnessing uncontrolled and incomplete data, INFACE surpasses previous efforts in both scale and accessibility. Notably, it represents the first publicly available shape model of its kind, facilitating broader adoption and further advancements in the field. We showcase the versatility of our learned infant face model through multiple potential clinical applications, including shape and appearance completion for mesh cleaning and treatment planning, as well as 3D face reconstruction from images captured in uncontrolled environments. By disentangling expression and identity, we further enable the neutralization of facial features — a crucial capability given the unpredictable nature of infant scanning.
Large-Scale 3D Infant Face Model
[ "Schnabel, Till N.", "Lill, Yoriko", "Benitez, Benito K.", "Nalabothu, Prasad", "Metzler, Philipp", "Mueller, Andreas A.", "Gross, Markus", "Gözcü, Baran", "Solenthaler, Barbara" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
700
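The records are plain field values, so simple aggregation covers most queries. The sketch below, which assumes the `ds` object from the loading snippet above, counts presentation types and collects the GitHub and arXiv references that are present (empty strings in GitHub and arxiv_id act as placeholders):

```python
# Sketch of simple aggregation over the records; assumes `ds` from the
# loading snippet above.
from collections import Counter

# Count Poster vs. Oral presentations.
type_counts = Counter(row["type"] for row in ds)
print(type_counts)

# Collect papers that link a GitHub repository, together with their arXiv id.
with_code = [
    (row["title"], row["GitHub"][0], row["arxiv_id"])
    for row in ds
    if row["GitHub"] and row["GitHub"][0]  # skip empty-string placeholders
]
for title, repo, arxiv_id in with_code[:5]:
    print(f"{title}\n  code:  {repo}\n  arXiv: {arxiv_id or 'n/a'}")
```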
null
https://papers.miccai.org/miccai-2024/paper/1496_paper.pdf
@InProceedings{ Gao_DeSAM_MICCAI2024, author = { Gao, Yifan and Xia, Wei and Hu, Dingdu and Wang, Wenkui and Gao, Xin }, title = { { DeSAM: Decoupled Segment Anything Model for Generalizable Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Deep learning-based medical image segmentation models often suffer from domain shift, where the models trained on a source domain do not generalize well to other unseen domains. As a prompt-driven foundation model with powerful generalization capabilities, the Segment Anything Model (SAM) shows potential for improving the cross-domain robustness of medical image segmentation. However, SAM performs significantly worse in automatic segmentation scenarios than when manually prompted, hindering its direct application to domain generalization. Upon further investigation, we discovered that the degradation in performance was related to the coupling effect of inevitable poor prompts and mask generation. To address the coupling effect, we propose the Decoupled SAM (DeSAM). DeSAM modifies SAM’s mask decoder by introducing two new modules: a prompt-relevant IoU module (PRIM) and a prompt-decoupled mask module (PDMM). PRIM predicts the IoU score and generates mask embeddings, while PDMM extracts multi-scale features from the intermediate layers of the image encoder and fuses them with the mask embeddings from PRIM to generate the final segmentation mask. This decoupled design allows DeSAM to leverage the pre-trained weights while minimizing the performance degradation caused by poor prompts. We conducted experiments on publicly available cross-site prostate and cross-modality abdominal image segmentation datasets. The results show that our DeSAM leads to a substantial performance improvement over previous state-of-the-art domain generalization methods. The code is publicly available at https://github.com/yifangao112/DeSAM.
DeSAM: Decoupled Segment Anything Model for Generalizable Medical Image Segmentation
[ "Gao, Yifan", "Xia, Wei", "Hu, Dingdu", "Wang, Wenkui", "Gao, Xin" ]
Conference
2306.00499
[ "https://github.com/yifangao112/DeSAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
701
null
https://papers.miccai.org/miccai-2024/paper/1955_paper.pdf
@InProceedings{ Imr_BrainShift_MICCAI2024, author = { Imre, Baris and Thibeau-Sutre, Elina and Reimer, Jorieke and Kho, Kuan and Wolterink, Jelmer M. }, title = { { Brain-Shift: Unsupervised Pseudo-Healthy Brain Synthesis for Novel Biomarker Extraction in Chronic Subdural Hematoma } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Chronic subdural hematoma (cSDH) is a common neurological condition characterized by the accumulation of blood between the brain and the dura mater. This accumulation of blood can exert pressure on the brain, potentially leading to fatal outcomes. Treatment options for cSDH are limited to invasive surgery or non-invasive management. Traditionally, the midline shift, hand-measured by experts from an ideal sagittal plane, and the hematoma volume have been the primary metrics for quantifying and analyzing cSDH. However, these approaches do not quantify the local 3D brain deformation caused by cSDH. We propose a novel method using anatomy-aware unsupervised diffeomorphic pseudo-healthy synthesis to generate brain deformation fields. The deformation fields derived from this process are utilized to extract biomarkers that quantify the shift in the brain due to cSDH. We use CT scans of 121 patients for training and validation of our method and find that our metrics allow the identification of patients who require surgery. Our results indicate that automatically obtained brain deformation fields might contain prognostic value for personalized cSDH treatment.
Brain-Shift: Unsupervised Pseudo-Healthy Brain Synthesis for Novel Biomarker Extraction in Chronic Subdural Hematoma
[ "Imre, Baris", "Thibeau-Sutre, Elina", "Reimer, Jorieke", "Kho, Kuan", "Wolterink, Jelmer M." ]
Conference
2403.19415
[ "https://github.com/MIAGroupUT/Brain-Shift" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
702
null
https://papers.miccai.org/miccai-2024/paper/2571_paper.pdf
@InProceedings{ Xu_Novelty_MICCAI2024, author = { Xu, Rui and Yu, Dan and Yang, Xuan and Ye, Xinchen and Wang, Zhihui and Wang, Yi and Wang, Hongkai and Li, Haojie and Huang, Dingpin and Xu, Fangyi and Gan, Yi and Tu, Yuan and Hu, Hongjie }, title = { { Novelty Detection Based Discriminative Multiple Instance Feature Mining to Classify NSCLC PD-L1 Status on HE-Stained Histopathological Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
It is crucial to analyze HE-stained histopathological whole slide images (WSIs) to classify PD-L1 status for non-small cell lung cancer (NSCLC) patients, due to the expensive immunohistochemical examination performed in practical clinics. Usually, a multiple instance learning (MIL) framework is applied to resolve the classification problems of WSIs. However, the existing MIL methods cannot perform well on the PD-L1 status classification, due to unlearnable instance features and challenging instances containing weak visual differences. To address this problem, we propose a novelty detection based discriminative multiple instance feature mining method. It contains a trainable instance feature encoder, learning effective information from the on-hand dataset to reduce the domain difference problem, and a novelty detection based instance feature mining mechanism, selecting typical instances to train the encoder for mining more discriminative instance features. We evaluate the proposed method on a private NSCLC PD-L1 dataset and the widely used public Camelyon16 dataset that is targeted for breast cancer identification. Experimental results show that the proposed method is not only effective in predicting NSCLC PD-L1 status but also generalizes well on the public dataset.
Novelty Detection Based Discriminative Multiple Instance Feature Mining to Classify NSCLC PD-L1 Status on HE-Stained Histopathological Images
[ "Xu, Rui", "Yu, Dan", "Yang, Xuan", "Ye, Xinchen", "Wang, Zhihui", "Wang, Yi", "Wang, Hongkai", "Li, Haojie", "Huang, Dingpin", "Xu, Fangyi", "Gan, Yi", "Tu, Yuan", "Hu, Hongjie" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
703
null
https://papers.miccai.org/miccai-2024/paper/1037_paper.pdf
@InProceedings{ Cha_QueryNet_MICCAI2024, author = { Chai, Jiaxing and Luo, Zhiming and Gao, Jianzhe and Dai, Licun and Lai, Yingxin and Li, Shaozi }, title = { { QueryNet: A Unified Framework for Accurate Polyp Segmentation and Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Recently, deep learning-based methods have demonstrated effectiveness in the diagnosis of polyps, which holds clinical significance in the prevention of colorectal cancer. These methods can be broadly categorized into two tasks: Polyp Segmentation (PS) and Polyp Detection (PD). The advantage of PS lies in precise localization, but it is constrained by the contrast of the polyp area. On the other hand, PD provides the advantage of a global perspective but is susceptible to issues such as false positives or missed detections. Despite substantial progress in both tasks, there has been limited exploration of integrating these two tasks. To address this problem, we introduce QueryNet, a unified framework for accurate polyp segmentation and detection. Specifically, our QueryNet is constructed on top of Mask2Former, a query-based segmentation model. It conceptualizes object queries as cluster centers and constructs a detection branch to handle both tasks. Extensive quantitative and qualitative experiments on five public benchmarks verify that this unified framework effectively mitigates the task-specific limitations, thereby enhancing the overall performance. Furthermore, QueryNet achieves comparable performance against state-of-the-art PS and PD methods.
QueryNet: A Unified Framework for Accurate Polyp Segmentation and Detection
[ "Chai, Jiaxing", "Luo, Zhiming", "Gao, Jianzhe", "Dai, Licun", "Lai, Yingxin", "Li, Shaozi" ]
Conference
[ "https://github.com/JiaxingChai/Query_Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
704
null
https://papers.miccai.org/miccai-2024/paper/2427_paper.pdf
@InProceedings{ Oh_Are_MICCAI2024, author = { Oh, Ji-Hun and Falahkheirkhah, Kianoush and Bhargava, Rohit }, title = { { Are We Ready for Out-of-Distribution Detection in Digital Pathology? } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
The detection of semantic and covariate out-of-distribution (OOD) examples is a critical yet overlooked challenge in digital pathology (DP). Recently, substantial insight and methods on OOD detection were presented by the ML community, but how do they fare in DP applications? To this end, we establish a benchmark study, our highlights being: 1) the adoption of proper evaluation protocols, 2) the comparison of diverse detectors in both a single and multi-model setting, and 3) the exploration into advanced ML settings like transfer learning (ImageNet vs. DP pre-training) and choice of architecture (CNNs vs. transformers). Through our comprehensive experiments, we contribute new insights and guidelines, paving the way for future research and discussion.
Are We Ready for Out-of-Distribution Detection in Digital Pathology?
[ "Oh, Ji-Hun", "Falahkheirkhah, Kianoush", "Bhargava, Rohit" ]
Conference
2407.13708
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
705
null
https://papers.miccai.org/miccai-2024/paper/0912_paper.pdf
@InProceedings{ Zho_Reprogramming_MICCAI2024, author = { Zhou, Yuhang and Du, Siyuan and Li, Haolin and Yao, Jiangchao and Zhang, Ya and Wang, Yanfeng }, title = { { Reprogramming Distillation for Medical Foundation Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Medical foundation models pre-trained on large-scale datasets have demonstrated powerful versatile capabilities for various tasks. However, due to the gap between pre-training tasks (or modalities) and downstream tasks (or modalities), as well as real-world computation and speed constraints, it might not be straightforward to apply medical foundation models in downstream scenarios. Previous methods, such as parameter efficient fine-tuning (PEFT) methods and knowledge distillation (KD) methods, are unable to simultaneously address the task (or modality) inconsistency and achieve personalized lightweight deployment under diverse real-world demands. To address the above issues, we propose a novel framework called Reprogramming Distillation (RD). On one hand, RD reprograms the original feature space of the foundation model so that it is more relevant to downstream scenarios, aligning tasks and modalities. On the other hand, through a co-training mechanism and a shared classifier, connections are established between the reprogrammed knowledge and the knowledge of student models, ensuring that the reprogrammed feature space can be smoothly mimicked by student models of different structures. Further, to reduce the randomness under different training conditions, we design a Centered Kernel Alignment (CKA) distillation to promote robust knowledge transfer. Empirically, we show that on extensive datasets, RD consistently achieves superior performance compared with previous PEFT and KD methods. Source code is available at: https://github.com/MediaBrain-SJTU/RD
Reprogramming Distillation for Medical Foundation Models
[ "Zhou, Yuhang", "Du, Siyuan", "Li, Haolin", "Yao, Jiangchao", "Zhang, Ya", "Wang, Yanfeng" ]
Conference
2407.06504
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
706
null
https://papers.miccai.org/miccai-2024/paper/1068_paper.pdf
@InProceedings{ Yan_Spatiotemporal_MICCAI2024, author = { Yan, Ruodan and Schönlieb, Carola-Bibiane and Li, Chao }, title = { { Spatiotemporal Graph Neural Network Modelling Perfusion MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Perfusion MRI (pMRI) offers valuable insights into tumor vascularity and promises to predict tumor genotypes, thus benefiting prognosis for glioma patients, yet effective models tailored to 4D pMRI are still lacking. This study presents the first attempt to model 4D pMRI using a GNN-based spatiotemporal model (PerfGAT), integrating spatial information and temporal kinetics to predict Isocitrate DeHydrogenase (IDH) mutation status in glioma patients. Specifically, we propose a graph structure learning approach based on edge attention and negative graphs to optimize temporal correlation modeling. Moreover, we design a dual-attention feature fusion module to integrate spatiotemporal features while addressing tumor-related brain regions. Further, we develop a class-balanced augmentation method tailored to spatiotemporal data, which could mitigate the common label imbalance issue in clinical datasets. Our experimental results demonstrate that the proposed method outperforms other state-of-the-art approaches, promising to model pMRI effectively for patient characterization.
Spatiotemporal Graph Neural Network Modelling Perfusion MRI
[ "Yan, Ruodan", "Schönlieb, Carola-Bibiane", "Li, Chao" ]
Conference
2406.06434
[ "https://github.com/DaisyYan2000/PerfGAT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
707
null
https://papers.miccai.org/miccai-2024/paper/2847_paper.pdf
@InProceedings{ Ise_nnUNet_MICCAI2024, author = { Isensee, Fabian and Wald, Tassilo and Ulrich, Constantin and Baumgartner, Michael and Roy, Saikat and Maier-Hein, Klaus and Jäger, Paul F. }, title = { { nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
The release of nnU-Net marked a paradigm shift in 3D medical image segmentation, demonstrating that a properly configured U-Net architecture could still achieve state-of-the-art results. Despite this, the pursuit of novel architectures, and the respective claims of superior performance over the U-Net baseline, continued. In this study, we demonstrate that many of these recent claims fail to hold up when scrutinized for common validation shortcomings, such as the use of inadequate baselines, insufficient datasets, and neglected computational resources. By meticulously avoiding these pitfalls, we conduct a thorough and comprehensive benchmarking of current segmentation methods including CNN-based, Transformer-based, and Mamba-based approaches. In contrast to current beliefs, we find that the recipe for state-of-the-art performance is 1) employing CNN-based U-Net models, including ResNet and ConvNeXt variants, 2) using the nnU-Net framework, and 3) scaling models to modern hardware resources. These results indicate an ongoing innovation bias towards novel architectures in the field and underscore the need for more stringent validation standards in the quest for scientific progress.
nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation
[ "Isensee, Fabian", "Wald, Tassilo", "Ulrich, Constantin", "Baumgartner, Michael", "Roy, Saikat", "Maier-Hein, Klaus", "Jäger, Paul F." ]
Conference
2404.09556
[ "https://github.com/MIC-DKFZ/nnUNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
708
null
https://papers.miccai.org/miccai-2024/paper/3756_paper.pdf
@InProceedings{ Pla_ARegionBased_MICCAI2024, author = { Playout, Clément and Legault, Zacharie and Duval, Renaud and Boucher, Marie Carole and Cheriet, Farida }, title = { { A Region-Based Approach to Diabetic Retinopathy Classification with Superpixel Tokenization } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
We explore the efficacy of a region-based method for image tokenization, aimed at enhancing the resolution of images fed to a Transformer. This method involves segmenting the image into regions using SLIC superpixels. Spatial features, derived from a pretrained model, are aggregated segment-wise and input into a streamlined Vision Transformer (ViT). Our model introduces two novel contributions: the matching of segments to semantic prototypes and the graph-based clustering of tokens to merge similar adjacent segments. This approach leads to a model that not only competes effectively in classifying diabetic retinopathy but also produces high-resolution attribution maps, thereby enhancing the interpretability of its predictions.
A Region-Based Approach to Diabetic Retinopathy Classification with Superpixel Tokenization
[ "Playout, Clément", "Legault, Zacharie", "Duval, Renaud", "Boucher, Marie Carole", "Cheriet, Farida" ]
Conference
[ "https://github.com/ClementPla/RetinalViT/tree/prototype_superpixels" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
709
null
https://papers.miccai.org/miccai-2024/paper/2494_paper.pdf
@InProceedings{ Jia_MGDR_MICCAI2024, author = { Jiang, Bo and Li, Yapeng and Wan, Xixi and Chen, Yuan and Tu, Zhengzheng and Zhao, Yumiao and Tang, Jin }, title = { { MGDR: Multi-Modal Graph Disentangled Representation for Brain Disease Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
In the task of disease prediction, medical data with different modalities can provide much complementary information for disease diagnosis. However, existing multi-modal learning methods often tend to focus on learning a shared representation across modalities for disease diagnosis, without fully exploiting the complementary information from multiple modalities. To overcome this limitation, in this paper, we propose a novel Multi-modal Graph Disentangled Representation (MGDR) approach for the brain disease prediction problem. Specifically, we first construct a specific modality graph for each modality's data and employ a Graph Convolutional Network (GCN) to learn node representations. Then, we learn the common information across different modalities and the private information of each modality by developing a model that disentangles modality representations. Moreover, to remove possible noise from the private information, we employ a contrastive learning module to learn a more compact representation of the private information for each modality. Also, a new Multi-modal Perception Attention (MPA) module is employed to integrate the feature representations of the private information from multiple modalities. Finally, we integrate both common and private information together for disease prediction. Experiments on both ABIDE and TADPOLE datasets demonstrate that our MGDR method achieves the best performance when compared with some recent advanced methods.
MGDR: Multi-Modal Graph Disentangled Representation for Brain Disease Prediction
[ "Jiang, Bo", "Li, Yapeng", "Wan, Xixi", "Chen, Yuan", "Tu, Zhengzheng", "Zhao, Yumiao", "Tang, Jin" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
710
null
https://papers.miccai.org/miccai-2024/paper/1757_paper.pdf
@InProceedings{ Wan_Weakly_MICCAI2024, author = { Wang, Haoyu and Li, Kehan and Zhu, Jihua and Wang, Fan and Lian, Chunfeng and Ma, Jianhua }, title = { { Weakly Supervised Tooth Instance Segmentation on 3D Dental Models with Multi-Label Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Automatic tooth segmentation on 3D dental models is a fundamental task for computer-aided orthodontic treatment. Many deep learning methods aimed at precise tooth segmentation currently require meticulous point-wise annotations, which are extremely time-consuming and labor-intensive. To address this issue, we propose a weakly supervised tooth instance segmentation network (WS-TIS), which only requires coarse class labels along with approximately 50% of point-wise tooth annotations. Our WS-TIS consists of two stages, including tooth discriminative localization and tooth instance segmentation. Precise tooth localization is frequently pivotal in instance segmentation. However, annotation of tooth centroids or bounding boxes is often challenging when we have limited point-wise tooth annotations. Therefore, we designed a proxy task to weakly supervise tooth localization. Specifically, we utilize a fine-grained multi-label classification task, equipped with a disentangled re-sampling strategy and a gated attention mechanism that assist the network in learning discriminative tooth features. With discriminative features, certain feature visualization techniques can be easily employed to locate these discriminative regions, thereby accurately cropping out the teeth. In the second stage, a segmentation module was trained on limited annotated data (approximately 50% of all teeth) to accurately segment each tooth from the cropped regions. Experiments on Teeth3DS demonstrate that our method, with weakly supervised learning and weak annotations, achieves superior performance, comparable to state-of-the-art approaches with full annotations.
Weakly Supervised Tooth Instance Segmentation on 3D Dental Models with Multi-Label Learning
[ "Wang, Haoyu", "Li, Kehan", "Zhu, Jihua", "Wang, Fan", "Lian, Chunfeng", "Ma, Jianhua" ]
Conference
[ "https://github.com/ladderlab-xjtu/WS-TIS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
711
null
https://papers.miccai.org/miccai-2024/paper/2008_paper.pdf
@InProceedings{ Fis_Progressive_MICCAI2024, author = { Fischer, Stefan M. and Felsner, Lina and Osuala, Richard and Kiechle, Johannes and Lang, Daniel M. and Peeken, Jan C. and Schnabel, Julia A. }, title = { { Progressive Growing of Patch Size: Resource-Efficient Curriculum Learning for Dense Prediction Tasks } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
In this work, we introduce Progressive Growing of Patch Size, a resource-efficient implicit curriculum learning approach for dense prediction tasks. Our curriculum approach is defined by growing the patch size during model training, which gradually increases the task’s difficulty. We integrated our curriculum into the nnU-Net framework and evaluated the methodology on all 10 tasks of the Medical Segmentation Decathlon. With our approach, we are able to substantially reduce runtime, computational costs, and CO$_{2}$ emissions of network training compared to classical constant patch size training. In our experiments, the curriculum approach resulted in improved convergence. We are able to outperform standard nnU-Net training, which is trained with constant patch size, in terms of Dice Score on 7 out of 10 MSD tasks while only spending roughly 50\% of the original training runtime. To the best of our knowledge, our Progressive Growing of Patch Size is the first successful employment of a sample-length curriculum in the form of patch size in the field of computer vision. Our code is publicly available at \url{https://github.com}.
Progressive Growing of Patch Size: Resource-Efficient Curriculum Learning for Dense Prediction Tasks
[ "Fischer, Stefan M.", "Felsner, Lina", "Osuala, Richard", "Kiechle, Johannes", "Lang, Daniel M.", "Peeken, Jan C.", "Schnabel, Julia A." ]
Conference
2407.07853
[ "https://github.com/compai-lab/2024-miccai-fischer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
712
null
https://papers.miccai.org/miccai-2024/paper/2185_paper.pdf
@InProceedings{ Ham_CT2Rep_MICCAI2024, author = { Hamamci, Ibrahim Ethem and Er, Sezgin and Menze, Bjoern }, title = { { CT2Rep: Automated Radiology Report Generation for 3D Medical Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Medical imaging plays a crucial role in diagnosis, with radiology reports serving as vital documentation. Automating report generation has emerged as a critical need to alleviate the workload of radiologists. While machine learning has facilitated report generation for 2D medical imaging, extending this to 3D has been unexplored due to computational complexity and data scarcity. We introduce the first method to generate radiology reports for 3D medical imaging, specifically targeting chest CT volumes. Given the absence of comparable methods, we establish a baseline using an advanced 3D vision encoder in medical imaging to demonstrate our method’s effectiveness, which leverages a novel auto-regressive causal transformer. Furthermore, recognizing the benefits of leveraging information from previous visits, we augment CT2Rep with a cross-attention-based multi-modal fusion module and hierarchical memory, enabling the incorporation of longitudinal multimodal data.
CT2Rep: Automated Radiology Report Generation for 3D Medical Imaging
[ "Hamamci, Ibrahim Ethem", "Er, Sezgin", "Menze, Bjoern" ]
Conference
2403.06801
[ "https://github.com/ibrahimethemhamamci/CT2Rep" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
713
null
https://papers.miccai.org/miccai-2024/paper/3391_paper.pdf
@InProceedings{ Liu_DiffRect_MICCAI2024, author = { Liu, Xinyu and Li, Wuyang and Yuan, Yixuan }, title = { { DiffRect: Latent Diffusion Label Rectification for Semi-supervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Semi-supervised medical image segmentation aims to leverage limited annotated data and rich unlabeled data to perform accurate segmentation. However, existing semi-supervised methods are highly dependent on the quality of self-generated pseudo labels, which are prone to incorrect supervision and confirmation bias. Meanwhile, they are insufficient in capturing the label distributions in latent space and suffer from limited generalization to unlabeled data. To address these issues, we propose a Latent Diffusion Label Rectification Model (DiffRect) for semi-supervised medical image segmentation. DiffRect first utilizes a Label Context Calibration Module (LCC) to calibrate the biased relationship between classes by learning the category-wise correlation in pseudo labels, then applies a Latent Feature Rectification Module (LFR) in the latent space to formulate and align the pseudo label distributions of different levels via latent diffusion. It utilizes a denoising network to learn the coarse-to-fine and fine-to-precise consecutive distribution transportations. We evaluate DiffRect on three public datasets: ACDC, MS-CMRSEG 2019, and Decathlon Prostate. Experimental results demonstrate the effectiveness of DiffRect, e.g., it achieves an 82.40\% Dice score on ACDC with only 1\% of labeled scans available, outperforms the previous state-of-the-art by 4.60\% in Dice, and even rivals fully supervised performance. Code will be made publicly available.
DiffRect: Latent Diffusion Label Rectification for Semi-supervised Medical Image Segmentation
[ "Liu, Xinyu", "Li, Wuyang", "Yuan, Yixuan" ]
Conference
2407.09918
[ "https://github.com/CUHK-AIM-Group/DiffRect" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
714
null
https://papers.miccai.org/miccai-2024/paper/1898_paper.pdf
@InProceedings{ The_TractOracle_MICCAI2024, author = { Théberge, Antoine and Descoteaux, Maxime and Jodoin, Pierre-Marc }, title = { { TractOracle: towards an anatomically-informed reward function for RL-based tractography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Reinforcement learning (RL)-based tractography is a competitive alternative to machine learning and classical tractography algorithms due to its high anatomical accuracy obtained without the need for any annotated data. However, the reward functions so far used to train RL agents do not encapsulate anatomical knowledge, which causes agents to generate spurious false positive tracts. In this paper, we propose a new RL tractography system, TractOracle, which relies on a reward network trained for streamline classification. This network is used both as a reward function during training and as a means of stopping the tracking process early, thus reducing the number of false positive streamlines. This makes our system a unique method that evaluates and reconstructs WM streamlines at the same time. We report ratios of true and false positives improved by almost 20\% on one dataset and a 2x improvement in the number of true positives on another dataset, by far the best results ever reported in tractography.
TractOracle: towards an anatomically-informed reward function for RL-based tractography
[ "Théberge, Antoine", "Descoteaux, Maxime", "Jodoin, Pierre-Marc" ]
Conference
2403.17845
[ "https://github.com/scil-vital/TrackToLearn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
715
null
https://papers.miccai.org/miccai-2024/paper/3369_paper.pdf
@InProceedings{ Kar_Longitudinal_MICCAI2024, author = { Karaman, Batuhan K. and Dodelzon, Katerina and Akar, Gozde B. and Sabuncu, Mert R. }, title = { { Longitudinal Mammogram Risk Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Breast cancer is one of the leading causes of mortality among women worldwide. Early detection and risk assessment play a crucial role in improving survival rates. Therefore, annual or biennial mammograms are often recommended for screening in high-risk groups. Mammograms are typically interpreted by expert radiologists based on the Breast Imaging Reporting and Data System (BI-RADS), which provides a uniform way to describe findings and categorizes them to indicate the level of concern for breast cancer. Recently, machine learning (ML) and computational approaches have been developed to automate and improve the interpretation of mammograms. However, both BI-RADS and the ML-based methods focus on the analysis of data from the present and sometimes the most recent prior visit. While it has been shown that temporal changes in image features of longitudinal scans are valuable for quantifying breast cancer risk, no prior work has systematically studied this. In this paper, we extend a state-of-the-art ML model to ingest an arbitrary number of longitudinal mammograms and predict future breast cancer risk. On a large scale dataset, we demonstrate that our model, LoMaR, achieves state-of-the-art performance when presented with only the present mammogram. Furthermore, we use LoMaR to characterize the predictive value of prior visits. Our results show that longer histories (e.g., up to four prior annual mammograms) can significantly boost the accuracy of predicting future breast cancer risk, particularly beyond the short-term. Our code and model weights are available at https://github.com/batuhankmkaraman/LoMaR.
Longitudinal Mammogram Risk Prediction
[ "Karaman, Batuhan K.", "Dodelzon, Katerina", "Akar, Gozde B.", "Sabuncu, Mert R." ]
Conference
2404.19083
[ "https://github.com/batuhankmkaraman/LoMaR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
716
null
https://papers.miccai.org/miccai-2024/paper/2216_paper.pdf
@InProceedings{ Gaz_AcneAI_MICCAI2024, author = { Gazeau, Léa and Nguyen, Hang and Nguyen, Zung and Lebedeva, Mariia and Nguyen, Thanh and To, Tat-Dat and Le Digabel, Jimmy and Filiol, Jérome and Josse, Gwendal and Perlis, Clifford and Wolfe, Jonathan }, title = { { AcneAI: A new acne severity assessment method using digital images and deep learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
In this paper, we present a new AcneAI system that automatically analyses facial acne images in a precise way, detecting and scoring every single acne lesion within an image. Its workflow consists of three main steps: 1) segmentation of all acne and acne-like lesions, 2) scoring of each acne lesion, and 3) combining individual acne lesion scores into an overall acne severity score for the whole image, which ranges from 0 to 100. Our clinical tests on the Acne04 dataset show that AcneAI has an Intraclass Correlation Coefficient (ICC) score of 0.8 in severity classification. We obtained an area under the curve (AUC) of 0.88 in detecting inflammatory lesions in a clinical dataset obtained from a multi-centric clinical trial.
AcneAI: A new acne severity assessment method using digital images and deep learning
[ "Gazeau, Léa", "Nguyen, Hang", "Nguyen, Zung", "Lebedeva, Mariia", "Nguyen, Thanh", "To, Tat-Dat", "Le Digabel, Jimmy", "Filiol, Jérome", "Josse, Gwendal", "Perlis, Clifford", "Wolfe, Jonathan" ]
Conference
[ "https://github.com/AIpourlapeau/acne04v2" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
717
null
https://papers.miccai.org/miccai-2024/paper/2283_paper.pdf
@InProceedings{ Wan_Enhancing_MICCAI2024, author = { Wang, Diwei and Yuan, Kun and Muller, Candice and Blanc, Frédéric and Padoy, Nicolas and Seo, Hyewon }, title = { { Enhancing Gait Video Analysis in Neurodegenerative Diseases by Knowledge Augmentation in Vision Language Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
We present a knowledge augmentation strategy for assessing the diagnostic groups and gait impairment from monocular gait videos. Based on a large-scale pre-trained Vision Language Model (VLM), our model learns and improves visual, textual, and numerical representations of patient gait videos, through a collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters. Our specific contributions are two-fold: First, we adopt a knowledge-aware prompt tuning strategy to utilize the class-specific medical description in guiding the text prompt learning. Second, we integrate the paired gait parameters in the form of numerical texts to enhance the numeracy of the textual representation. Results demonstrate that our model not only significantly outperforms state-of-the-art methods in video-based classification tasks but also adeptly decodes the learned class-specific text features into natural language descriptions using the vocabulary of quantitative gait parameters. The code and the model will be made available at our project page: https://lisqzqng.github.io/GaitAnalysisVLM/.
Enhancing Gait Video Analysis in Neurodegenerative Diseases by Knowledge Augmentation in Vision Language Model
[ "Wang, Diwei", "Yuan, Kun", "Muller, Candice", "Blanc, Frédéric", "Padoy, Nicolas", "Seo, Hyewon" ]
Conference
2403.13756
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
718
null
https://papers.miccai.org/miccai-2024/paper/2611_paper.pdf
@InProceedings{ Yan_Finegrained_MICCAI2024, author = { Yan, Zhongnuo and Yang, Xin and Luo, Mingyuan and Chen, Jiongquan and Chen, Rusi and Liu, Lian and Ni, Dong }, title = { { Fine-grained Context and Multi-modal Alignment for Freehand 3D Ultrasound Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Fine-grained spatio-temporal learning is crucial for freehand 3D ultrasound reconstruction. Previous works mainly resorted to coarse-grained spatial features and separate temporal dependency learning, and struggle with fine-grained spatio-temporal learning. Mining spatio-temporal information at fine-grained scales is extremely challenging due to the difficulty of learning long-range dependencies. In this context, we propose a novel method to exploit the long-range dependency management capabilities of the state space model (SSM) to address the above challenge. Our contribution is three-fold. First, we propose ReMamba, which mines multi-scale spatio-temporal information by devising a multi-directional SSM. Second, we propose an adaptive fusion strategy that introduces multiple inertial measurement units as auxiliary temporal information to enhance spatio-temporal perception. Last, we design an online alignment strategy that encodes the temporal information as pseudo labels for multi-modal alignment to further improve reconstruction performance. Extensive experimental validations on two large-scale datasets show remarkable improvement from our method over competitors.
Fine-grained Context and Multi-modal Alignment for Freehand 3D Ultrasound Reconstruction
[ "Yan, Zhongnuo", "Yang, Xin", "Luo, Mingyuan", "Chen, Jiongquan", "Chen, Rusi", "Liu, Lian", "Ni, Dong" ]
Conference
2407.04242
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
719
null
https://papers.miccai.org/miccai-2024/paper/0974_paper.pdf
@InProceedings{ Kon_GazeDETR_MICCAI2024, author = { Kong, Yan and Wang, Sheng and Cai, Jiangdong and Zhao, Zihao and Shen, Zhenrong and Li, Yonghao and Fei, Manman and Wang, Qian }, title = { { Gaze-DETR: Using Expert Gaze to Reduce False Positives in Vulvovaginal Candidiasis Screening } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Accurate detection of vulvovaginal candidiasis is critical for women’s health, yet its sparse distribution and visually ambiguous characteristics pose significant challenges for accurate identification by pathologists and neural networks alike. Our eye-tracking data reveals that areas garnering sustained attention - yet not marked by experts after deliberation - are often aligned with false positives of neural networks. Leveraging this finding, we introduce Gaze-DETR, a pioneering method that integrates gaze data to enhance neural network precision by diminishing false positives. Gaze-DETR incorporates a universal gaze-guided warm-up protocol applicable across various detection methods and a gaze-guided rectification strategy specifically designed for DETR-based models. Our comprehensive tests confirm that Gaze-DETR surpasses existing leading methods, showcasing remarkable improvements in detection accuracy and generalizability. Our code is available at https://github.com/YanKong0408/Gaze-DETR.
Gaze-DETR: Using Expert Gaze to Reduce False Positives in Vulvovaginal Candidiasis Screening
[ "Kong, Yan", "Wang, Sheng", "Cai, Jiangdong", "Zhao, Zihao", "Shen, Zhenrong", "Li, Yonghao", "Fei, Manman", "Wang, Qian" ]
Conference
2405.09463
[ "https://github.com/YanKong0408/Gaze-DETR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
720
null
https://papers.miccai.org/miccai-2024/paper/2519_paper.pdf
@InProceedings{ Hua_IOSSAM_MICCAI2024, author = { Huang, Xinrui and He, Dongming and Li, Zhenming and Zhang, Xiaofan and Wang, Xudong }, title = { { IOSSAM: Label Efficient Multi-View Prompt-Driven Tooth Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Segmenting and labeling teeth from 3D Intraoral Scans (IOS) plays a significant role in digital dentistry. Dedicated learning-based methods have shown impressive results, but they suffer from expensive point-wise annotations. We aim at IOS segmentation with only low-cost 2D bounding-box annotations in the occlusal view. To accomplish this objective, we propose a SAM-based multi-view prompt-driven IOS segmentation method (IOSSAM) which learns prompts to utilize the pre-trained shape knowledge embedded in the visual foundation model SAM. Specifically, our method introduces an occlusal prompter trained on a dataset with weak annotations to generate category-related prompts for the occlusal view segmentation. We further develop a dental crown prompter to produce reasonable prompts for the dental crown view segmentation by considering the crown length prior and the generated occlusal view segmentation. We carefully design a novel view-aware label diffusion strategy to lift 2D segmentation to the 3D field. We validate our method on a real IOS dataset, and the results show that our method outperforms recent weakly-supervised methods and is even comparable with fully-supervised methods.
IOSSAM: Label Efficient Multi-View Prompt-Driven Tooth Segmentation
[ "Huang, Xinrui", "He, Dongming", "Li, Zhenming", "Zhang, Xiaofan", "Wang, Xudong" ]
Conference
[ "https://github.com/ar-inspire/IOSSAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
721
null
https://papers.miccai.org/miccai-2024/paper/0458_paper.pdf
@InProceedings{ Shi_Centerline_MICCAI2024, author = { Shi, Pengcheng and Hu, Jiesi and Yang, Yanwu and Gao, Zilve and Liu, Wei and Ma, Ting }, title = { { Centerline Boundary Dice Loss for Vascular Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Vascular segmentation in medical imaging plays a crucial role in analysing morphological and functional assessments. Traditional methods, like the centerline Dice (clDice) loss, ensure topology preservation but falter in capturing geometric details, especially under translation and deformation. The combination of clDice with traditional Dice loss can lead to diameter imbalance, favoring larger vessels. Addressing these challenges, we introduce the centerline boundary Dice (cbDice) loss function, which harmonizes topological integrity and geometric nuances, ensuring consistent segmentation across various vessel sizes. cbDice enriches the clDice approach by including boundary-aware aspects, thereby improving geometric detail recognition. It matches the performance of the boundary difference over union (B-DoU) loss through a mask-distance-based approach, enhancing translation sensitivity. Crucially, cbDice incorporates radius information from vascular skeletons, enabling uniform adaptation to vascular diameter changes and maintaining balance in branch growth and fracture impacts. Furthermore, we conducted a theoretical analysis of clDice variants (cl-X-Dice). We validated cbDice’s efficacy on three diverse vascular segmentation datasets, encompassing both 2D and 3D, and binary and multi-class segmentation. Particularly, the method integrated with cbDice demonstrated outstanding performance on the MICCAI 2023 TopCoW Challenge dataset. Our code is made publicly available at: https://github.com/PengchengShi1220/cbDice.
Centerline Boundary Dice Loss for Vascular Segmentation
[ "Shi, Pengcheng", "Hu, Jiesi", "Yang, Yanwu", "Gao, Zilve", "Liu, Wei", "Ma, Ting" ]
Conference
2407.01517
[ "https://github.com/PengchengShi1220/cbDice" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
722
null
https://papers.miccai.org/miccai-2024/paper/1481_paper.pdf
@InProceedings{ Caf_Two_MICCAI2024, author = { Cafaro, Alexandre and Dorent, Reuben and Haouchine, Nazim and Lepetit, Vincent and Paragios, Nikos and Wells III, William M. and Frisken, Sarah }, title = { { Two Projections Suffice for Cerebral Vascular Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
3D reconstruction of cerebral vasculature from 2D biplanar projections could significantly improve diagnosis and treatment planning. We introduce a novel approach to tackle this challenging task by initially backprojecting the two projections, a process that traditionally results in unsatisfactory outcomes due to inherent ambiguities. To overcome this, we employ a U-Net approach trained to resolve these ambiguities, leading to significant improvement in reconstruction quality. The process is further refined using a Maximum A Posteriori strategy with a prior that favors continuity, leading to enhanced 3D reconstructions. We evaluated our approach using a comprehensive dataset comprising segmentations from approximately 700 MR angiography scans, from which we generated paired realistic biplanar DRRs. Upon testing with held-out data, our method achieved an 80% Dice similarity w.r.t the ground truth, superior to existing methods. Our code and dataset are available at https://github.com/Wapity/3DBrainXVascular.
Two Projections Suffice for Cerebral Vascular Reconstruction
[ "Cafaro, Alexandre", "Dorent, Reuben", "Haouchine, Nazim", "Lepetit, Vincent", "Paragios, Nikos", "Wells III, William M.", "Frisken, Sarah" ]
Conference
[ "https://github.com/Wapity/3DBrainXVascular" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
723
null
https://papers.miccai.org/miccai-2024/paper/3196_paper.pdf
@InProceedings{ Bae_Conditional_MICCAI2024, author = { Bae, Juyoung and Tong, Elizabeth and Chen, Hao }, title = { { Conditional Diffusion Model for Versatile Temporal Inpainting in 4D Cerebral CT Perfusion Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Cerebral CT Perfusion (CTP) sequence imaging is a widely used modality for stroke assessment. While high temporal resolution of CT scans is crucial for accurate diagnosis, it correlates to increased radiation exposure. A promising solution is to generate synthetic CT scans to artificially enhance the temporal resolution of the sequence. We present a versatile CTP sequence inpainting model based on a conditional diffusion model, which can inpaint temporal gaps with synthetic scan to a fine 1-second interval, agnostic to both the duration of the gap and the sequence length. We achieve this by incorporating a carefully engineered conditioning scheme that exploits the intrinsic patterns of time-concentration dynamics. Our approach is much more flexible and clinically relevant compared to existing interpolation methods that either (1) lack such perfusion-specific guidances or (2) require all the known scans in the sequence, thereby imposing constraints on the length and acquisition interval. Such flexibility allows our model to be effectively applied to other tasks, such as repairing sequences with significant motion artifacts. Our model can generate accurate and realistic CT scans to inpaint gaps as wide as 8 seconds while achieving both perceptual quality and diagnostic information comparable to the ground-truth 1-second resolution sequence. Extensive experiments demonstrate the superiority of our model over prior arts in numerous metrics and clinical applicability. Our code is available at https://github.com/baejustin/CTP_Inpainting_Diffusion.
Conditional Diffusion Model for Versatile Temporal Inpainting in 4D Cerebral CT Perfusion Imaging
[ "Bae, Juyoung", "Tong, Elizabeth", "Chen, Hao" ]
Conference
[ "https://github.com/baejustin/CTP_Inpainting_Diffusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
724
null
https://papers.miccai.org/miccai-2024/paper/3533_paper.pdf
@InProceedings{ Swa_SAM_MICCAI2024, author = { Swain, Bishal R. and Cheoi, Kyung J. and Ko, Jaepil }, title = { { SAM Guided Task-Specific Enhanced Nuclei Segmentation in Digital Pathology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Cell nuclei segmentation is crucial in digital pathology for various diagnoses and treatments; it is prominently performed using semantic segmentation methods that focus on scalable receptive fields and multi-scale information. In such segmentation tasks, U-Net based task-specific encoders excel in capturing fine-grained information but fall short in integrating high-level global context. Conversely, foundation models inherently grasp coarse-level features but are not as proficient as task-specific models at providing fine-grained details. To this end, we propose utilizing the foundation model to guide the task-specific supervised learning by dynamically combining their global and local latent representations, via our proposed X-Gated Fusion Block, which uses a gated squeeze-and-excitation block followed by cross-attention to dynamically fuse latent representations. Through our experiments across datasets and visualization analysis, we demonstrate that the integration of task-specific knowledge with general insights from foundational models can drastically increase performance, even outperforming domain-specific semantic segmentation models to achieve state-of-the-art results by increasing the Dice score and mIoU by approximately 12% and 17.22% on CryoNuSeg, 15.55% and 16.77% on NuInsSeg, and 9% on both metrics for the CoNIC dataset. Our code will be released at https://cvpr-kit.github.io/SAM-Guided-Enhanced-Nuclei-Segmentation/.
SAM Guided Task-Specific Enhanced Nuclei Segmentation in Digital Pathology
[ "Swain, Bishal R.", "Cheoi, Kyung J.", "Ko, Jaepil" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
725
null
https://papers.miccai.org/miccai-2024/paper/1255_paper.pdf
@InProceedings{ Li_Multicategory_MICCAI2024, author = { Li, Dongzhe and Yang, Baoyao and Zhan, Weide and He, Xiaochen }, title = { { Multi-category Graph Reasoning for Multi-modal Brain Tumor Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Many multi-modal tumor segmentation methods have been proposed to localize diseased areas from the brain images, facilitating the intelligence of diagnosis. However, existing studies commonly ignore the relationship between multiple categories in brain tumor segmentation, leading to irrational tumor area distribution in the predictive results. To address this issue, this work proposes a Multi-category Region-guided Graph Reasoning Network, which models the dependency between multiple categories using a Multi-category Interaction Module (TMIM), thus enabling more accurate subregion localization of brain tumors. To improve the recognition of tumors’ blurred boundaries, a Region-guided Reasoning Module is also incorporated into the network, which captures semantic relationships between regions and contours via graph reasoning. In addition, we introduce a shared cross-attention encoder in the feature extraction stage to facilitate the comprehensive utilization of multi-modal information. Experimental results on the BraTS2019 and BraTS2020 datasets demonstrate that our method outperforms the current state-of-the-art methods.
Multi-category Graph Reasoning for Multi-modal Brain Tumor Segmentation
[ "Li, Dongzhe", "Yang, Baoyao", "Zhan, Weide", "He, Xiaochen" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
726
null
https://papers.miccai.org/miccai-2024/paper/0920_paper.pdf
@InProceedings{ Tah_Enhancing_MICCAI2024, author = { Tahghighi, Peyman and Zhang, Yunyan and Souza, Roberto and Komeili, Amin }, title = { { Enhancing New Multiple Sclerosis Lesion Segmentation via Self-supervised Pre-training and Synthetic Lesion Integration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Multiple Sclerosis (MS) is a chronic and severe inflammatory disease of the central nervous system. In MS, the myelin sheath covering nerve fibres is attacked by the self-immune system, leading to communication issues between the brain and the rest of the body. Image-based biomarkers, such as lesions seen with Magnetic Resonance Imaging (MRI), are essential in MS diagnosis and monitoring. Further, detecting newly formed lesions provides crucial information for assessing disease progression and treatment outcomes. However, annotating changes between MRI scans is time-consuming and subject to inter-expert variability. Methods proposed for new lesion segmentation have utilized limited data available for training the model, failing to harness the full capacity of the models and resulting in limited generalizability. To enhance the performance of the new MS lesion segmentation model, we propose a self-supervised pre-training scheme based on image masking that is used to initialize the weights of the model, which then is trained for the new lesion segmentation task using a mix of real and synthetic data created by a synthetic lesion data augmentation method that we propose. Experiments on the MSSEG-2 challenge dataset demonstrate that utilizing self-supervised pre-training and adding synthetic lesions during training improves the model’s performance. We achieved a Dice score of 56.15±7.06% and an F1 score of 56.69±9.12%, which is 2.06% points and 3.3% higher, respectively, than the previous best existing method. Code is available at: https://github.com/PeymanTahghighi/SSLMRI.
Enhancing New Multiple Sclerosis Lesion Segmentation via Self-supervised Pre-training and Synthetic Lesion Integration
[ "Tahghighi, Peyman", "Zhang, Yunyan", "Souza, Roberto", "Komeili, Amin" ]
Conference
[ "https://github.com/PeymanTahghighi/SSLMRI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
727
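A minimal, illustrative sketch of the masked-image self-supervised pre-training idea described in the SSLMRI abstract above (record 727): random cubic patches of a volume are hidden and a network is trained to reconstruct the hidden voxels. The patch size, mask ratio, and the tiny stand-in reconstruction network are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def random_patch_mask(shape, patch=8, ratio=0.5, rng=None):
    """Binary mask that hides roughly `ratio` of non-overlapping cubic patches (1 = visible)."""
    rng = rng or np.random.default_rng(0)
    d, h, w = shape
    mask = np.ones(shape, dtype=np.float32)
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                if rng.random() < ratio:
                    mask[z:z+patch, y:y+patch, x:x+patch] = 0.0
    return torch.from_numpy(mask)

# Hypothetical tiny network standing in for the real 3D segmentation backbone.
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

volume = torch.randn(1, 1, 32, 32, 32)        # placeholder MRI volume
mask = random_patch_mask((32, 32, 32))
pred = net(volume * mask)                      # reconstruct from the masked input
# Reconstruction loss is computed only on the hidden voxels.
loss = ((pred - volume) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
loss.backward()
opt.step()
print(float(loss))
```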
null
https://papers.miccai.org/miccai-2024/paper/3074_paper.pdf
@InProceedings{ Zhu_Symptom_MICCAI2024, author = { Zhu, Ye and Xu, Jingwen and Lyu, Fei and Yuen, Pong C. }, title = { { Symptom Disentanglement in Chest X-ray Images for Fine-Grained Progression Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Chest radiography is a commonly used diagnostic imaging exam for monitoring disease severity. Machine learning has made significant strides in static tasks (e.g., segmentation or diagnosis) based on a single medical image. However, disease progression monitoring based on longitudinal images, which provides informative clues for early prognosis and timely intervention, remains fairly underexplored. In practice, the development of an underlying disease is typically accompanied by the occurrence and evolution of multiple specific symptoms. Inspired by this, we propose a multi-stage framework to model the complex progression from a symptom perspective. Specifically, we introduce two consecutive modules, namely the Symptom Disentangler (SD) and the Symptom Progression Learner (SPL), to learn from static diagnosis to dynamic disease development. By explicitly extracting the symptom-specific features from a pair of chest radiographs using a set of learnable symptom-aware embeddings in the SD module, the SPL module can leverage these features for obtaining the symptom progression features, which will be utilized for the final progression prediction. Experimental results on the public dataset Chest ImaGenome show superior performance compared to the current state-of-the-art method.
Symptom Disentanglement in Chest X-ray Images for Fine-Grained Progression Learning
[ "Zhu, Ye", "Xu, Jingwen", "Lyu, Fei", "Yuen, Pong C." ]
Conference
[ "https://github.com/zhuye98/SDPL.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
728
null
https://papers.miccai.org/miccai-2024/paper/3979_paper.pdf
@InProceedings{ Wan_SAMMed3DMoE_MICCAI2024, author = { Wang, Guoan and Ye, Jin and Cheng, Junlong and Li, Tianbin and Chen, Zhaolin and Cai, Jianfei and He, Junjun and Zhuang, Bohan }, title = { { SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Volumetric medical image segmentation is pivotal in enhancing disease diagnosis, treatment planning, and advancing medical research. While existing volumetric foundation models for medical image segmentation, such as SAM-Med3D and SegVol, have shown remarkable performance on general organs and tumors, their ability to segment certain categories in clinical downstream tasks remains limited. Supervised Finetuning (SFT) serves as an effective way to adapt such foundation models for task-specific downstream tasks and achieve remarkable performance in those tasks. However, it would inadvertently degrade the general knowledge previously stored in the original foundation model. In this paper, we propose SAM-Med3D-MoE, a novel framework that seamlessly integrates task-specific finetuned models with the foundational model, creating a unified model at minimal additional training expense for an extra gating network. This gating network, in conjunction with a selection strategy, allows the unified model to achieve performance comparable to that of the original models on their respective tasks — both general and specialized — without updating any of their parameters. Our comprehensive experiments demonstrate the efficacy of SAM-Med3D-MoE, with an average Dice performance increase from 53.2\% to 56.4\% on 15 specific classes. It achieves especially remarkable gains of 29.6\%, 8.5\%, and 11.2\% on the spinal cord, esophagus, and right hip, respectively. Additionally, it achieves 48.9\% Dice on the challenging SPPIN2023 Challenge, significantly surpassing the general expert’s performance of 32.3\%. We anticipate that SAM-Med3D-MoE can serve as a new framework for adapting the foundation model to specific areas in medical image analysis. Codes and datasets will be publicly available.
SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation
[ "Wang, Guoan", "Ye, Jin", "Cheng, Junlong", "Li, Tianbin", "Chen, Zhaolin", "Cai, Jianfei", "He, Junjun", "Zhuang, Bohan" ]
Conference
2407.04938
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
729
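The SAM-Med3D-MoE abstract above (record 729) describes a trainable gating network that routes between a frozen foundation decoder and frozen task-specific experts. Below is a minimal sketch of that routing idea only; the embedding size, toy expert heads, and hard top-1 selection are illustrative assumptions, not the paper's actual gating or selection strategy.

```python
import torch
import torch.nn as nn

class GatedExperts(nn.Module):
    """Route each input to one of several frozen expert heads; only the gate is trainable."""
    def __init__(self, experts, embed_dim=64):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        for p in self.experts.parameters():            # experts and foundation head stay frozen
            p.requires_grad_(False)
        self.gate = nn.Linear(embed_dim, len(experts))

    def forward(self, embedding, features):
        scores = self.gate(embedding.mean(dim=1))       # (B, n_experts)
        choice = scores.softmax(-1).argmax(-1)          # hard top-1 selection per sample
        outs = torch.stack([e(features) for e in self.experts], dim=1)
        return outs[torch.arange(features.size(0)), choice], scores

# Hypothetical heads: index 0 = general decoder, 1.. = fine-tuned experts.
experts = [nn.Conv3d(16, 1, 1) for _ in range(3)]
moe = GatedExperts(experts, embed_dim=64)

feats = torch.randn(2, 16, 8, 8, 8)     # decoder input features (batch of 2)
embed = torch.randn(2, 10, 64)          # image-level tokens fed to the gate
mask, gate_scores = moe(embed, feats)
print(mask.shape, gate_scores.shape)     # torch.Size([2, 1, 8, 8, 8]) torch.Size([2, 3])
```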
null
https://papers.miccai.org/miccai-2024/paper/1208_paper.pdf
@InProceedings{ Zha_Synchronous_MICCAI2024, author = { Zhang, Jianhai and Wan, Tonghua and MacDonald, M. Ethan and Menon, Bijoy K. and Qiu, Wu and Ganesh, Aravind }, title = { { Synchronous Image-Label Diffusion with Anisotropic Noise for Stroke Lesion Segmentation on Non-contrast CT } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Automated segmentation of stroke lesions on non-contrast CT (NCCT) images is essential for efficient diagnosis of stroke patients. Although diffusion probabilistic models have shown promising advancements across various fields, their application to medical imaging exposes limitations due to the use of conventional isotropic Gaussian noise. Isotropic Gaussian noise overlooks the structural information and strong voxel dependencies in medical images. In this paper, a novel framework employing synchronous diffusion processes on image-labels is introduced, combined with a sampling strategy for anisotropic noise, to improve stroke lesion segmentation performance on NCCT. Our method acknowledges the significance of anatomical information during diffusion, contrasting with the traditional diffusion processes that assume isotropic Gaussian noise added to voxels independently. By integrating correlations among image voxels within specific anatomical regions into the denoising process, our approach enhances the robustness of neural networks, resulting in improved accuracy in stroke lesion segmentation. The proposed method has been evaluated on two datasets where experimental results demonstrate the capability of the proposed method to accurately segment ischemic infarcts on NCCT images. Furthermore, comparative analysis against state-of-the-art models, including U-net, transformer, and DPM-based segmentation methods, highlights the advantages of our method in terms of segmentation metrics. The code is publicly available at https://github.com/zhangjianhai/SADPM.
Synchronous Image-Label Diffusion with Anisotropic Noise for Stroke Lesion Segmentation on Non-contrast CT
[ "Zhang, Jianhai", "Wan, Tonghua", "MacDonald, M. Ethan", "Menon, Bijoy K.", "Qiu, Wu", "Ganesh, Aravind" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
730
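The abstract for record 730 above contrasts conventional isotropic Gaussian noise with noise that respects correlations among voxels within anatomical regions. The toy sketch below shows one way to sample such region-correlated noise: each labelled region receives a shared component mixed with an independent per-voxel component. The mixing weight and the use of a label map to define regions are assumptions for illustration, not the paper's sampling strategy.

```python
import numpy as np

def anisotropic_noise(region_labels, shared_weight=0.7, rng=None):
    """Sample noise whose voxels are correlated within each labelled region.

    region_labels: integer array (e.g. an anatomical segmentation); voxels with
    the same label share a common noise component, so the covariance is no
    longer the identity (i.e. the noise is not isotropic).
    """
    rng = rng or np.random.default_rng(0)
    independent = rng.standard_normal(region_labels.shape)
    shared = np.zeros_like(independent)
    for lab in np.unique(region_labels):
        shared[region_labels == lab] = rng.standard_normal()
    # Normalise so each voxel remains marginally ~ N(0, 1).
    w = shared_weight
    return w * shared + np.sqrt(1.0 - w**2) * independent

labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1                       # two toy "anatomical regions"
print(anisotropic_noise(labels).round(2))
```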
null
https://papers.miccai.org/miccai-2024/paper/0078_paper.pdf
@InProceedings{ Fan_DiffExplainer_MICCAI2024, author = { Fang, Yingying and Wu, Shuang and Jin, Zihao and Wang, Shiyi and Xu, Caiwen and Walsh, Simon and Yang, Guang }, title = { { DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
In the field of medical imaging, particularly in tasks related to early disease detection and prognosis, understanding the reasoning behind AI model predictions is imperative for assessing their reliability. Conventional explanation methods encounter challenges in identifying decisive features in medical image classifications, especially when discriminative features are subtle or not immediately evident. To address this limitation, we propose an agent model capable of generating counterfactual images that prompt different decisions when plugged into a black-box model. By employing this agent model, we can uncover influential image patterns that impact the black-box model’s final predictions. Through our methodology, we efficiently identify features that influence the decisions of the deep black-box model. We validated our approach in the rigorous domain of medical prognosis tasks, showcasing its efficacy and potential to enhance the reliability of deep learning models in medical image classification compared to existing interpretation methods. The code is available at: \url{https://github.com/ayanglab/DiffExplainer}.
DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation
[ "Fang, Yingying", "Wu, Shuang", "Jin, Zihao", "Wang, Shiyi", "Xu, Caiwen", "Walsh, Simon", "Yang, Guang" ]
Conference
2406.15182
[ "https://github.com/ayanglab/DiffExplainer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
731
null
https://papers.miccai.org/miccai-2024/paper/0494_paper.pdf
@InProceedings{ Cai_Classaware_MICCAI2024, author = { Cai, Zhuotong and Xin, Jingmin and Zeng, Tianyi and Dong, Siyuan and Zheng, Nanning and Duncan, James S. }, title = { { Class-aware Mutual Mixup with Triple Alignments for Semi-Supervised Cross-domain Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Semi-supervised cross-domain segmentation, also referred to as semi-supervised domain adaptation (SSDA), aims to bridge the domain gap and enhance model performance on the target domain given limited labeled target samples, abundant unlabeled target samples, and a substantial amount of labeled source samples. However, current SSDA approaches still face challenges in attaining consistent alignment across domains and adequately addressing the segmentation performance for the tail class. In this work, we develop class-aware mutual mixup with triple alignments (CMMTA) for semi-supervised cross-domain segmentation. Specifically, we first propose a class-aware mutual mixup strategy to obtain the maximal diversification of data distribution and enable the model to focus on the tail class. Then, we incorporate our class-aware mutual mixup across three distinct pathways to establish a triple consistent alignment. We further introduce cross knowledge distillation (CKD) with two parallel mean-teacher models for intra-domain and inter-domain alignment, respectively. Experimental results on the two public cardiac datasets MM-WHS and MS-CMRSeg demonstrate the superiority of our proposed approach against other state-of-the-art methods under two SSDA settings.
Class-aware Mutual Mixup with Triple Alignments for Semi-Supervised Cross-domain Segmentation
[ "Cai, Zhuotong", "Xin, Jingmin", "Zeng, Tianyi", "Dong, Siyuan", "Zheng, Nanning", "Duncan, James S." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
732
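Record 732 above describes a class-aware mutual mixup intended to keep the tail class visible in mixed samples. A minimal sketch of one class-aware variant is shown below: pixels belonging to a designated tail class are copied through from the first image rather than blended, while the rest is mixed with a Beta-sampled coefficient. The Beta parameters, the hard-label rule, and the copy-through heuristic are illustrative assumptions, not the exact CMMTA strategy.

```python
import numpy as np

def class_aware_mixup(img_a, lab_a, img_b, lab_b, tail_class=2, alpha=0.4, rng=None):
    """Mix two labelled images but preserve tail-class pixels from image A."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    mixed_img = lam * img_a + (1 - lam) * img_b
    mixed_lab = lab_a if lam >= 0.5 else lab_b      # hard label choice, for the sketch only
    keep = lab_a == tail_class                      # protect the tail class
    mixed_img[keep] = img_a[keep]
    mixed_lab = np.where(keep, lab_a, mixed_lab)
    return mixed_img, mixed_lab, lam

img_a, img_b = np.random.rand(8, 8), np.random.rand(8, 8)
lab_a = np.random.randint(0, 3, (8, 8))
lab_b = np.random.randint(0, 3, (8, 8))
mixed_img, mixed_lab, lam = class_aware_mixup(img_a, lab_a, img_b, lab_b)
print(lam, (mixed_lab == 2).sum() >= (lab_a == 2).sum())   # tail pixels are never lost
```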
null
https://papers.miccai.org/miccai-2024/paper/1332_paper.pdf
@InProceedings{ Bud_Transferring_MICCAI2024, author = { Budd, Charlie and Vercauteren, Tom }, title = { { Transferring Relative Monocular Depth to Surgical Vision with Temporal Consistency } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Relative monocular depth, inferring depth correct to a shift and scale from a single image, is an active research topic. Recent deep learning models, trained on large and varied meta-datasets, now provide excellent performance in the domain of natural images. However, few datasets exist which provide ground truth depth for endoscopic images, making training such models from scratch unfeasible. This work investigates the transfer of these models into the surgical domain, and presents an effective and simple way to improve on standard supervision through the use of temporal consistency self-supervision. We show temporal consistency significantly improves supervised training alone when transferring to the low-data regime of endoscopy, and outperforms the prevalent self-supervision technique for this task. In addition we show our method drastically outperforms the state-of-the-art method from within the domain of endoscopy. We also release our code, models, and ensembled meta-dataset, Meta-MED, establishing a strong benchmark for future work.
Transferring Relative Monocular Depth to Surgical Vision with Temporal Consistency
[ "Budd, Charlie", "Vercauteren, Tom" ]
Conference
2403.06683
[ "https://github.com/charliebudd/transferring-relative-monocular-depth-to-surgical-vision" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
733
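Record 733 above uses temporal-consistency self-supervision between neighbouring endoscopic frames when transferring relative depth models. Because relative depth is defined only up to shift and scale, one simple consistency term aligns one frame's prediction to the other by least squares before penalising the difference; the sketch below shows that alignment. It is an assumed form of the loss (and ignores camera motion between frames), not the authors' exact formulation.

```python
import torch

def align_shift_scale(src, ref):
    """Closed-form a, b minimising ||a*src + b - ref||^2, applied to src."""
    s, r = src.flatten(), ref.flatten()
    a = ((s - s.mean()) * (r - r.mean())).sum() / ((s - s.mean()) ** 2).sum().clamp(min=1e-8)
    b = r.mean() - a * s.mean()
    return a * src + b

def temporal_consistency_loss(depth_t, depth_t1):
    """Penalise disagreement between consecutive relative-depth predictions."""
    aligned = align_shift_scale(depth_t, depth_t1)
    return (aligned - depth_t1).abs().mean()

d_t = torch.rand(1, 1, 64, 64)
d_t1 = 2.0 * d_t + 0.5                   # same scene, different shift and scale
print(float(temporal_consistency_loss(d_t, d_t1)))   # ~0 after alignment
```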
null
https://papers.miccai.org/miccai-2024/paper/2689_paper.pdf
@InProceedings{ Cha_ANovel_MICCAI2024, author = { Chai, Shurong and Jain, Rahul K. and Mo, Shaocong and Liu, Jiaqing and Yang, Yulin and Li, Yinhao and Tateyama, Tomoko and Lin, Lanfen and Chen, Yen-Wei }, title = { { A Novel Adaptive Hypergraph Neural Network for Enhancing Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Medical image segmentation is crucial in the field of medical imaging, assisting healthcare professionals in analyzing images and improving diagnostic performance. Recent advancements in Transformer-based networks, which utilize the self-attention mechanism, have proven their effectiveness in various medical problems, including medical imaging. However, the existing self-attention mechanism in Transformers captures only pairwise correlations among image patches, neglecting non-pairwise correlations that are essential for performance enhancement. On the other hand, graph-based networks have recently emerged to capture both pairwise and non-pairwise correlations effectively. Inspired by the recent Hypergraph Neural Network (HGNN), we propose a novel hypergraph-based network for medical image segmentation. Our contribution lies in formulating novel and efficient HGNN methods for constructing hyperedges. To effectively aggregate multiple patches with similar attributes at both feature and local levels, we introduce an improved adaptive technique leveraging the K-Nearest Neighbors (KNN) algorithm to enhance the hypergraph construction process. Additionally, we generalize the concept of Convolutional Neural Networks (CNNs) to hypergraphs. Our method achieves state-of-the-art results on two publicly available segmentation datasets, and visualization results further validate its effectiveness. Our code is released on GitHub: https://github.com/11yxk/AHGNN.
A Novel Adaptive Hypergraph Neural Network for Enhancing Medical Image Segmentation
[ "Chai, Shurong", "Jain, Rahul K.", "Mo, Shaocong", "Liu, Jiaqing", "Yang, Yulin", "Li, Yinhao", "Tateyama, Tomoko", "Lin, Lanfen", "Chen, Yen-Wei" ]
Conference
[ "https://github.com/11yxk/AHGNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
734
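Record 734 above builds hyperedges with a KNN rule and then applies hypergraph convolution. A compact numpy sketch of KNN hyperedge construction followed by the standard HGNN propagation (Dv^-1/2 H De^-1 H^T Dv^-1/2 X Theta with unit edge weights) is shown below; the feature sizes and k are arbitrary, and the adaptive refinement described in the abstract is not reproduced.

```python
import numpy as np

def knn_hyperedges(feats, k=3):
    """One hyperedge per node: the node plus its k nearest neighbours."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    nn_idx = np.argsort(d2, axis=1)[:, :k + 1]            # includes the node itself
    H = np.zeros((len(feats), len(feats)))                 # vertices x hyperedges
    for e, members in enumerate(nn_idx):
        H[members, e] = 1.0
    return H

def hgnn_layer(X, H, Theta):
    """Standard HGNN propagation with unit hyperedge weights."""
    Dv = np.diag(1.0 / np.sqrt(H.sum(1)))                  # vertex degrees
    De = np.diag(1.0 / H.sum(0))                           # hyperedge degrees
    return Dv @ H @ De @ H.T @ Dv @ X @ Theta

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 16))        # 10 patch features of dimension 16
H = knn_hyperedges(X, k=3)
Theta = rng.standard_normal((16, 8))
out = np.maximum(hgnn_layer(X, H, Theta), 0)   # ReLU
print(out.shape)                                # (10, 8)
```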
null
https://papers.miccai.org/miccai-2024/paper/1768_paper.pdf
@InProceedings{ Liu_Structural_MICCAI2024, author = { Liu, Kang and Ma, Zhuoqi and Kang, Xiaolu and Zhong, Zhusi and Jiao, Zhicheng and Baird, Grayson and Bai, Harrison and Miao, Qiguang }, title = { { Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
The automated generation of imaging reports proves invaluable in alleviating the workload of radiologists. A clinically applicable report generation algorithm should demonstrate its effectiveness in producing reports that accurately describe radiology findings and attend to patient-specific indications. In this paper, we introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation. Specifically, we employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports and improve the quality of factual entity sequences. This reduces the noise in the following cross-modal alignment module by aligning X-ray images with factual entity sequences in reports, thereby enhancing the precision of cross-modal alignment and further aiding the model in gradient-free retrieval of similar historical cases. Subsequently, we propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications. This process allows the text decoder to attend to discriminative features of X-ray images, assimilate historical diagnostic information from similar cases, and understand the examination intention of patients. This, in turn, assists in triggering the text decoder to produce high-quality reports. Experiments conducted on MIMIC-CXR validate the superiority of SEI over state-of-the-art approaches on both natural language generation and clinical efficacy metrics. The code is available at https://github.com/mk-runner/SEI.
Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation
[ "Liu, Kang", "Ma, Zhuoqi", "Kang, Xiaolu", "Zhong, Zhusi", "Jiao, Zhicheng", "Baird, Grayson", "Bai, Harrison", "Miao, Qiguang" ]
Conference
2405.14905
[ "https://github.com/mk-runner/SEI" ]
https://huggingface.co/papers/2405.14905
0
0
0
8
[ "MK-runner/SEI" ]
[]
[]
[ "MK-runner/SEI" ]
[]
[]
1
Poster
735
null
https://papers.miccai.org/miccai-2024/paper/3843_paper.pdf
@InProceedings{ Has_SemiSupervised_MICCAI2024, author = { Hasan, Mahmudul and Hu, Xiaoling and Abousamra, Shahira and Prasanna, Prateek and Saltz, Joel and Chen, Chao }, title = { { Semi-Supervised Contrastive VAE for Disentanglement of Digital Pathology Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Despite the strong prediction power of deep learning models, their interpretability remains an important concern. Disentanglement models increase interpretability by decomposing the latent space into interpretable subspaces. In this paper, we propose the first disentanglement method for pathology images. We focus on the task of detecting tumor-infiltrating lymphocytes (TIL). We propose different ideas including cascading disentanglement, novel architecture and reconstruction branches. We achieve superior performance on complex pathology images, thus improving the interpretability and even generalization power of TIL detection deep learning models.
Semi-Supervised Contrastive VAE for Disentanglement of Digital Pathology Images
[ "Hasan, Mahmudul", "Hu, Xiaoling", "Abousamra, Shahira", "Prasanna, Prateek", "Saltz, Joel", "Chen, Chao" ]
Conference
2410.02012
[ "https://github.com/Shauqi/SS-cVAE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
736
null
https://papers.miccai.org/miccai-2024/paper/3024_paper.pdf
@InProceedings{ Zhe_Reducing_MICCAI2024, author = { Zheng, Zixuan and Shi, Yilei and Li, Chunlei and Hu, Jingliang and Zhu, Xiao Xiang and Mou, Lichao }, title = { { Reducing Annotation Burden: Exploiting Image Knowledge for Few-Shot Medical Video Object Segmentation via Spatiotemporal Consistency Relearning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Few-shot video object segmentation aims to reduce annotation costs; however, existing methods still require abundant dense frame annotations for training, which are scarce in the medical domain. We investigate an extremely low-data regime that utilizes annotations from only a few video frames and leverages existing labeled images to minimize costly video annotations. Specifically, we propose a two-phase framework. First, we learn a few-shot segmentation model using labeled images. Subsequently, to improve performance without full supervision, we introduce a spatiotemporal consistency relearning approach on medical videos that enforces consistency between consecutive frames. Constraints are also enforced between the image model and relearning model at both feature and prediction levels. Experiments demonstrate the superiority of our approach over state-of-the-art few-shot segmentation methods. Our model bridges the gap between abundant annotated medical images and scarce, sparsely labeled medical videos to achieve strong video segmentation performance in this low data regime. Code is available at https://github.com/MedAITech/RAB.
Reducing Annotation Burden: Exploiting Image Knowledge for Few-Shot Medical Video Object Segmentation via Spatiotemporal Consistency Relearning
[ "Zheng, Zixuan", "Shi, Yilei", "Li, Chunlei", "Hu, Jingliang", "Zhu, Xiao Xiang", "Mou, Lichao" ]
Conference
[ "https://github.com/MedAITech/RAB" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
737
null
https://papers.miccai.org/miccai-2024/paper/0605_paper.pdf
@InProceedings{ Ada_Physics_MICCAI2024, author = { Adams-Tew, Samuel I. and Odéen, Henrik and Parker, Dennis L. and Cheng, Cheng-Chieh and Madore, Bruno and Payne, Allison and Joshi, Sarang }, title = { { Physics informed neural networks for estimation of tissue properties from multi-echo configuration state MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
This work investigates the use of configuration state imaging together with deep neural networks to develop quantitative MRI techniques for deployment in an interventional setting. A physics modeling technique for inhomogeneous fields and heterogeneous tissues is presented and used to evaluate the theoretical capability of neural networks to estimate parameter maps from configuration state signal data. All tested normalization strategies achieved similar performance in estimating T2 and T2*. Varying network architecture and data normalization had substantial impacts on estimated flip angle and T1, highlighting their importance in developing neural networks to solve these inverse problems. The developed signal modeling technique provides an environment that will enable the development and evaluation of physics-informed machine learning techniques for MR parameter mapping and facilitate the development of quantitative MRI techniques to inform clinical decisions during MR-guided treatments.
Physics informed neural networks for estimation of tissue properties from multi-echo configuration state MRI
[ "Adams-Tew, Samuel I.", "Odéen, Henrik", "Parker, Dennis L.", "Cheng, Cheng-Chieh", "Madore, Bruno", "Payne, Allison", "Joshi, Sarang" ]
Conference
[ "https://github.com/fuslab-uofu/mri-signal-model" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
738
null
https://papers.miccai.org/miccai-2024/paper/3702_paper.pdf
@InProceedings{ Uro_Knowledgegrounded_MICCAI2024, author = { Urooj Khan, Aisha and Garrett, John and Bradshaw, Tyler and Salkowski, Lonie and Jeong, Jiwoong and Tariq, Amara and Banerjee, Imon }, title = { { Knowledge-grounded Adaptation Strategy for Vision-language Models: Building a Unique Case-set for Screening Mammograms for Residents Training } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
A visual-language model (VLM) pre-trained on natural image-text pairs faces a significant barrier when applied to medical contexts due to domain shift. Yet, adapting or fine-tuning these VLMs for medical use presents considerable hurdles, including domain misalignment, limited access to extensive datasets, and high class imbalance. Hence, there is a pressing need for strategies to effectively adapt these VLMs to the medical domain, as such adaptations would prove immensely valuable in healthcare applications. In this study, we propose a framework designed to adeptly tailor VLMs to the medical domain, employing selective sampling and hard-negative mining techniques for enhanced performance in retrieval tasks. We validate the efficacy of our proposed approach by implementing it across two distinct VLMs: the in-domain VLM (MedCLIP) and the out-of-domain VLM (ALBEF). We assess the performance of these models both in their original off-the-shelf state and after undergoing our proposed training strategies, using two extensive datasets containing mammograms and their corresponding reports. Our evaluation spans zero-shot, few-shot, and supervised scenarios. Through our approach, we observe a notable enhancement in Recall@K performance for the image-text retrieval task.
Knowledge-grounded Adaptation Strategy for Vision-language Models: Building a Unique Case-set for Screening Mammograms for Residents Training
[ "Urooj Khan, Aisha", "Garrett, John", "Bradshaw, Tyler", "Salkowski, Lonie", "Jeong, Jiwoong", "Tariq, Amara", "Banerjee, Imon" ]
Conference
[ "https://github.com/aurooj/VLM_SS.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
739
null
https://papers.miccai.org/miccai-2024/paper/0563_paper.pdf
@InProceedings{ Xie_pFLFE_MICCAI2024, author = { Xie, Luyuan and Lin, Manqing and Liu, Siyuan and Xu, ChenMing and Luan, Tianyu and Li, Cong and Fang, Yuejian and Shen, Qingni and Wu, Zhonghai }, title = { { pFLFE: Cross-silo Personalized Federated Learning via Feature Enhancement on Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
In medical image segmentation, personalized cross-silo federated learning (FL) is becoming popular for utilizing varied data across healthcare settings to overcome data scarcity and privacy concerns. However, existing methods often suffer from client drift, leading to inconsistent performance and delayed training. We propose a new framework, Personalized Federated Learning via Feature Enhancement (pFLFE), designed to mitigate these challenges. pFLFE consists of two main stages: feature enhancement and supervised learning. The first stage improves differentiation between foreground and background features, and the second uses these enhanced features for learning from segmentation masks. We also design an alternative training approach that requires fewer communication rounds without compromising segmentation quality, even with limited communication resources. Through experiments on three medical segmentation tasks, we demonstrate that pFLFE outperforms the state-of-the-art methods.
pFLFE: Cross-silo Personalized Federated Learning via Feature Enhancement on Medical Image Segmentation
[ "Xie, Luyuan", "Lin, Manqing", "Liu, Siyuan", "Xu, ChenMing", "Luan, Tianyu", "Li, Cong", "Fang, Yuejian", "Shen, Qingni", "Wu, Zhonghai" ]
Conference
2407.00462
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
740
null
https://papers.miccai.org/miccai-2024/paper/1246_paper.pdf
@InProceedings{ Cai_Masked_MICCAI2024, author = { Cai, Yuxin and Zhang, Jianhai and He, Lei and Ganesh, Aravind and Qiu, Wu }, title = { { Masked Residual Diffusion Probabilistic Model with Regional Asymmetry Prior for Generating Perfusion Maps from Multi-phase CTA } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Multiphase CT angiography (mCTA) has become an important diagnostic tool for acute ischemic stroke (AIS), offering insights into occlusion sites and collateral circulation. However, its broader application is hindered by the need for specialized interpretation, contrasting with the intuitive nature of CT perfusion (CTP). In this work, we propose a novel diffusion based generative model to generate CTP-like perfusion maps, enhancing AIS diagnosis in resource-limited settings. Unlike traditional diffusion models that restore images by predicting the added noise, our approach uses a masked residual diffusion probabilistic model (MRDPM) to recover the residuals between the predicted and target image within brain regions of interests for more detailed generation. To target denoising efforts on relevant regions, noise is selectively added into the brain area only during diffusion. Furthermore, a Multi-scale Asymmetry Prior module and a Brain Region-Aware Network are proposed to incorporate anatomical prior information into the MRDPM to generate finer details while ensuring consistency. Experimental evaluations with 514 patient images demonstrate that our proposed method is able to generate high quality CTP-like perfusion maps, outperforming several other generative models regarding the metrics of MAE, LPIPS, SSIM, and PSNR. The code is publicly available at https://github.com/UniversalCAI/MRDPM-with-RAP.
Masked Residual Diffusion Probabilistic Model with Regional Asymmetry Prior for Generating Perfusion Maps from Multi-phase CTA
[ "Cai, Yuxin", "Zhang, Jianhai", "He, Lei", "Ganesh, Aravind", "Qiu, Wu" ]
Conference
[ "https://github.com/UniversalCAI/MRDPM-with-RAP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
741
null
https://papers.miccai.org/miccai-2024/paper/2957_paper.pdf
@InProceedings{ Lu_Spot_MICCAI2024, author = { Lu, Zilin and Xie, Yutong and Zeng, Qingjie and Lu, Mengkang and Wu, Qi and Xia, Yong }, title = { { Spot the Difference: Difference Visual Question Answering with Residual Alignment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Difference Visual Question Answering (DiffVQA) introduces a new task aimed at understanding and responding to questions regarding the disparities observed between two images. Unlike traditional medical VQA tasks, DiffVQA closely mirrors the diagnostic procedures of radiologists, who frequently conduct longitudinal comparisons of images taken at different time points for a given patient. This task accentuates the discrepancies between images captured at distinct temporal intervals. To better address the variations, this paper proposes a novel Residual Alignment model (ReAl) tailored for DiffVQA. ReAl is designed to produce flexible and accurate answers by analyzing the discrepancies in chest X-ray images of the same patient across different time points. Compared to the previous method, ReAl additionally adds a residual input branch, into which the residual of the two images is fed. Additionally, a Residual Feature Alignment (RFA) module is introduced to ensure that ReAl effectively captures and learns the disparities between corresponding images. Experimental evaluations conducted on the MIMIC-Diff-VQA dataset demonstrate the superiority of ReAl over previous state-of-the-art methods, consistently achieving better performance. Ablation experiments further validate the effectiveness of the RFA module in enhancing the model’s attention to differences. The code implementation of the proposed approach will be made available.
Spot the Difference: Difference Visual Question Answering with Residual Alignment
[ "Lu, Zilin", "Xie, Yutong", "Zeng, Qingjie", "Lu, Mengkang", "Wu, Qi", "Xia, Yong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
742
null
https://papers.miccai.org/miccai-2024/paper/1756_paper.pdf
@InProceedings{ Qiu_Towards_MICCAI2024, author = { Qiu, Xinmei and Wang, Fan and Sun, Yongheng and Lian, Chunfeng and Ma, Jianhua }, title = { { Towards Graph Neural Networks with Domain-Generalizable Explainability for fMRI-Based Brain Disorder Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Graph neural networks (GNNs) represent a cutting-edge methodology in diagnosing brain disorders via fMRI data. Explainability and generalizability are two critical issues of GNNs for fMRI-based diagnoses, considering the high complexity of functional brain networks and the strong variations in fMRI data across different clinical centers. Although there have been many studies on GNNs’ explainability and generalizability, few have addressed both aspects simultaneously. In this paper, we unify these two issues and revisit the domain generalization (DG) of fMRI-based diagnoses from the view of explainability. That is, we aim to learn domain-generalizable explanation factors to enhance center-agnostic graph representation learning and therefore brain disorder diagnoses. To this end, a specialized meta-learning framework coupled with explainability-generalizable (XG) regularizations is designed to learn diagnostic GNN models (termed XG-GNN) from fMRI BOLD signals. Our XG-GNN features the ability to build nonlinear functional networks in a task-oriented fashion. More importantly, the group-wise differences of such learned individual networks can be stably captured and maintained for unseen fMRI centers to jointly boost the DG of diagnostic explainability and accuracy. Experimental results on the ABIDE dataset demonstrate the effectiveness of our XG-GNN. Our source code will be publicly released.
Towards Graph Neural Networks with Domain-Generalizable Explainability for fMRI-Based Brain Disorder Diagnosis
[ "Qiu, Xinmei", "Wang, Fan", "Sun, Yongheng", "Lian, Chunfeng", "Ma, Jianhua" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
743
null
https://papers.miccai.org/miccai-2024/paper/1074_paper.pdf
@InProceedings{ Xu_DiRecT_MICCAI2024, author = { Xu, Xuanang and Lee, Jungwook and Lampen, Nathan and Kim, Daeseung and Kuang, Tianshu and Deng, Hannah H. and Liebschner, Michael A. K. and Gateno, Jaime and Yan, Pingkun }, title = { { DiRecT: Diagnosis and Reconstruction Transformer for Mandibular Deformity Assessment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
In the realm of orthognathic surgical planning, the precision of mandibular deformity diagnosis is paramount to ensure favorable treatment outcomes. Traditional methods, reliant on the meticulous identification of bony landmarks via radiographic imaging techniques such as cone beam computed tomography (CBCT), are both resource-intensive and costly. In this paper, we present a novel way to diagnose mandibular deformities in which we harness facial landmarks detectable by off-the-shelf generic models, thus eliminating the necessity for bony landmark identification. We propose the Diagnosis-Reconstruction Transformer (DiRecT), an advanced network that exploits the automatically detected 3D facial landmarks to assess mandibular deformities. DiRecT’s training is augmented with an auxiliary task of landmark reconstruction and is further enhanced by a teacher-student semi-supervised learning framework, enabling effective utilization of both labeled and unlabeled data to learn discriminative representations. Our study encompassed a comprehensive set of experiments utilizing an in-house clinical dataset of 101 subjects, alongside a public non-medical dataset of 1,519 subjects. The experimental results illustrate that our method markedly streamlines the mandibular deformity diagnostic workflow and exhibits promising diagnostic performance when compared with the baseline methods, which demonstrates DiRecT’s potential as an alternative to conventional diagnostic protocols in the field of orthognathic surgery. Source code is publicly available at https://github.com/RPIDIAL/DiRecT.
DiRecT: Diagnosis and Reconstruction Transformer for Mandibular Deformity Assessment
[ "Xu, Xuanang", "Lee, Jungwook", "Lampen, Nathan", "Kim, Daeseung", "Kuang, Tianshu", "Deng, Hannah H.", "Liebschner, Michael A. K.", "Gateno, Jaime", "Yan, Pingkun" ]
Conference
[ "https://github.com/RPIDIAL/DiRecT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
744
null
https://papers.miccai.org/miccai-2024/paper/1885_paper.pdf
@InProceedings{ Son_Progressive_MICCAI2024, author = { Son, Moo Hyun and Bae, Juyoung and Tong, Elizabeth and Chen, Hao }, title = { { Progressive Knowledge Distillation for Automatic Perfusion Parameter Maps Generation from Low Temporal Resolution CT Perfusion Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Perfusion Parameter Maps (PPMs), generated from Computed Tomography Perfusion (CTP) scans, deliver detailed measurements of cerebral blood flow and volume, crucial for the early identification and strategic treatment of cerebrovascular diseases. However, the acquisition of PPMs involves significant challenges. Firstly, the accuracy of these maps heavily relies on the manual selection of Arterial Input Function (AIF) information. Secondly, patients are subjected to considerable radiation exposure during the scanning process. In response, previous research has attempted to automate AIF selection and reduce the radiation exposure of CTP by lowering temporal resolution, utilizing deep learning to predict PPMs from automated AIF selection and temporal resolutions as low as 1/3. However, the effectiveness of these approaches remains marginally significant. In this paper, we push the limits and propose a novel framework, Progressive Knowledge Distillation (PKD), to generate accurate PPMs from 1/16 standard temporal resolution CTP scans. PKD uses a series of teacher networks, each trained on a different temporal resolution, for knowledge distillation. Initially, the student network learns from a teacher with low temporal resolution; as the student is trained, the teacher is scaled to a higher temporal resolution. This progressive approach aims to reduce the large initial knowledge gap between the teacher and the student. Experimental results demonstrate that PKD can generate PPMs comparable to the full-resolution ground truth, outperforming current deep learning frameworks.
Progressive Knowledge Distillation for Automatic Perfusion Parameter Maps Generation from Low Temporal Resolution CT Perfusion Images
[ "Son, Moo Hyun", "Bae, Juyoung", "Tong, Elizabeth", "Chen, Hao" ]
Conference
[ "https://github.com/mhson-kyle/progressive-kd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
745
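Record 745 above distils a low-temporal-resolution student from a sequence of teachers trained at progressively higher temporal resolutions. A skeletal sketch of that schedule is below; the stand-in networks, plain MSE distillation term, and stage lengths are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn as nn

def progressive_distillation(student, teachers, inputs, targets, epochs_per_stage=2):
    """Distil from teachers ordered from low to high temporal resolution."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    for stage, teacher in enumerate(teachers):          # teacher resolution increases per stage
        teacher.eval()
        for _ in range(epochs_per_stage):
            for x, y in zip(inputs, targets):
                with torch.no_grad():
                    soft = teacher(x)                    # teacher's predicted perfusion map
                pred = student(x)
                loss = mse(pred, y) + mse(pred, soft)    # ground truth + distillation terms
                opt.zero_grad(); loss.backward(); opt.step()
        print(f"finished stage {stage}")
    return student

# Hypothetical stand-ins: low-resolution CTP input -> perfusion map output.
make_net = lambda: nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
student = make_net()
teachers = [make_net() for _ in range(3)]   # assumed to be pre-trained at increasing resolutions
inputs = [torch.randn(2, 1, 32, 32) for _ in range(4)]
targets = [torch.randn(2, 1, 32, 32) for _ in range(4)]
progressive_distillation(student, teachers, inputs, targets)
```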
null
https://papers.miccai.org/miccai-2024/paper/3209_paper.pdf
@InProceedings{ Rid_HuLP_MICCAI2024, author = { Ridzuan, Muhammad and Shaaban, Mai A. and Saeed, Numan and Sobirov, Ikboljon and Yaqub, Mohammad }, title = { { HuLP: Human-in-the-Loop for Prognosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
This paper introduces HuLP, a Human-in-the-Loop for Prognosis model designed to enhance the reliability and interpretability of prognostic models in clinical contexts, especially when faced with the complexities of missing covariates and outcomes. HuLP offers an innovative approach that enables human expert intervention, empowering clinicians to interact with and correct models’ predictions, thus fostering collaboration between humans and AI models to produce more accurate prognoses. Additionally, HuLP addresses the challenges of missing data by utilizing neural networks and providing a tailored methodology that effectively handles missing data. Traditional methods often struggle to capture the nuanced variations within patient populations, leading to compromised prognostic predictions. HuLP imputes missing covariates based on imaging features, aligning more closely with clinician workflows and enhancing reliability. We conduct our experiments on two real-world, publicly available medical datasets to demonstrate the superiority and competitiveness of HuLP. Our code is available at https://github.com/BioMedIA-MBZUAI/HuLP.
HuLP: Human-in-the-Loop for Prognosis
[ "Ridzuan, Muhammad", "Shaaban, Mai A.", "Saeed, Numan", "Sobirov, Ikboljon", "Yaqub, Mohammad" ]
Conference
2403.13078
[ "https://github.com/BioMedIA-MBZUAI/HuLP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
746
null
https://papers.miccai.org/miccai-2024/paper/3725_paper.pdf
@InProceedings{ Yua_Longitudinally_MICCAI2024, author = { Yuan, Xinrui and Cheng, Jiale and Hu, Dan and Wu, Zhengwang and Wang, Li and Lin, Weili and Li, Gang }, title = { { Longitudinally Consistent Individualized Prediction of Infant Cortical Morphological Development } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Neurodevelopment is exceptionally dynamic and critical during infancy, as many neurodevelopmental disorders emerge from abnormal brain development during this stage. Obtaining a full trajectory of neurodevelopment from existing incomplete longitudinal data can enrich our limited understanding of normal early brain development and help identify neurodevelopmental disorders. Although many regression models and deep learning methods have been proposed for longitudinal prediction based on incomplete datasets, they have two major drawbacks. First, regression models suffer from strict requirements on input and output time points, which is less useful in practical scenarios. Second, although existing deep learning methods could predict cortical development at multiple ages, they predicted missing data independently with each available scan, yielding inconsistent predictions for a target time point given multiple inputs, which ignores longitudinal dependencies and introduces ambiguity in practical applications. To this end, we emphasize temporal consistency and develop a novel, flexible framework named longitudinally consistent triplet disentanglement autoencoder to predict an individualized longitudinal cortical developmental trajectory based on each available input by encouraging the similarity among trajectories with a dynamic time-warping loss. Specifically, to achieve individualized prediction, we employ a surface-based autoencoder, which decomposes the encoded latent features into identity-related and age-related features with an age estimation task and an identity similarity loss as supervision. These identity-related features are further combined with age conditions in the latent space to generate longitudinal developmental trajectories with the decoder. Experiments on predicting longitudinal infant cortical property maps validate the superior longitudinal consistency and exactness of our results compared to those of the baselines.
Longitudinally Consistent Individualized Prediction of Infant Cortical Morphological Development
[ "Yuan, Xinrui", "Cheng, Jiale", "Hu, Dan", "Wu, Zhengwang", "Wang, Li", "Lin, Weili", "Li, Gang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
747
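Record 747 above encourages similarity among predicted developmental trajectories with a dynamic-time-warping loss. For reference, the classic DTW distance between two trajectories of feature vectors can be computed as below; this is the standard non-differentiable recursion, not the soft, trainable variant a network loss would typically use.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
traj_from_scan1 = rng.standard_normal((6, 10))   # 6 predicted ages x 10 cortical features
traj_from_scan2 = traj_from_scan1[::2]           # same trajectory, sampled more sparsely
print(dtw_distance(traj_from_scan1, traj_from_scan2))   # distance between the two trajectories
```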
null
https://papers.miccai.org/miccai-2024/paper/0345_paper.pdf
@InProceedings{ Din_Physicalpriorsguided_MICCAI2024, author = { Ding, Zhengyao and Hu, Yujian and Zhang, Hongkun and Wu, Fei and Yang, Shifeng and Du, Xiaolong and Xiang, Yilang and Li, Tian and Chu, Xuesen and Huang, Zhengxing }, title = { { Physical-priors-guided Aortic Dissection Detection using Non-Contrast-Enhanced CT images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Aortic dissection (AD) is a severe cardiovascular emergency requiring prompt and precise diagnosis for better survival chances. Given the limited use of Contrast-Enhanced Computed Tomography (CE-CT) in routine clinical screenings, this study presents a new method that enhances the diagnostic process using Non-Contrast-Enhanced CT (NCE-CT) images. In detail, we integrate biomechanical and hemodynamic physical priors into a 3D U-Net model and utilize a transformer encoder to extract superior global features, along with a cGAN-inspired discriminator for the generation of realistic CE-CT-like images. The proposed model not only innovates AD detection on NCE-CT but also provides a safer alternative for patients contraindicated for contrast agents. Comparative evaluations and ablation studies against existing methods demonstrate the superiority of our model in terms of recall, AUC, and F1 score metrics standing at 0.882, 0.855, and 0.829, respectively. Incorporating physical priors into diagnostics offers a significant, nuanced, and non-invasive advancement, seamlessly integrating medical imaging with the dynamic aspects of human physiology. Our code is available at https://github.com/Yukui-1999/PIAD.
Physical-priors-guided Aortic Dissection Detection using Non-Contrast-Enhanced CT images
[ "Ding, Zhengyao", "Hu, Yujian", "Zhang, Hongkun", "Wu, Fei", "Yang, Shifeng", "Du, Xiaolong", "Xiang, Yilang", "Li, Tian", "Chu, Xuesen", "Huang, Zhengxing" ]
Conference
[ "https://github.com/Yukui-1999/PIAD" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
748
null
https://papers.miccai.org/miccai-2024/paper/0877_paper.pdf
@InProceedings{ Li_KARGEN_MICCAI2024, author = { Li, Yingshu and Wang, Zhanyu and Liu, Yunyi and Wang, Lei and Liu, Lingqiao and Zhou, Luping }, title = { { KARGEN: Knowledge-enhanced Automated Radiology Report Generation Using Large Language Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Harnessing the robust capabilities of Large Language Models (LLMs) for narrative generation, logical reasoning, and common-sense knowledge integration, this study delves into utilizing LLMs to enhance automated radiology report generation (R2Gen). Despite the wealth of knowledge within LLMs, efficiently triggering relevant knowledge within these large models for specific tasks like R2Gen poses a critical research challenge. This paper presents KARGEN, a Knowledge-enhanced Automated radiology Report GENeration framework based on LLMs. Utilizing a frozen LLM to generate reports, the framework integrates a knowledge graph to unlock chest disease-related knowledge within the LLM to enhance the clinical utility of generated reports. This is achieved by leveraging the knowledge graph to distill disease-related features in a designed way. Since a radiology report encompasses both normal and disease-related findings, the extracted graph-enhanced disease-related features are integrated with regional image features, attending to both aspects. We explore two fusion methods to automatically prioritize and select the most relevant features. The fused features are employed by LLM to generate reports that are more sensitive to diseases and of improved quality. Our approach demonstrates promising results on the MIMIC-CXR and IU-Xray datasets. Our code will be available on GitHub.
KARGEN: Knowledge-enhanced Automated Radiology Report Generation Using Large Language Models
[ "Li, Yingshu", "Wang, Zhanyu", "Liu, Yunyi", "Wang, Lei", "Liu, Lingqiao", "Zhou, Luping" ]
Conference
2409.05370
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
749
null
https://papers.miccai.org/miccai-2024/paper/0441_paper.pdf
@InProceedings{ Hun_CrossSlice_MICCAI2024, author = { Hung, Alex Ling Yu and Zheng, Haoxin and Zhao, Kai and Pang, Kaifeng and Terzopoulos, Demetri and Sung, Kyunghyun }, title = { { Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Current deep learning-based models typically analyze medical images in either 2D or 3D, thereby either disregarding volumetric information or suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, providing an accurate uncertainty estimation is beneficial to clinicians, as it indicates how confident a model is about its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection in MR images of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.
Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection
[ "Hung, Alex Ling Yu", "Zheng, Haoxin", "Zhao, Kai", "Pang, Kaifeng", "Terzopoulos, Demetri", "Sung, Kyunghyun" ]
Conference
2407.01146
[ "https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
750
null
https://papers.miccai.org/miccai-2024/paper/1149_paper.pdf
@InProceedings{ Yan_Cardiovascular_MICCAI2024, author = { Yang, Zefan and Zhang, Jiajin and Wang, Ge and Kalra, Mannudeep K. and Yan, Pingkun }, title = { { Cardiovascular Disease Detection from Multi-View Chest X-rays with BI-Mamba } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Accurate prediction of Cardiovascular disease (CVD) risk in medical imaging is central to effective patient health management. Previous studies have demonstrated that imaging features in computed tomography (CT) can help predict CVD risk. However, CT entails notable radiation exposure, which may result in adverse health effects for patients. In contrast, chest X-ray emits significantly lower levels of radiation, offering a safer option. This rationale motivates our investigation into the feasibility of using chest X-ray for predicting CVD risk. Convolutional Neural Networks (CNNs) and Transformers are two established network architectures for computer-aided diagnosis. However, they struggle to model very high resolution chest X-rays due to the lack of large-context modeling power or quadratic time complexity. Inspired by state space sequence models (SSMs), a new class of network architectures with sequence modeling power competitive with Transformers and linear time complexity, we propose Bidirectional Image Mamba (BI-Mamba) to complement the unidirectional SSMs with opposite-directional information. BI-Mamba utilizes parallel forward and backward blocks to encode long-range dependencies of multi-view chest X-rays. We conduct extensive experiments on images from 10,395 subjects in the National Lung Screening Trial (NLST). Results show that BI-Mamba outperforms ResNet-50 and ViT-S with comparable parameter size, and saves a significant amount of GPU memory during training. Besides, BI-Mamba achieves promising performance compared with the previous state of the art in CT, unraveling the potential of chest X-ray for CVD risk prediction.
Cardiovascular Disease Detection from Multi-View Chest X-rays with BI-Mamba
[ "Yang, Zefan", "Zhang, Jiajin", "Wang, Ge", "Kalra, Mannudeep K.", "Yan, Pingkun" ]
Conference
2405.18533
[ "https://github.com/RPIDIAL/BI-Mamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
751
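Record 751 above combines a forward and a backward scan over the image token sequence. The numpy sketch below illustrates the bidirectional idea with a plain linear recurrence (h_t = a*h_{t-1} + b*x_t) run in both directions and summed; the real model's selective, input-dependent parameters and Mamba blocks are not reproduced here, and the decay constants are arbitrary.

```python
import numpy as np

def linear_scan(x, a=0.9, b=1.0):
    """Simple linear recurrence h_t = a*h_{t-1} + b*x_t along the sequence axis."""
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = a * h + b * xt
        out[t] = h
    return out

def bidirectional_scan(tokens):
    """Fuse a forward pass and a backward pass over the token sequence."""
    forward = linear_scan(tokens)
    backward = linear_scan(tokens[::-1])[::-1]   # scan the reversed sequence, then flip back
    return forward + backward

tokens = np.random.default_rng(0).standard_normal((16, 32))   # 16 image tokens, dim 32
features = bidirectional_scan(tokens)
print(features.shape)    # (16, 32): each token now aggregates context from both directions
```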
null
https://papers.miccai.org/miccai-2024/paper/3064_paper.pdf
@InProceedings{ Zhu_Stealing_MICCAI2024, author = { Zhu, Meilu and Yang, Qiushi and Gao, Zhifan and Liu, Jun and Yuan, Yixuan }, title = { { Stealing Knowledge from Pre-trained Language Models for Federated Classifier Debiasing } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Federated learning (FL) has shown great potential in medical image computing since it provides a decentralized learning paradigm that allows multiple clients to train a model collaboratively without privacy leakage. However, current studies have shown that heterogeneous data of clients causes biased classifiers of local models during training, leading to the performance degradation of a federation system. In experiments, we surprisingly found that continuously freezing local classifiers can significantly improve the performance of the baseline FL method (FedAvg) for heterogeneous data. This observation motivates us to pre-construct a high-quality initial classifier for local models and freeze it during local training to avoid classifier biases. With this insight, we propose a novel approach named Federated Classifier deBiasing (FedCB) to solve the classifier biases problem in heterogeneous federated learning. The core idea behind FedCB is to exploit linguistic knowledge from pre-trained language models (PLMs) to construct high-quality local classifiers. Specifically, FedCB first collects the class concepts from clients and then uses a set of prompts to contextualize them, yielding language descriptions of these concepts. These descriptions are fed into a pre-trained language model to obtain their text embeddings. The generated embeddings are sent to clients to estimate the distribution of each category in the semantic space. Regarding these distributions as the local classifiers, we perform the alignment between the image representations and the corresponding semantic distribution by minimizing an upper bound of the expected cross-entropy loss. Extensive experiments on public datasets demonstrate the superior performance of FedCB compared to state-of-the-art methods. The source code is available at https://github.com/CUHK-AIM-Group/FedCB.
Stealing Knowledge from Pre-trained Language Models for Federated Classifier Debiasing
[ "Zhu, Meilu", "Yang, Qiushi", "Gao, Zhifan", "Liu, Jun", "Yuan, Yixuan" ]
Conference
[ "https://github.com/CUHK-AIM-Group/FedCB" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
752
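Record 752 above replaces each client's trainable classifier with frozen class prototypes built from a pre-trained language model's embeddings of prompted class descriptions. The sketch below illustrates that frozen-prototype classification step with random stand-in text embeddings; obtaining real PLM embeddings, the temperature value, and the paper's upper-bound cross-entropy objective are outside this toy example.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for PLM embeddings of prompted class descriptions
# (in FedCB these would come from a frozen pre-trained language model).
torch.manual_seed(0)
class_text_emb = torch.randn(5, 256)                   # 5 classes, embedding dim 256
prototypes = F.normalize(class_text_emb, dim=-1)       # frozen "classifier" weights

image_encoder = torch.nn.Linear(512, 256)              # trainable local feature extractor
opt = torch.optim.SGD(image_encoder.parameters(), lr=0.1)

images = torch.randn(8, 512)                            # one local client batch
labels = torch.randint(0, 5, (8,))

feats = F.normalize(image_encoder(images), dim=-1)
logits = feats @ prototypes.t() / 0.07                  # cosine similarity with temperature
loss = F.cross_entropy(logits, labels)                  # only the encoder receives gradients
loss.backward()
opt.step()
print(float(loss))
```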
null
https://papers.miccai.org/miccai-2024/paper/3580_paper.pdf
@InProceedings{ Sin_CoBooM_MICCAI2024, author = { Singh, Azad and Mishra, Deepak }, title = { { CoBooM: Codebook Guided Bootstrapping for Medical Image Representation Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Self-supervised learning (SSL) has emerged as a promising paradigm for medical image analysis by harnessing unannotated data. Despite their potential, the existing SSL approaches overlook the high anatomical similarity inherent in medical images. This makes it challenging for SSL methods to capture diverse semantic content in medical images consistently. This work introduces a novel and generalized solution that implicitly exploits anatomical similarities by integrating codebooks in SSL. The codebook serves as a concise and informative dictionary of visual patterns, which not only aids in capturing nuanced anatomical details but also facilitates the creation of robust and generalized feature representations. In this context, we propose CoBooM, a novel framework for self-supervised medical image learning by integrating continuous and discrete representations. The continuous component ensures the preservation of fine-grained details, while the discrete aspect facilitates coarse-grained feature extraction through the structured embedding space. To understand the effectiveness of CoBooM, we conduct a comprehensive evaluation of various medical datasets encompassing chest X-rays and fundus images. The experimental results reveal a significant performance gain in classification and segmentation tasks.
CoBooM: Codebook Guided Bootstrapping for Medical Image Representation Learning
[ "Singh, Azad", "Mishra, Deepak" ]
Conference
2408.04262
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
753
null
https://papers.miccai.org/miccai-2024/paper/1629_paper.pdf
@InProceedings{ Hua_LIDIA_MICCAI2024, author = { Huang, Wei and Liu, Wei and Zhang, Xiaoming and Yin, Xiaoli and Han, Xu and Li, Chunli and Gao, Yuan and Shi, Yu and Lu, Le and Zhang, Ling and Zhang, Lei and Yan, Ke }, title = { { LIDIA: Precise Liver Tumor Diagnosis on Multi-Phase Contrast-Enhanced CT via Iterative Fusion and Asymmetric Contrastive Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
The early detection and precise diagnosis of liver tumors are tasks of critical clinical value, yet they pose significant challenges due to the high heterogeneity and variability of liver tumors. In this work, a precise LIver tumor DIAgnosis network on multi-phase contrast-enhanced CT, named LIDIA, is proposed for real-world scenarios. To fully utilize all available phases in contrast-enhanced CT, LIDIA first employs an iterative fusion module to aggregate variable numbers of image phases, thereby capturing the features of lesions at different phases for better tumor diagnosis. To effectively mitigate the high heterogeneity problem of liver tumors, LIDIA incorporates asymmetric contrastive learning to enhance the discriminability between different classes. To evaluate our method, we constructed a large-scale dataset comprising 1,921 patients and 8,138 lesions. LIDIA has achieved an average AUC of 93.6% across eight different types of lesions, demonstrating its effectiveness.
LIDIA: Precise Liver Tumor Diagnosis on Multi-Phase Contrast-Enhanced CT via Iterative Fusion and Asymmetric Contrastive Learning
[ "Huang, Wei", "Liu, Wei", "Zhang, Xiaoming", "Yin, Xiaoli", "Han, Xu", "Li, Chunli", "Gao, Yuan", "Shi, Yu", "Lu, Le", "Zhang, Ling", "Zhang, Lei", "Yan, Ke" ]
Conference
2407.13217
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
754
null
https://papers.miccai.org/miccai-2024/paper/0372_paper.pdf
@InProceedings{ Che_Detecting_MICCAI2024, author = { Chen, Jianan and Ramanathan, Vishwesh and Xu, Tony and Martel, Anne L. }, title = { { Detecting noisy labels with repeated cross-validations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Machine learning models experience deteriorated performance when trained in the presence of noisy labels. This is particularly problematic for medical tasks, such as survival prediction, which typically face high label noise complexity with few clear-cut solutions. Inspired by the large fluctuations across folds in the cross-validation performance of survival analyses, we design Monte-Carlo experiments to show that such fluctuation could be caused by label noise. We propose two novel and straightforward label noise detection algorithms that effectively identify noisy examples by pinpointing the samples that more frequently contribute to inferior cross-validation results. We first introduce Repeated Cross-Validation (ReCoV), a parameter-free label noise detection algorithm that is robust to model choice. We further develop fastReCoV, a less robust but more tractable and efficient variant of ReCoV suitable for deep learning applications. Through extensive experiments, we show that ReCoV and fastReCoV achieve state-of-the-art label noise detection performance in a wide range of modalities, models and tasks, including survival analysis, which has yet to be addressed in the literature. Our code and data are publicly available at https://github.com/GJiananChen/ReCoV.
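As an illustration of the repeated cross-validation idea summarized above (a sketch under stated assumptions, not the released ReCoV implementation), the snippet below repeats K-fold cross-validation many times, records which samples fall into the worst-scoring validation fold of each run, and flags samples that land there unusually often; the classifier and hyperparameters are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def repeated_cv_noise_scores(X, y, n_runs=100, n_splits=10, seed=0):
    # Fraction of runs in which each sample belonged to the worst validation fold.
    rng = np.random.RandomState(seed)
    counts = np.zeros(len(y))
    for _ in range(n_runs):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=rng.randint(1 << 30))
        fold_members, fold_scores = [], []
        for train_idx, val_idx in kf.split(X):
            model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            fold_members.append(val_idx)
            fold_scores.append(model.score(X[val_idx], y[val_idx]))
        counts[fold_members[int(np.argmin(fold_scores))]] += 1   # members of the worst fold get a vote
    return counts / n_runs

# Samples whose score greatly exceeds the chance level (1 / n_splits) are candidates for label noise.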
Detecting noisy labels with repeated cross-validations
[ "Chen, Jianan", "Ramanathan, Vishwesh", "Xu, Tony", "Martel, Anne L." ]
Conference
[ "https://github.com/GJiananChen/ReCoV" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
755
null
https://papers.miccai.org/miccai-2024/paper/0962_paper.pdf
@InProceedings{ Yu_PatchSlide_MICCAI2024, author = { Yu, Jiahui and Wang, Xuna and Ma, Tianyu and Li, Xiaoxiao and Xu, Yingke }, title = { { Patch-Slide Discriminative Joint Learning for Weakly-Supervised Whole Slide Image Representation and Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
In computational pathology, Multiple Instance Learning (MIL) is widely applied for classifying Giga-pixel whole slide images (WSIs) with only image-level labels. Due to the size and prominence of positive areas varying significantly across different WSIs, it is difficult for existing methods to learn task-specific features accurately. Additionally, subjective label noise usually affects deep learning frameworks, further hindering the mining of discriminative features. To address this problem, we propose an effective theory that optimizes patch and WSI feature extraction jointly, enhancing feature discriminability. Powered by this theory, we develop an angle-guided MIL framework called PSJA-MIL, effectively leveraging features at both levels. We also focus on eliminating noise between instances and emphasizing feature enhancement within WSIs. We evaluate our approach on Camelyon17 and TCGA-Liver datasets, comparing it against state-of-the-art methods. The experimental results show significant improvements in accuracy and generalizability, surpassing the latest methods by more than 2%. Code will be available at: https://github.com/sm8754/PSJAMIL.
Patch-Slide Discriminative Joint Learning for Weakly-Supervised Whole Slide Image Representation and Classification
[ "Yu, Jiahui", "Wang, Xuna", "Ma, Tianyu", "Li, Xiaoxiao", "Xu, Yingke" ]
Conference
[ "https://github.com/sm8754/PSJAMIL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
756
null
https://papers.miccai.org/miccai-2024/paper/2874_paper.pdf
@InProceedings{ Sir_CenterlineDiameters_MICCAI2024, author = { Sirazitdinov, Ilyas and Dylov, Dmitry V. }, title = { { Centerline-Diameters Data Structure for Interactive Segmentation of Tube-shaped Objects } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Interactive segmentation techniques are in high demand in medical imaging, where user-machine interactions address the imperfections of a model and speed up manual annotation. All recently proposed interactive approaches have kept the segmentation mask at their core, an inefficient trait if complex elongated shapes, such as wires, catheters, or veins, need to be segmented. Herein, we propose a new data structure and the corresponding click encoding scheme for the interactive segmentation of such elongated objects, without the masks. Our data structure is based on a set of centerline points and diameters, providing a good trade-off between filament-free contouring and the pixel-wise accuracy of the prediction. Given a simple, intuitive, and interpretable setup, the new data structure can be readily integrated into existing interactive segmentation frameworks.
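To make the data structure concrete, here is a minimal sketch (an assumption-laden illustration, not the paper's code) that reconstructs a 2D segmentation mask from a centerline polyline and per-point diameters by sweeping interpolated disks along the curve; the (row, col) point layout and the sampling density are illustrative choices.

import numpy as np

def centerline_to_mask(points, diameters, shape):
    # points: (N, 2) array of (row, col) centerline coordinates; diameters: (N,) in pixels.
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for i in range(len(points) - 1):
        p0, p1 = points[i].astype(float), points[i + 1].astype(float)
        d0, d1 = diameters[i], diameters[i + 1]
        steps = max(2, int(np.hypot(*(p1 - p0))) + 1)   # dense enough that consecutive disks overlap
        for t in np.linspace(0.0, 1.0, steps):
            c = (1 - t) * p0 + t * p1                   # interpolated centerline point
            r = 0.5 * ((1 - t) * d0 + t * d1)           # interpolated radius
            mask |= (yy - c[0]) ** 2 + (xx - c[1]) ** 2 <= r ** 2
    return mask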
Centerline-Diameters Data Structure for Interactive Segmentation of Tube-shaped Objects
[ "Sirazitdinov, Ilyas", "Dylov, Dmitry V." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
757
null
https://papers.miccai.org/miccai-2024/paper/0173_paper.pdf
@InProceedings{ Yan_Advancing_MICCAI2024, author = { Yang, Yanwu and Chen, Hairui and Hu, Jiesi and Guo, Xutao and Ma, Ting }, title = { { Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Recent advancements in deep learning have reshaped the development of brain imaging analysis. However, several challenges remain, such as heterogeneity, individual variations, and the contradiction between the high dimensionality and small size of brain imaging datasets. These issues complicate the learning process, preventing models from capturing intrinsic, meaningful patterns and potentially leading to suboptimal performance due to biases and overfitting. Curriculum learning (CL) presents a promising solution by organizing training examples from simple to complex, mimicking the human learning process, and potentially fostering the development of more robust and accurate models. Despite its potential, the inherent limitations posed by small initial training datasets present significant challenges, including overfitting and poor generalization. In this paper, we introduce the Progressive Self-Paced Distillation (PSPD) framework, employing an adaptive and progressive pacing and distillation mechanism. This allows for dynamic curriculum adjustments based on the states of both past and present models. The past model serves as a teacher, guiding the current model with gradually refined curriculum knowledge and helping prevent the loss of previously acquired knowledge. We validate PSPD’s efficacy and adaptability across various convolutional neural networks using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, underscoring its superiority in enhancing model performance and generalization capabilities.
Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning
[ "Yang, Yanwu", "Chen, Hairui", "Hu, Jiesi", "Guo, Xutao", "Ma, Ting" ]
Conference
2407.16128
[ "https://github.com/Hrychen7/PSPD" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
758
null
https://papers.miccai.org/miccai-2024/paper/1765_paper.pdf
@InProceedings{ Sha_Confidence_MICCAI2024, author = { Sharma, Saurabh and Kumar, Atul and Chandra, Joydeep }, title = { { Confidence Matters: Enhancing Medical Image Classification Through Uncertainty-Driven Contrastive Self-Distillation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
The scarcity of data in medical image classification using deep learning often leads to overfitting the training data. Research indicates that self-distillation techniques, particularly those employing mean teacher ensembling, can alleviate this issue. However, directly transferring knowledge distillation (KD) from computer vision to medical image classification yields subpar results due to higher intra-class variance and class imbalance in medical images. This can cause supervised and contrastive learning-based solutions to become biased towards the majority class, resulting in misclassification. To address this, we propose UDCD, an uncertainty-driven contrastive learning-based self-distillation framework that regulates the transfer of contrastive and supervised knowledge, ensuring only relevant knowledge is transferred from the teacher to the student for fine-grained knowledge transfer. By controlling the outcome of the transferable contrastive and teacher’s supervised knowledge based on confidence levels, our framework better classifies images under higher intra- and inter-relation constraints with class imbalance arising from data scarcity, distilling only useful knowledge to the student. Extensive experiments conducted on benchmark datasets such as HAM10000 and APTOS validate the superiority of our proposed method. The code is available at https://github.com/philsaurabh/UDCD_MICCAI.
Confidence Matters: Enhancing Medical Image Classification Through Uncertainty-Driven Contrastive Self-Distillation
[ "Sharma, Saurabh", "Kumar, Atul", "Chandra, Joydeep" ]
Conference
[ "https://github.com/philsaurabh/UDCD_MICCAI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
759
null
https://papers.miccai.org/miccai-2024/paper/3080_paper.pdf
@InProceedings{ Zha_Knowledgedriven_MICCAI2024, author = { Zhang, Yupei and Wang, Xiaofei and Meng, Fangliangzi and Tang, Jin and Li, Chao }, title = { { Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Most recently, molecular pathology has played a crucial role in cancer diagnosis and prognosis assessment. Deep learning-based methods have been proposed for integrating multi-modal genomic and histology data for efficient molecular pathology analysis. However, current multi-modal approaches simply treat each modality equally, ignoring modality-unique information and the complex correlation across modalities, which hinders effective multi-modal feature representation for downstream tasks. Besides, considering the intrinsic complexity of the tumour ecosystem, where both tumour cells and the tumor microenvironment (TME) contribute to the cancer status, it is challenging to utilize a single embedding space to model the mixed genomic profiles of the tumour ecosystem. To tackle these challenges, in this paper, we propose a biologically interpretable and robust multi-modal learning framework to efficiently integrate histology images and genomics data. Specifically, to enhance cross-modal interactions, we design a knowledge-driven subspace fusion scheme, consisting of a cross-modal deformable attention module and a gene-guided consistency strategy. Additionally, in pursuit of dynamically optimizing the subspace knowledge, we further propose a novel gradient coordination learning strategy. Extensive experiments on two public datasets demonstrate the effectiveness of our proposed method, outperforming state-of-the-art techniques in three downstream tasks of glioma diagnosis, tumour grading, and survival analysis.
Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning
[ "Zhang, Yupei", "Wang, Xiaofei", "Meng, Fangliangzi", "Tang, Jin", "Li, Chao" ]
Conference
2406.13979
[ "https://github.com/helenypzhang/Subspace-Multimodal-Learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
760
null
https://papers.miccai.org/miccai-2024/paper/2861_paper.pdf
@InProceedings{ Gam_Disentangled_MICCAI2024, author = { Gamgam, Gurur and Kabakcioglu, Alkan and Yüksel Dal, Demet and Acar, Burak }, title = { { Disentangled Attention Graph Neural Network for Alzheimer’s Disease Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Neurodegenerative disorders, notably Alzheimer’s Disease type Dementia (ADD), are recognized for their imprint on brain connectivity. Recent investigations employing Graph Neural Networks (GNNs) have demonstrated considerable promise in diagnosing ADD. Among the various GNN architectures, attention-based GNNs have gained prominence due to their capacity to emphasize diagnostically significant alterations in neural connectivity while suppressing irrelevant ones. Nevertheless, a notable limitation observed in attention-based GNNs pertains to the homogeneity of attention coefficients across different attention heads, suggesting a tendency for the GNN to overlook spatially localized critical alterations at the subnetwork scale (mesoscale). In response to this challenge, we propose a novel Disentangled Attention GNN (DAGNN) model trained to discern attention coefficients across different heads. We show that DAGNN can generate uncorrelated latent representations across heads, potentially learning localized representations at mesoscale. We empirically show that these latent representations are superior to state-of-the-art GNN based representations in ADD diagnosis while providing insight to spatially localized changes in connectivity.
Disentangled Attention Graph Neural Network for Alzheimer’s Disease Diagnosis
[ "Gamgam, Gurur", "Kabakcioglu, Alkan", "Yüksel Dal, Demet", "Acar, Burak" ]
Conference
[ "https://github.com/gururgg/DAGNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
761
null
https://papers.miccai.org/miccai-2024/paper/2773_paper.pdf
@InProceedings{ Bui_VisualTextual_MICCAI2024, author = { Bui, Phuoc-Nguyen and Le, Duc-Tai and Choo, Hyunseung }, title = { { Visual-Textual Matching Attention for Lesion Segmentation in Chest Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Lesion segmentation in chest images is crucial for AI-assisted diagnostic systems for pulmonary conditions. The multi-modal approach, which combines image and text description, has achieved notable performance in medical image segmentation. However, the existing methods mainly focus on improving the decoder using the text information while the encoder remains unexplored. In this study, we introduce a Multi-Modal Input UNet model, namely MMI-UNet, which utilizes visual-textual matching (VTM) features for the segmentation of infected areas in chest X-ray images. These VTM features, which contain visual features that are relevant to the text description, are created by a combination of self-attention and cross-attention mechanisms in a novel Image-Text Matching (ITM) module integrated into the encoder. Empirically, extensive evaluations on the QaTa-Cov19 and MosMedData+ datasets demonstrate MMI-UNet’s state-of-the-art performance over both uni-modal and previous multi-modal methods. Furthermore, our method also outperforms the best uni-modal method even with 15% of the training data. These findings highlight the interpretability of our vision-language model, advancing the explainable diagnosis of pulmonary diseases and reducing the labeling cost for segmentation tasks in the medical field. The source code is made publicly available on GitHub.
Visual-Textual Matching Attention for Lesion Segmentation in Chest Images
[ "Bui, Phuoc-Nguyen", "Le, Duc-Tai", "Choo, Hyunseung" ]
Conference
[ "https://github.com/nguyenpbui/MMI-UNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
762
null
https://papers.miccai.org/miccai-2024/paper/4034_paper.pdf
@InProceedings{ Lee_Convolutional_MICCAI2024, author = { Lee, DongEon and Park, Chunsu and Lee, SeonYeong and Lee, SiYeoul and Kim, MinWoo }, title = { { Convolutional Implicit Neural Representation of pathology whole-slide images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
This study explored the application of implicit neural representations (INRs) to enhance digital histopathological imaging. Traditional imaging methods rely on discretizing the image space into grids, managed through a pyramid file structure to accommodate the large size of whole slide images (WSIs); however, the continuous mapping capability of INRs, utilizing a multi-layer perceptron (MLP) to encode images directly from coordinates, presents a transformative approach. This method promises to streamline WSI management by eliminating the need for down-sampled versions, allowing instantaneous access to any image region at the desired magnification, thereby optimizing memory usage and reducing data storage requirements. Despite their potential, INRs face challenges in accurately representing high spatial frequency components that are pivotal in histopathology. To address this gap, we introduce a novel INR framework that integrates auxiliary convolutional neural networks (CNN) with a standard MLP model. This dual-network approach not only facilitates pixel-level analysis, but also enhances the representation of local spatial variations, which is crucial for accurately rendering the complex patterns found in WSIs. Our experimental findings indicated a substantial improvement in the fidelity of histopathological image representation, as evidenced by a 3-6 dB increase in the peak signal-to-noise ratio compared to existing methods. This advancement underscores the potential of INRs to revolutionize digital histopathology, offering a pathway towards more efficient diagnostic imaging techniques. Our code is available at https://pnu-amilab.github.io/CINR/
Convolutional Implicit Neural Representation of pathology whole-slide images
[ "Lee, DongEon", "Park, Chunsu", "Lee, SeonYeong", "Lee, SiYeoul", "Kim, MinWoo" ]
Conference
[ "https://github.com/pnu-amilab/CINR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
763
null
https://papers.miccai.org/miccai-2024/paper/1633_paper.pdf
@InProceedings{ Luo_Rethinking_MICCAI2024, author = { Luo, Xiangde and Li, Zihan and Zhang, Shaoting and Liao, Wenjun and Wang, Guotai }, title = { { Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Deep learning has enabled great strides in abdominal multi-organ segmentation, even surpassing junior oncologists on common cases or organs. However, robustness on corner cases and complex organs remains a challenging open problem for clinical adoption. To investigate model robustness, we collected and annotated the RAOS dataset comprising 413 CT scans (~80k 2D images, ~8k 3D organ annotations) from 413 patients, each with 17 (female) or 19 (male) labelled organs manually delineated by oncologists. We grouped scans based on clinical information into 1) diagnosis/radiotherapy (317 volumes), 2) partial excision without the whole organ missing (22 volumes), and 3) excision with the whole organ missing (74 volumes). RAOS provides a potential benchmark for evaluating model robustness, including organ hallucination. It also includes organs that are rarely covered by public datasets, such as the rectum, colon, intestine, prostate, and seminal vesicles. We benchmarked several state-of-the-art methods in these three clinical groups to evaluate performance and robustness. We also assessed cross-generalization between RAOS and three public datasets. This dataset and comprehensive analysis establish a potential baseline for future robustness research.
Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases
[ "Luo, Xiangde", "Li, Zihan", "Zhang, Shaoting", "Liao, Wenjun", "Wang, Guotai" ]
Conference
2406.13674
[ "https://github.com/Luoxd1996/RAOS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
764
null
https://papers.miccai.org/miccai-2024/paper/3740_paper.pdf
@InProceedings{ Ars_Singlesource_MICCAI2024, author = { Arslan, Mazlum Ferhat and Guo, Weihong and Li, Shuo }, title = { { Single-source Domain Generalization in Deep Learning Segmentation via Lipschitz Regularization } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Deep learning methods have proven useful in medical image segmentation when deployed on independent and identically distributed (iid) data. However, their effectiveness in generalizing to previously unseen domains, where data may deviate from the iid assumption, remains an open problem. In this paper, we consider the single-source domain generalization scenario where models are trained on data from a single domain and are expected to be robust under domain shifts. Our approach focuses on leveraging the spectral properties of images to enhance generalization performance. Specifically, we argue that the high frequency regime contains domain-specific information in the form of device-specific noise and exemplify this case via data from multiple domains. Overcoming this challenge is non-trivial since crucial segmentation information such as edges is also encoded in this regime. We propose a simple regularization method, Lipschitz regularization via frequency spectrum (LRFS), that limits the sensitivity of a model’s latent representations to the high frequency components in the source domain while encouraging the sensitivity to middle frequency components. This regularization approach frames the problem as approximating and controlling the Lipschitz constant for high frequency components. LRFS can be seamlessly integrated into existing approaches. Our experimental results indicate that LRFS can significantly improve the generalization performance of a variety of models.
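The PyTorch sketch below illustrates the kind of frequency-band sensitivity penalty described above. It is a hedged approximation under assumed choices (the band radius, Gaussian perturbation, feature norm, and an encoder that returns a feature tensor), not the authors' LRFS implementation.

import torch

def high_freq_perturb(x, radius_frac=0.75, eps=0.05):
    # Add noise only to frequencies outside a centered low/mid-frequency disk (assumed band split).
    B, C, H, W = x.shape
    f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(H, device=x.device),
                            torch.arange(W, device=x.device), indexing="ij")
    r = torch.sqrt((yy - H / 2.0) ** 2 + (xx - W / 2.0) ** 2)
    high = r > radius_frac * min(H, W) / 2.0
    noise = eps * (torch.randn_like(f.real) + 1j * torch.randn_like(f.real)) * high
    return torch.fft.ifft2(torch.fft.ifftshift(f + noise, dim=(-2, -1))).real

def lipschitz_penalty(encoder, x):
    # Finite-difference estimate of the latent sensitivity to high-frequency input changes.
    x_p = high_freq_perturb(x)
    z, z_p = encoder(x), encoder(x_p)
    num = (z - z_p).flatten(1).norm(dim=1)
    den = (x - x_p).flatten(1).norm(dim=1) + 1e-8
    return (num / den).mean()

# loss = segmentation_loss + lam * lipschitz_penalty(model.encoder, images)   # lam: tuning weight (assumption)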
Single-source Domain Generalization in Deep Learning Segmentation via Lipschitz Regularization
[ "Arslan, Mazlum Ferhat", "Guo, Weihong", "Li, Shuo" ]
Conference
[ "https://github.com/kaptres/LRFS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
765
null
https://papers.miccai.org/miccai-2024/paper/1240_paper.pdf
@InProceedings{ Kru_Cryotrack_MICCAI2024, author = { Krumb, Henry J. and Mehtali, Jonas and Verde, Juan and Mukhopadhyay, Anirban and Essert, Caroline }, title = { { Cryotrack: Planning and Navigation for Computer Assisted Cryoablation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Needle guidance in thermal ablation procedures is challenging due to the absence of a free line-of-sight. To date, the needle trajectory is manually planned on a pre-operative CT slice, and the entry point is then transferred onto the patient and needle with a ruler. Usually, the needle is inserted in multiple strokes with interleaved control CTs, increasing the number of exchanges between the OR and the control room as well as the patient’s exposure to radiation. This procedure is not only tedious, but also introduces a navigation error of several centimeters if the entry point is not chosen precisely. In this paper, we present Cryotrack, a pre- and intra-operative planning assistant for needle guidance in cryoablation. Cryotrack computes possible insertion areas from a pre-operative CT and its segmentation, considering obstacles (bones) and risk structures. During the intervention, Cryotrack supports the clinician by supplying intraoperative guidance with a user-friendly 3D interface. Our system is evaluated in a phantom study with an experienced surgeon and two novice operators, showing that Cryotrack reduces the overall time of the intervention to a quarter while being on par with traditional planning in terms of safety and accuracy, and being usable by novices.
Cryotrack: Planning and Navigation for Computer Assisted Cryoablation
[ "Krumb, Henry J.", "Mehtali, Jonas", "Verde, Juan", "Mukhopadhyay, Anirban", "Essert, Caroline" ]
Conference
[ "https://github.com/Cryotrack" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
766
null
https://papers.miccai.org/miccai-2024/paper/1535_paper.pdf
@InProceedings{ Kon_Aframework_MICCAI2024, author = { Konuk, Emir and Welch, Robert and Christiansen, Filip and Epstein, Elisabeth and Smith, Kevin }, title = { { A framework for assessing joint human-AI systems based on uncertainty estimation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
We investigate the role of uncertainty quantification in aiding medical decision-making. Existing evaluation metrics fail to capture the practical utility of joint human-AI decision-making systems. To address this, we introduce a novel framework to assess such systems and use it to benchmark a diverse set of confidence and uncertainty estimation methods. Our results show that certainty measures enable joint human-AI systems to outperform both standalone humans and AIs, and that for a given system there exists an optimal balance in the number of cases to refer to humans, beyond which the system’s performance degrades.
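A toy sketch of the kind of evaluation described above (illustrative only; variable names and any simulated human reader are assumptions): the AI's least-certain cases are deferred to a human, and the joint accuracy is traced over referral fractions to locate the optimal operating point.

import numpy as np

def joint_accuracy_curve(uncertainty, ai_pred, human_pred, labels, fractions):
    # Defer the most uncertain cases to the human reader and score the combined system.
    order = np.argsort(-uncertainty)                  # most uncertain first
    accs = []
    for frac in fractions:
        k = int(round(frac * len(labels)))
        combined = ai_pred.copy()
        combined[order[:k]] = human_pred[order[:k]]   # human decision replaces the AI's
        accs.append(float((combined == labels).mean()))
    return np.array(accs)

# fracs = np.linspace(0, 1, 21)
# curve = joint_accuracy_curve(unc, ai, human, y, fracs)
# best_fraction = fracs[curve.argmax()]              # beyond this point performance may degrade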
A framework for assessing joint human-AI systems based on uncertainty estimation
[ "Konuk, Emir", "Welch, Robert", "Christiansen, Filip", "Epstein, Elisabeth", "Smith, Kevin" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
767
null
https://papers.miccai.org/miccai-2024/paper/0748_paper.pdf
@InProceedings{ Zha_CentertoEdge_MICCAI2024, author = { Zhao, Jianfeng and Li, Shuo }, title = { { Center-to-Edge Denoising Diffusion Probabilistic Models with Cross-domain Attention for Undersampled MRI Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Integrating dual-domain (i.e. frequency domain and spatial domain) information for magnetic resonance imaging (MRI) reconstruction from undersampled measurements greatly improves imaging efficiency. However, it remains a challenging task for denoising diffusion probabilistic model (DDPM)-based methods, due to the lack of an effective fusion module for integrating dual-domain information, and no prior work has explored how the denoising diffusion strategy affects the two domains. In this study, we propose a novel center-to-edge DDPM (C2E-DDPM) for fully-sampled MRI reconstruction from undersampled measurements (i.e. undersampled k-space and undersampled MR image) by improving the learning ability in the frequency domain and cross-domain information attention. Different from previous work, C2E-DDPM provides a C2E denoising diffusion strategy for facilitating frequency-domain learning and designs an attention-guided cross-domain junction for integrating dual-domain information. Experiments indicate that our proposed C2E-DDPM achieves state-of-the-art performance on the fastMRI dataset (PSNR/SSIM of 33.26/88.43 for 4x acceleration and 31.67/81.94 for 8x acceleration).
Center-to-Edge Denoising Diffusion Probabilistic Models with Cross-domain Attention for Undersampled MRI Reconstruction
[ "Zhao, Jianfeng", "Li, Shuo" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
768
null
https://papers.miccai.org/miccai-2024/paper/3732_paper.pdf
@InProceedings{ Zhe_FewShot_MICCAI2024, author = { Zheng, Meng and Planche, Benjamin and Gao, Zhongpai and Chen, Terrence and Radke, Richard J. and Wu, Ziyan }, title = { { Few-Shot 3D Volumetric Segmentation with Multi-Surrogate Fusion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Conventional 3D medical image segmentation methods typically require learning heavy 3D networks (e.g., 3D-UNet), as well as large amounts of in-domain data with accurate pixel/voxel-level labels to avoid overfitting. These solutions are thus not only extremely time- and labor-intensive, but may also fail to generalize to objects unseen during training. To alleviate this issue, we present MSFSeg, a novel few-shot 3D segmentation framework with a lightweight multi-surrogate fusion (MSF). MSFSeg is able to automatically segment 3D objects/organs unseen during training, provided with one or a few annotated 2D slices or 3D sequence segments, via learning dense query-support organ/lesion anatomy correlations across patient populations. Our proposed MSF module mines comprehensive and diversified morphology correlations between unlabeled and the few labeled slices/sequences through multiple designated surrogates, making it able to generate accurate cross-domain 3D segmentation masks given annotated slices or sequences. We demonstrate the effectiveness of our proposed framework by showing superior performance on conventional few-shot segmentation benchmarks compared to prior art, and remarkable cross-domain cross-volume segmentation performance on proprietary 3D segmentation datasets for challenging entities, i.e. tubular structures, with only limited 2D or 3D labels.
Few-Shot 3D Volumetric Segmentation with Multi-Surrogate Fusion
[ "Zheng, Meng", "Planche, Benjamin", "Gao, Zhongpai", "Chen, Terrence", "Radke, Richard J.", "Wu, Ziyan" ]
Conference
2408.14427
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
769
null
https://papers.miccai.org/miccai-2024/paper/2707_paper.pdf
@InProceedings{ Jia_Hierarchical_MICCAI2024, author = { Jiang, Yu and He, Zhibin and Peng, Zhihao and Yuan, Yixuan }, title = { { Hierarchical Graph Learning with Small-World Brain Connectomes for Cognitive Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Functional magnetic resonance imaging (fMRI) is capable of assessing an individual’s cognitive abilities by measuring blood oxygen level dependence. Due to the complexity of brain structure and function, exploring the relationship between cognitive ability and brain functional connectivity is extremely challenging. Recently, graph neural networks have been employed to extract functional connectivity features for predicting cognitive scores. Nevertheless, these methods have two main limitations: 1) Ignoring the hierarchical nature of the brain: discarding fine-grained information within each brain region, and overlooking supplementary information on the functional hierarchy of the brain at multiple scales; 2) Ignoring the small-world nature of the brain: current methods for generating functional connectivity produce regular networks with relatively low information transmission efficiency. To address these issues, we propose a Hierarchical Graph Learning with Small-World Brain Connectomes (SW-HGL) framework for cognitive prediction. This framework consists of three modules: the pyramid information extraction module (PIE), the small-world brain connectomes construction module (SW-BCC), and the hierarchical graph learning module (HGL). Specifically, PIE identifies representative vertices at both micro-scale (community level) and macro-scale (region level) through community clustering and graph pooling. SW-BCC simulates the small-world nature of the brain by rewiring regular networks and establishes functional connections at both region and community levels. HGL is a dual-branch network that extracts and fuses micro-scale and macro-scale features for cognitive score prediction. Compared to state-of-the-art methods, our SW-HGL consistently achieves outstanding performance on the HCP dataset.
Hierarchical Graph Learning with Small-World Brain Connectomes for Cognitive Prediction
[ "Jiang, Yu", "He, Zhibin", "Peng, Zhihao", "Yuan, Yixuan" ]
Conference
[ "https://github.com/CUHK-AIM-Group/SW-HGL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
770
null
https://papers.miccai.org/miccai-2024/paper/1293_paper.pdf
@InProceedings{ Zha_TextPolyp_MICCAI2024, author = { Zhao, Yiming and Zhou, Yi and Zhang, Yizhe and Wu, Ye and Zhou, Tao }, title = { { TextPolyp: Point-supervised Polyp Segmentation with Text Cues } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Polyp segmentation in colonoscopy images is essential for preventing Colorectal cancer (CRC). Existing polyp segmentation models often struggle with costly pixel-wise annotations. Conversely, datasets can be annotated quickly and affordably using weak labels like points. However, utilizing sparse annotations for model training remains challenging due to the limited information. In this study, we propose a TextPolyp approach to address this issue by leveraging only point annotations and text cues for effective weakly-supervised polyp segmentation. Specifically, we utilize the Grounding DINO algorithm and Segment Anything Model (SAM) to generate initial pseudo-labels, which are then refined with point annotations. Subsequently, we employ a SAM-based mutual learning strategy to effectively enhance segmentation results from SAM. Additionally, we propose a Discrepancy-aware Weight Scheme (DWS) to adaptively reduce the impact of unreliable predictions from SAM. Our TextPolyp model is versatile and can seamlessly integrate with various backbones and segmentation methods. More importantly, the proposed strategies are used exclusively during training, incurring no additional computational cost during inference. Extensive experiments confirm the effectiveness of our TextPolyp approach.
TextPolyp: Point-supervised Polyp Segmentation with Text Cues
[ "Zhao, Yiming", "Zhou, Yi", "Zhang, Yizhe", "Wu, Ye", "Zhou, Tao" ]
Conference
[ "https://github.com/taozh2017/TextPolyp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
771
null
https://papers.miccai.org/miccai-2024/paper/2259_paper.pdf
@InProceedings{ Han_MeshBrush_MICCAI2024, author = { Han, John J. and Acar, Ayberk and Kavoussi, Nicholas and Wu, Jie Ying }, title = { { MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Style transfer is a promising approach to close the sim-to-real gap in medical endoscopy. Rendering synthetic endoscopic videos by traversing pre-operative scans (such as MRI or CT) can generate structurally accurate simulations as well as ground truth camera poses and depth maps. Although image-to-image (I2I) translation models such as CycleGAN can imitate realistic endoscopic images from these simulations, they are unsuitable for video-to-video synthesis due to the lack of temporal consistency, resulting in artifacts between frames. We propose MeshBrush, a neural mesh stylization method to synthesize temporally consistent videos with differentiable rendering. MeshBrush uses the underlying geometry of patient imaging data while leveraging existing I2I methods. With learned per-vertex textures, the stylized mesh guarantees consistency while producing high-fidelity outputs. We demonstrate that mesh stylization is a promising approach for creating realistic simulations for downstream tasks such as training networks and preoperative planning. Although our method is tested and designed for ureteroscopy, its components are transferable to general endoscopic and laparoscopic procedures. The code will be made public on GitHub.
MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy
[ "Han, John J.", "Acar, Ayberk", "Kavoussi, Nicholas", "Wu, Jie Ying" ]
Conference
2404.02999
[ "https://github.com/juseonghan/MeshBrush" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
772
null
https://papers.miccai.org/miccai-2024/paper/0242_paper.pdf
@InProceedings{ Xu_Transforming_MICCAI2024, author = { Xu, Huan and Wu, Jinlin and Cao, Guanglin and Chen, Zhen and Lei, Zhen and Liu, Hongbin }, title = { { Transforming Surgical Interventions with Embodied Intelligence for Ultrasound Robotics } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Ultrasonography has revolutionized non-invasive diagnostic methodologies, significantly enhancing patient outcomes across various medical domains. Despite its advancements, integrating ultrasound technology with robotic systems for automated scans presents challenges, including limited command understanding and dynamic execution capabilities. To address these challenges, this paper introduces a novel Ultrasound Embodied Intelligence system that synergistically combines ultrasound robots with large language models (LLMs) and domain-specific knowledge augmentation, enhancing ultrasound robots’ intelligence and operational efficiency. Our approach employs a dual strategy: firstly, integrating LLMs with ultrasound robots to interpret doctors’ verbal instructions into precise motion planning through a comprehensive understanding of ultrasound domain knowledge, including APIs and operational manuals; secondly, incorporating a dynamic execution mechanism, allowing for real-time adjustments to scanning plans based on patient movements or procedural errors. We demonstrate the effectiveness of our system through extensive experiments, including ablation studies and comparisons across various models, showcasing significant improvements in executing medical procedures from verbal commands. Our findings suggest that the proposed system improves the efficiency and quality of ultrasound scans and paves the way for further advancements in autonomous medical scanning technologies, with the potential to transform non-invasive diagnostics and streamline medical workflows. The source code is available at https://github.com/seanxuu/EmbodiedUS.
Transforming Surgical Interventions with Embodied Intelligence for Ultrasound Robotics
[ "Xu, Huan", "Wu, Jinlin", "Cao, Guanglin", "Chen, Zhen", "Lei, Zhen", "Liu, Hongbin" ]
Conference
2406.12651
[ "https://github.com/seanxuu/EmbodiedUS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
773
null
https://papers.miccai.org/miccai-2024/paper/3954_paper.pdf
@InProceedings{ Wan_CarDcros_MICCAI2024, author = { Wang, Yuli and Hsu, Wen-Chi and Shi, Victoria and Lin, Gigin and Lin, Cheng Ting and Feng, Xue and Bai, Harrison }, title = { { Car-Dcros: A Dataset and Benchmark for Enhancing Cardiovascular Artery Segmentation through Disconnected Components Repair and Open Curve Snake } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The segmentation of cardiovascular arteries in 3D medical images holds significant promise for assessing vascular health. Despite the progress in current methodologies, there remain significant challenges, especially in the precise segmentation of smaller vascular structures and those affected by arterial plaque, which often present as disconnected in images. Addressing these issues, we introduce an innovative refinement method that utilizes a data-driven strategy to correct the appearance of disconnected arterial structures. Initially, we create a synthetic dataset designed to mimic the appearance of disconnected cardiovascular structures. Our method then re-frames the segmentation issue as a task of detecting disconnected points, employing a neural network trained to identify points that can link the disconnected components. We further integrate an open curve active contour model, which facilitates the seamless connection of these points while ensuring smoothness. The effectiveness and clinical relevance of our methodology are validated through an application on an actual dataset from a medical institution.
Car-Dcros: A Dataset and Benchmark for Enhancing Cardiovascular Artery Segmentation through Disconnected Components Repair and Open Curve Snake
[ "Wang, Yuli", "Hsu, Wen-Chi", "Shi, Victoria", "Lin, Gigin", "Lin, Cheng Ting", "Feng, Xue", "Bai, Harrison" ]
Conference
[ "https://github.com/YuliWanghust/CTA_repairment" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
774
null
https://papers.miccai.org/miccai-2024/paper/0803_paper.pdf
@InProceedings{ Liu_Affinity_MICCAI2024, author = { Liu, Mengjun and Song, Zhiyun and Chen, Dongdong and Wang, Xin and Zhuang, Zixu and Fei, Manman and Zhang, Lichi and Wang, Qian }, title = { { Affinity Learning Based Brain Function Representation for Disease Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Resting-state functional magnetic resonance imaging (rs-fMRI) serves as a potent means to quantify brain functional connectivity (FC), which holds potential in diagnosing diseases. However, conventional FC measures may fall short in encapsulating the intricate functional dynamics of the brain; for instance, FC computed via Pearson correlation merely captures linear statistical dependencies among signals from different brain regions. In this study, we propose an affinity learning framework for modeling FC, leveraging a pre-training model to discern informative function representation among brain regions. Specifically, we employ randomly sampled patches and encode them to generate region embeddings, which are subsequently utilized by the proposed affinity learning module to deduce function representation between any pair of regions via an affinity encoder and a signal reconstruction decoder. Moreover, we integrate supervision from large language model (LLM) to incorporate prior brain function knowledge. We evaluate the efficacy of our framework across two datasets. The results from downstream brain disease diagnosis tasks underscore the effectiveness and generalizability of the acquired function representation. In summary, our approach furnishes a novel perspective on brain function representation in connectomics. Our code is available at https://github.com/mjliu2020/ALBFR.
Affinity Learning Based Brain Function Representation for Disease Diagnosis
[ "Liu, Mengjun", "Song, Zhiyun", "Chen, Dongdong", "Wang, Xin", "Zhuang, Zixu", "Fei, Manman", "Zhang, Lichi", "Wang, Qian" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
775
null
https://papers.miccai.org/miccai-2024/paper/1182_paper.pdf
@InProceedings{ Xia_FedIA_MICCAI2024, author = { Xiang, Yangyang and Wu, Nannan and Yu, Li and Yang, Xin and Cheng, Kwang-Ting and Yan, Zengqiang }, title = { { FedIA: Federated Medical Image Segmentation with Heterogeneous Annotation Completeness } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Federated learning has emerged as a compelling paradigm for medical image segmentation, particularly in light of increasing privacy concerns. However, most of the existing research relies on relatively stringent assumptions regarding the uniformity and completeness of annotations across clients. Contrary to this, this paper highlights a prevalent challenge in medical practice: incomplete annotations. Such annotations can introduce incorrectly labeled pixels, potentially undermining the performance of neural networks in supervised learning. To tackle this issue, we introduce a novel solution, named FedIA. Our insight is to conceptualize incomplete annotations as noisy data (i.e., low-quality data), with a focus on mitigating their adverse effects. We begin by evaluating the completeness of annotations at the client level using a designed indicator. Subsequently, we enhance the influence of clients with more comprehensive annotations and implement corrections for incomplete ones, thereby ensuring that models are trained on accurate data. Our method’s effectiveness is validated through its superior performance on two extensively used medical image segmentation datasets, outperforming existing solutions. The code is available at https://github.com/HUSTxyy/FedIA.
FedIA: Federated Medical Image Segmentation with Heterogeneous Annotation Completeness
[ "Xiang, Yangyang", "Wu, Nannan", "Yu, Li", "Yang, Xin", "Cheng, Kwang-Ting", "Yan, Zengqiang" ]
Conference
2407.02280
[ "https://github.com/HUSTxyy/FedIA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
776
null
https://papers.miccai.org/miccai-2024/paper/1282_paper.pdf
@InProceedings{ Yan_EndoFinder_MICCAI2024, author = { Yang, Ruijie and Zhu, Yan and Fu, Peiyao and Zhang, Yizhe and Wang, Zhihua and Li, Quanlin and Zhou, Pinghong and Yang, Xian and Wang, Shuo }, title = { { EndoFinder: Online Image Retrieval for Explainable Colorectal Polyp Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Determining the necessity of resecting malignant polyps during colonoscopy screening is crucial for patient outcomes, yet challenging due to the time-consuming and costly nature of histopathology examination. While deep learning-based classification models have shown promise in achieving optical biopsy with endoscopic images, they often suffer from a lack of explainability. To overcome this limitation, we introduce EndoFinder, a content-based image retrieval framework to find the ‘digital twin’ polyp in the reference database given a newly detected polyp. The clinical semantics of the new polyp can be inferred by referring to the matched ones. EndoFinder pioneers a polyp-aware image encoder that is pre-trained on a large polyp dataset in a self-supervised way, merging masked image modeling with contrastive learning. This results in a generic embedding space ready for different downstream clinical tasks based on image retrieval. We validate the framework on polyp re-identification and optical biopsy tasks, with extensive experiments demonstrating that EndoFinder not only achieves explainable diagnostics but also matches the performance of supervised classification models. EndoFinder’s reliance on image retrieval has the potential to support diverse downstream decision-making tasks during real-time colonoscopy procedures.
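A minimal sketch of the retrieval step implied by this abstract (an illustration under assumed inputs, not the EndoFinder code): given an embedding of a newly detected polyp and a reference database of embeddings with known pathology, the nearest 'digital twins' are returned together with a majority-vote label. The function and variable names are hypothetical.

import numpy as np

def retrieve_digital_twins(query_emb, ref_embs, ref_labels, k=5):
    # Cosine-similarity retrieval over a pre-computed reference embedding database.
    q = query_emb / (np.linalg.norm(query_emb) + 1e-12)
    r = ref_embs / (np.linalg.norm(ref_embs, axis=1, keepdims=True) + 1e-12)
    sims = r @ q
    top = np.argsort(-sims)[:k]
    votes = [ref_labels[i] for i in top]
    inferred = max(set(votes), key=votes.count)       # clinical semantics inferred from the matched polyps
    return top, sims[top], inferred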
EndoFinder: Online Image Retrieval for Explainable Colorectal Polyp Diagnosis
[ "Yang, Ruijie", "Zhu, Yan", "Fu, Peiyao", "Zhang, Yizhe", "Wang, Zhihua", "Li, Quanlin", "Zhou, Pinghong", "Yang, Xian", "Wang, Shuo" ]
Conference
2407.11401
[ "https://github.com/ku262/EndoFinder" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
777
null
https://papers.miccai.org/miccai-2024/paper/0362_paper.pdf
@InProceedings{ Yu_SliceConsistent_MICCAI2024, author = { Yu, Qinji and Wang, Yirui and Yan, Ke and Lu, Le and Shen, Na and Ye, Xianghua and Ding, Xiaowei and Jin, Dakai }, title = { { Slice-Consistent Lymph Nodes Detection Transformer in CT Scans via Cross-slice Query Contrastive Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Lymph node (LN) assessment is an indispensable yet very challenging task in the daily clinical workload of radiology and oncology, offering valuable insights for cancer staging and treatment planning. Finding scattered, low-contrast, clinically relevant LNs in 3D CT is difficult even for experienced physicians, and inter-observer variation is high. Previous CNN-based lesion and LN detectors often take a 2.5D approach by using a 2D network architecture with multi-slice inputs, which utilizes the pretrained 2D model weights and shows better accuracy as compared to direct 3D detectors. However, slice-based 2.5D detectors fail to place explicit constraints on the inter-slice consistency, where a single 3D LN can be falsely predicted as two or more LN instances or multiple LNs are erroneously merged into one large LN. This adversely affects the downstream LN metastasis diagnosis task, as 3D size information is one of the most important malignancy indicators. In this work, we propose an effective and accurate 2.5D LN detection transformer that explicitly considers the inter-slice consistency within a LN. It first enhances a detection transformer by utilizing an efficient multi-scale 2.5D fusion scheme to leverage pre-trained 2D weights. Then, we introduce a novel cross-slice query contrastive learning module, which pulls the query embeddings of the same 3D LN instance closer and pushes the embeddings of adjacent similar anatomies (hard negatives) farther. Trained and tested on 3D CT scans of 670 patients (with 7252 labeled LN instances) of different body parts (neck, chest, and upper abdomen) and pathologies, our method significantly improves the performance of previous leading detection methods by at least 3% average recall at the same FP rates in both internal and external testing.
Slice-Consistent Lymph Nodes Detection Transformer in CT Scans via Cross-slice Query Contrastive Learning
[ "Yu, Qinji", "Wang, Yirui", "Yan, Ke", "Lu, Le", "Shen, Na", "Ye, Xianghua", "Ding, Xiaowei", "Jin, Dakai" ]
Conference
[ "https://github.com/CSCYQJ/MICCAI24-Slice-Consistent-Lymph-Nodes-DETR" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
778
null
https://papers.miccai.org/miccai-2024/paper/1213_paper.pdf
@InProceedings{ Xia_ANew_MICCAI2024, author = { Xia, Wenyao and Fan, Victoria and Peters, Terry and Chen, Elvis C. S. }, title = { { A New Benchmark In Vivo Paired Dataset for Laparoscopic Image De-smoking } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The single greatest obstacle in developing effective algorithms for removing surgical smoke in laparoscopic surgery is the lack of a paired dataset featuring real smoky and smoke-free surgical scenes. Consequently, existing de-smoking algorithms are developed and evaluated based on atmospheric scattering models, synthetic data, and non-reference image enhancement metrics, which do not adequately capture the complexity and essence of in vivo surgical scenes with smoke. To bridge this gap, we propose creating a paired dataset by identifying video sequences with relatively stationary scenes from existing laparoscopic surgical recordings where smoke emerges. In addition, we developed an approach to facilitate robust motion tracking through smoke to compensate for patients’ involuntary movements. As a result, we obtained 21 video sequences from 63 laparoscopic prostatectomy procedure recordings, comprising 961 pairs of smoky images and their corresponding smoke-free ground truth. Using this unique dataset, we compared a representative set of current de-smoking methods, confirming their efficacy and revealing their limitations, thereby offering insights for future directions. The dataset is available at https://github.com/wxia43/DesmokeData.
A New Benchmark In Vivo Paired Dataset for Laparoscopic Image De-smoking
[ "Xia, Wenyao", "Fan, Victoria", "Peters, Terry", "Chen, Elvis C. S." ]
Conference
[ "https://github.com/wxia43/DesmokeData" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
779
null
https://papers.miccai.org/miccai-2024/paper/3303_paper.pdf
@InProceedings{ Rei_DataDriven_MICCAI2024, author = { Reithmeir, Anna and Felsner, Lina and Braren, Rickmer F. and Schnabel, Julia A. and Zimmer, Veronika A. }, title = { { Data-Driven Tissue- and Subject-Specific Elastic Regularization for Medical Image Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Physics-inspired regularization is desired for intra-patient image registration since it can effectively capture the biomechanical characteristics of anatomical structures. However, a major challenge lies in the reliance on physical parameters: Parameter estimations vary widely across the literature, and the physical properties themselves are inherently subject-specific. In this work, we introduce a novel data-driven method that leverages hypernetworks to learn the tissue-dependent elasticity parameters of an elastic regularizer. Notably, our approach facilitates the estimation of patient-specific parameters without the need to retrain the network. We evaluate our method on three publicly available 2D and 3D lung CT and cardiac MR datasets. We find that with our proposed subject-specific tissue-dependent regularization, a higher registration quality is achieved across all datasets compared to using a global regularizer. The code is available at https://github.com/compai-lab/2024-miccai-reithmeir.
Data-Driven Tissue- and Subject-Specific Elastic Regularization for Medical Image Registration
[ "Reithmeir, Anna", "Felsner, Lina", "Braren, Rickmer F.", "Schnabel, Julia A.", "Zimmer, Veronika A." ]
Conference
2407.04355
[ "https://github.com/compai-lab/2024-miccai-reithmeir" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
780
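The elastic-regularization record above centers on spatially varying, tissue-dependent regularization strength. As a rough illustration only (the paper uses a learned linear-elastic regularizer with hypernetwork-predicted parameters, which is not reproduced here), a per-voxel weighted smoothness penalty on a 2D displacement field could look like the following; all names are hypothetical.

```python
# Minimal sketch of a tissue-weighted smoothness penalty on a 2D displacement
# field; a stand-in for the learned elastic regularizer, not the paper's code.
import torch

def weighted_gradient_penalty(disp: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """disp: (B, 2, H, W) displacement field; weight: (B, 1, H, W) per-voxel
    regularization strength, e.g. derived from a tissue segmentation."""
    dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]   # finite differences along x
    dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]   # finite differences along y
    wx = 0.5 * (weight[:, :, :, 1:] + weight[:, :, :, :-1])
    wy = 0.5 * (weight[:, :, 1:, :] + weight[:, :, :-1, :])
    return (wx * dx.pow(2)).mean() + (wy * dy.pow(2)).mean()
```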
null
https://papers.miccai.org/miccai-2024/paper/1153_paper.pdf
@InProceedings{ Sim_MultiModal_MICCAI2024, author = { Sim, Jaeyoon and Lee, Minjae and Wu, Guorong and Kim, Won Hwa }, title = { { Multi-Modal Graph Neural Network with Transformer-Guided Adaptive Diffusion for Preclinical Alzheimer Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
The graphical representation of the brain offers critical insights into diagnosing and prognosing neurodegenerative disease via relationships between regions of interest (ROIs). Despite the recent emergence of various Graph Neural Networks (GNNs) to effectively capture the relational information, there remain inherent limitations in interpreting the brain networks. Specifically, convolutional approaches ineffectively aggregate information from distant neighborhoods, while attention-based methods exhibit deficiencies in capturing node-centric information, particularly in retaining critical characteristics from pivotal nodes. These shortcomings reveal challenges in identifying disease-specific variation among the diverse features of different modalities. In this regard, we propose an integrated framework guiding the diffusion process at each node by a downstream transformer, where both short- and long-range properties of graphs are aggregated via a diffusion kernel and multi-head attention, respectively. We demonstrate the superiority of our model by improving the performance of pre-clinical Alzheimer’s disease (AD) classification with various modalities. Also, our model adeptly identifies key ROIs that are closely associated with the preclinical stages of AD, marking a significant potential for early diagnosis and prevision of the disease.
Multi-Modal Graph Neural Network with Transformer-Guided Adaptive Diffusion for Preclinical Alzheimer Classification
[ "Sim, Jaeyoon", "Lee, Minjae", "Wu, Guorong", "Kim, Won Hwa" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
781
null
https://papers.miccai.org/miccai-2024/paper/2364_paper.pdf
@InProceedings{ Hou_AClinicaloriented_MICCAI2024, author = { Hou, Qingshan and Cheng, Shuai and Cao, Peng and Yang, Jinzhu and Liu, Xiaoli and Tham, Yih Chung and Zaiane, Osmar R. }, title = { { A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Representation learning offers a conduit to elucidate distinctive features within the latent space and interpret the deep models. However, the randomness of lesion distribution and the complexity of low-quality factors in medical images pose great challenges for models to extract key lesion features. Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation. Nevertheless, the effectiveness of CL is highly dependent on the quality of the positive and negative sample pairs. In this work, we propose a clinical-oriented multi-level CL framework that aims to enhance the model’s capacity to extract lesion features and discriminate between lesion and low-quality factors, thereby enabling more accurate disease diagnosis from low-quality medical images. Specifically, we first construct multi-level positive and negative pairs to enhance the model’s comprehensive recognition capability of lesion features by integrating information from different levels and qualities of medical images. Moreover, to improve the quality of the learned lesion embeddings, we introduce a dynamic hard sample mining method based on self-paced learning. The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray, demonstrating superior performance compared to other state-of-the-art disease diagnostic methods.
A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images
[ "Hou, Qingshan", "Cheng, Shuai", "Cao, Peng", "Yang, Jinzhu", "Liu, Xiaoli", "Tham, Yih Chung", "Zaiane, Osmar R." ]
Conference
2404.04887
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
782
null
https://papers.miccai.org/miccai-2024/paper/1324_paper.pdf
@InProceedings{ Zha_Exploiting_MICCAI2024, author = { Zhao, Xiangyu and Ouyang, Xi and Zhang, Lichi and Xue, Zhong and Shen, Dinggang }, title = { { Exploiting Latent Classes for Medical Image Segmentation from Partially Labeled Datasets } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Notable progress has been made in medical image segmentation models due to the availability of massive training data. Nevertheless, a majority of open-source datasets are only partially labeled, and not all expected organs or tumors are annotated in these images. While previous attempts have been made to only learn segmentation from labeled regions of interest (ROIs), they do not consider the latent classes, i.e., existing but unlabeled ROIs, in the images during the training stage. Moreover, since these methods rely exclusively on labeled ROIs and those unlabeled regions are viewed as background, they need large-scale and diverse datasets to achieve a variety of ROI segmentation. In this paper, we propose a framework that utilizes latent classes for segmentation from partially labeled datasets, aiming to improve segmentation performance, especially for ROIs with only a small number of annotations. Specifically, we first introduce an ROI-aware network to detect the presence of unlabeled ROIs in images and form the latent classes, which are utilized to guide the segmentation learning. Additionally, ROIs with ambiguous existence are constrained by the consistency loss between the predictions of the student and the teacher networks. By regularizing ROIs with different certainty levels under different scenarios, our method can significantly improve the robustness and reliance of segmentation on large-scale datasets. Experimental results on a public benchmark for partially labeled segmentation demonstrate that our proposed method surpasses previous attempts and has great potential to form a large-scale foundation segmentation model.
Exploiting Latent Classes for Medical Image Segmentation from Partially Labeled Datasets
[ "Zhao, Xiangyu", "Ouyang, Xi", "Zhang, Lichi", "Xue, Zhong", "Shen, Dinggang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
783
null
https://papers.miccai.org/miccai-2024/paper/0689_paper.pdf
@InProceedings{ Wu_Evaluating_MICCAI2024, author = { Wu, Jiaqi and Peng, Wei and Li, Binxu and Zhang, Yu and Pohl, Kilian M. }, title = { { Evaluating the Quality of Brain MRI Generators } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Deep learning models generating structural brain MRIs have the potential to significantly accelerate discovery of neuroscience studies. However, their use has been limited in part by the way their quality is evaluated. Most evaluations of generative models focus on metrics originally designed for natural images (such as structural similarity index and Fréchet inception distance). As we show in a comparison of 6 state-of-the-art generative models trained and tested on over 3000 MRIs, these metrics are sensitive to the experimental setup and inadequately assess how well brain MRIs capture macrostructural properties of brain regions (a.k.a., anatomical plausibility). This shortcoming of the metrics results in inconclusive findings even when qualitative differences between the outputs of models are evident. We therefore propose a framework for evaluating models generating brain MRIs, which requires uniform processing of the real MRIs, standardizing the implementation of the models, and automatically segmenting the MRIs generated by the models. The segmentations are used for quantifying the plausibility of anatomy displayed in the MRIs. To ensure meaningful quantification, it is crucial that the segmentations are highly reliable. Our framework rigorously checks this reliability, a step often overlooked by prior work. Only 3 of the 6 generative models produced MRIs, of which at least 95% had highly reliable segmentations. More importantly, the assessment of each model by our framework is in line with qualitative assessments, reinforcing the validity of our approach. The code of this framework is available at https://github.com/jiaqiw01/MRIAnatEval.git.
Evaluating the Quality of Brain MRI Generators
[ "Wu, Jiaqi", "Peng, Wei", "Li, Binxu", "Zhang, Yu", "Pohl, Kilian M." ]
Conference
2409.08463
[ "https://github.com/jiaqiw01/MRIAnatEval.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
784
null
https://papers.miccai.org/miccai-2024/paper/4136_paper.pdf
@InProceedings{ Cai_Survival_MICCAI2024, author = { Cai, Shangyan and Huang, Weitian and Yi, Weiting and Zhang, Bin and Liao, Yi and Wang, Qiu and Cai, Hongmin and Chen, Luonan and Su, Weifeng }, title = { { Survival analysis of histopathological image based on a pretrained hypergraph model of spatial transcriptomics data } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Survival analysis is critical for clinical decision-making and prognosis in breast cancer treatment. Recent multimodal approaches leverage histopathology images and bulk RNA-seq to improve survival prediction performance, but these approaches fail to explore spatial distribution at the cellular level. In this work, we present a multimodal hypergraph neural network for survival analysis (MHNN-surv) that introduces a pre-trained model for spatial transcriptomic prediction. The method is characterized by making full use of histopathological images to reveal both morphological and genetic information, thus improving the interpretation of heterogeneity. Specifically, MHNN-surv first slices Whole-Slide Imaging (WSI) into patch images, followed by extracting image features and predicting spatial transcriptomics, respectively. Subsequently, an image-based hypergraph is constructed based on three-dimensional nearest-neighbor relationships, while a gene-based hypergraph is formed based on gene expression similarity. By fusing the dual hypergraphs, MHNN-surv performs an in-depth survival analysis of breast cancer using the Cox proportional hazards model. The experimental results demonstrate that MHNN-surv outperforms the state-of-the-art multimodal models in survival analysis.
Survival analysis of histopathological image based on a pretrained hypergraph model of spatial transcriptomics data
[ "Cai, Shangyan", "Huang, Weitian", "Yi, Weiting", "Zhang, Bin", "Liao, Yi", "Wang, Qiu", "Cai, Hongmin", "Chen, Luonan", "Su, Weifeng" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
785
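The survival-analysis record above couples hypergraph features with a Cox proportional hazards model. The standard training objective for such a head is the negative Cox partial log-likelihood; the sketch below (Breslow handling of ties, illustrative only, not the authors' implementation) shows that loss in PyTorch.

```python
# Negative Cox partial log-likelihood (Breslow ties), the usual loss behind a
# Cox proportional hazards head; illustrative, not the paper's implementation.
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """risk: (N,) predicted log-hazards; time: (N,) follow-up times;
    event: (N,) 1 if the event was observed, 0 if censored."""
    order = torch.argsort(time, descending=True)    # longest follow-up first
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # log-sum-exp over each risk set
    per_event = (risk - log_risk_set)[event == 1]
    return -per_event.mean() if per_event.numel() > 0 else risk.sum() * 0.0
```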
null
https://papers.miccai.org/miccai-2024/paper/0918_paper.pdf
@InProceedings{ Qi_Cardiac_MICCAI2024, author = { Qi, Ronghui and Li, Xiaohu and Xu, Lei and Zhang, Jie and Zhang, Yanping and Xu, Chenchu }, title = { { Cardiac Physiology Knowledge-driven Diffusion Model for Contrast-free Synthesis Myocardial Infarction Enhancement } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Contrast-free AI myocardial infarction enhancement (MIE) synthesis technology has a significant impact on clinics due to its ability to eliminate contrast agent (CA) administration in current MI diagnosis. In this paper, we propose a novel cardiac physiology knowledge-driven diffusion model (CPKDM) that, for the first time, integrates cardiac physiology knowledge into cardiac MR data to guide the synthesis of high-quality MIE, thereby enhancing the generalization performance of MIE synthesis. This combination helps the model understand the principles behind the data mapping between non-enhanced image inputs and enhanced image outputs, informing the model on how and why to synthesize MIE. CPKDM leverages cardiac mechanics knowledge and MR imaging atlas knowledge to respectively guide the learning of kinematic features in CINE sequences and morphological features in T1 sequences. Moreover, CPKDM proposes a kinematics-morphology diffusion integration model to progressively fuse kinematic and morphological features for precise MIE synthesis. Evaluated on 195 patients, including chronic MI cases and normal controls, CPKDM significantly improves performance (SSIM by at least 4%) compared with the five most recent state-of-the-art methods. These results demonstrate that our CPKDM exhibits superiority and offers a promising alternative for clinical diagnostics.
Cardiac Physiology Knowledge-driven Diffusion Model for Contrast-free Synthesis Myocardial Infarction Enhancement
[ "Qi, Ronghui", "Li, Xiaohu", "Xu, Lei", "Zhang, Jie", "Zhang, Yanping", "Xu, Chenchu" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
786
null
https://papers.miccai.org/miccai-2024/paper/0368_paper.pdf
@InProceedings{ Gal_Federated_MICCAI2024, author = { Galati, Francesco and Cortese, Rosa and Prados, Ferran and Lorenzi, Marco and Zuluaga, Maria A. }, title = { { Federated Multi-Centric Image Segmentation with Uneven Label Distribution } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
While federated learning is the state-of-the-art methodology for collaborative learning, its adoption for training segmentation models often relies on the assumption of uniform label distributions across participants, and is generally sensitive to the large variability of multi-centric imaging data. To overcome these issues, we propose a novel federated image segmentation approach adapted to the complex non-IID settings typical of real-life conditions. We assume that labeled data are not available to all clients, and that clients' data exhibit differences in distribution due to three factors: different scanners, imaging modalities, and imaged organs. Our proposed framework collaboratively builds a multimodal data factory that embeds a shared, disentangled latent representation across participants. In a second asynchronous stage, this setup enables local domain adaptation without exchanging raw data or annotations, facilitating target segmentation. We evaluate our method across three distinct scenarios, including multi-scanner cardiac magnetic resonance segmentation, multi-modality skull stripping, and multi-organ vascular segmentation. The results obtained demonstrate the quality and robustness of our approach compared to state-of-the-art methods.
Federated Multi-Centric Image Segmentation with Uneven Label Distribution
[ "Galati, Francesco", "Cortese, Rosa", "Prados, Ferran", "Lorenzi, Marco", "Zuluaga, Maria A." ]
Conference
[ "https://github.com/i-vesseg/RobustMedSeg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
787
null
https://papers.miccai.org/miccai-2024/paper/0103_paper.pdf
@InProceedings{ Wen_Learning_MICCAI2024, author = { Weng, Weihao and Zhu, Xin }, title = { { Learning Representations by Maximizing Mutual Information Across Views for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
We propose a method that leverages multiple identical network structures to generate and process diverse augmented views of the same medical image sample. By employing contrastive learning, we maximize mutual information among features extracted from different views, ensuring the networks learn robust and high-level semantic representations. Results from testing on four public and one private endoscopic surgical tool segmentation datasets indicate that the proposed method outperformed state-of-the-art semi-supervised and fully supervised segmentation methods. After being trained with 5% of the labeled training data, the proposed method achieved improvements of 11.5%, 8.4%, 6.5%, and 5.8% on RoboTool, Kvasir-instrument, ART-NET, and FEES, respectively. Ablation studies were also performed to measure the effectiveness of each proposed module. Code is available at https://github.com/on1kou95/Mutual-Exemplar.
Learning Representations by Maximizing Mutual Information Across Views for Medical Image Segmentation
[ "Weng, Weihao", "Zhu, Xin" ]
Conference
[ "https://github.com/on1kou95/Mutual-Exemplar" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
788
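The record above maximizes mutual information between features of augmented views via contrastive learning. A common lower-bound objective for this is InfoNCE; the following is a generic sketch of that loss, not the authors' exact formulation.

```python
# Symmetric InfoNCE between two batches of view embeddings, a standard
# mutual-information lower bound; a generic sketch, not the paper's loss.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```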
null
https://papers.miccai.org/miccai-2024/paper/0745_paper.pdf
@InProceedings{ Li_CausCLIP_MICCAI2024, author = { Li, Yiran and Cui, Xiaoxiao and Cao, Yankun and Zhang, Yuezhong and Wang, Huihui and Cui, Lizhen and Liu, Zhi and Li, Shuo }, title = { { CausCLIP: Causality-Adapting Visual Scoring of Visual Language Models for Few-Shot Learning in Portable Echocardiography Quality Assessment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
How do we transfer Vision Language Models (VLMs), pre-trained in the source domain of conventional echocardiography (Echo), to the target domain of few-shot portable Echo (fine-tuning)? Learning image causality is crucial for few-shot learning in portable echocardiography quality assessment (PEQA), due to the domain-invariant causal and topological consistency. However, the lack of significant domain shifts and well-labeled data in PEQA presents challenges to obtaining reliable measurements of image causality. We investigate the challenging problem of this task, i.e., learning a consistent representation of domain-invariant causal semantic features. We propose a novel VLM-based PEQA network, Causality-Adapting Visual Scoring CLIP (CausCLIP), embedding causal disposition to measure image causality for domain-invariant representation. Specifically, the Causal-Aware Visual Adapter (CVA) identifies hidden asymmetric causal relationships and learns interpretable domain-invariant causal semantic consistency, thereby improving adaptability. Visual-Consistency Contrastive Learning (VCL) focuses on the most discriminative regions by registering visual-causal similarity, enhancing discriminability. Multi-granular Image-Text Adaptive Constraints (MAC) adaptively integrate task-specific semantic multi-granular information, enhancing robustness in multi-task learning. Experimental results show that CausCLIP outperforms state-of-the-art methods, achieving absolute improvements of 4.1%, 9.5%, and 8.5% in view category, quality score, and distortion metrics, respectively.
CausCLIP: Causality-Adapting Visual Scoring of Visual Language Models for Few-Shot Learning in Portable Echocardiography Quality Assessment
[ "Li, Yiran", "Cui, Xiaoxiao", "Cao, Yankun", "Zhang, Yuezhong", "Wang, Huihui", "Cui, Lizhen", "Liu, Zhi", "Li, Shuo" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
789
null
https://papers.miccai.org/miccai-2024/paper/3822_paper.pdf
@InProceedings{ Hua_Hard_MICCAI2024, author = { Huang, Wentao and Hu, Xiaoling and Abousamra, Shahira and Prasanna, Prateek and Chen, Chao }, title = { { Hard Negative Sample Mining for Whole Slide Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but mostly focusing on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of these proposed ideas. Our codes are available at https://github.com/winston52/HNM-WSI.
Hard Negative Sample Mining for Whole Slide Image Classification
[ "Huang, Wentao", "Hu, Xiaoling", "Abousamra, Shahira", "Prasanna, Prateek", "Chen, Chao" ]
Conference
2410.02212
[ "https://github.com/winston52/HNM-WSI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
790
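The record above mines hard negative patches and trains with a patch-wise ranking loss. As a loose illustration of the idea (not the authors' formulation; k and margin are arbitrary), hard negatives can be taken as the highest-scoring patches from negative slides and pushed below the top positive patches by a margin:

```python
# Sketch of hard-negative mining with a margin ranking loss over patch scores;
# an illustration of the general idea, not the paper's exact loss.
import torch
import torch.nn.functional as F

def hard_negative_ranking_loss(pos_scores: torch.Tensor,
                               neg_scores: torch.Tensor,
                               k: int = 8, margin: float = 0.5) -> torch.Tensor:
    """pos_scores: patch scores from positive slides; neg_scores: patch scores
    from negative slides. Hard negatives are the k highest-scoring negatives."""
    hard_neg = torch.topk(neg_scores, k=min(k, neg_scores.numel())).values
    top_pos = torch.topk(pos_scores, k=min(k, pos_scores.numel())).values
    # every selected positive patch should outrank every hard negative by `margin`
    diff = top_pos.unsqueeze(1) - hard_neg.unsqueeze(0)
    return F.relu(margin - diff).mean()
```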
null
https://papers.miccai.org/miccai-2024/paper/0811_paper.pdf
@InProceedings{ Yan_NeuroLink_MICCAI2024, author = { Yan, Haiyang and Zhai, Hao and Guo, Jinyue and Li, Linlin and Han, Hua }, title = { { NeuroLink: Bridging Weak Signals in Neuronal Imaging with Morphology Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Reconstructing neurons from large-scale optical microscope images is a challenging task due to the complexity of neuronal structures and extremely weak signals in certain regions. Traditional segmentation models, built on vanilla convolutions and voxel-wise losses, struggle to model long-range relationships in sparse volumetric data. As a result, weak signals in the feature space get mixed with noise, leading to interruptions in segmentation and premature termination in neuron tracing results. To address this issue, we propose NeuroLink to add continuity constraints to the network and implicitly model neuronal morphology by utilizing multi-task learning methods. Specifically, we introduce the Dynamic Snake Convolution to extract more effective features for the sparse tubular structure of neurons and propose an easily implementable morphology-based loss function to penalize discontinuous predictions. In addition, we guide the network to leverage the morphological information of the neuron for predicting direction and distance transformation maps of neurons. Our method achieved higher recall and precision on the low-contrast Zebrafish dataset and the publicly available BigNeuron dataset. Our code is available at https://github.com/Qingjia0226/NeuroLink.
NeuroLink: Bridging Weak Signals in Neuronal Imaging with Morphology Learning
[ "Yan, Haiyang", "Zhai, Hao", "Guo, Jinyue", "Li, Linlin", "Han, Hua" ]
Conference
[ "https://github.com/Qingjia0226/NeuroLink" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
791
null
https://papers.miccai.org/miccai-2024/paper/1874_paper.pdf
@InProceedings{ Li_Exploring_MICCAI2024, author = { Li, Lanting and Zhang, Liuzeng and Cao, Peng and Yang, Jinzhu and Wang, Fei and Zaiane, Osmar R. }, title = { { Exploring Spatio-Temporal Interpretable Dynamic Brain Function with Transformer for Brain Disorder Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
The dynamic variation in the spatio-temporal organizational patterns of brain functional modules (BFMs) associated with brain disorders remains unclear. To solve this issue, we propose an end-to-end transformer-based framework for sufficiently learning the spatio-temporal characteristics of BFMs and exploring the interpretable variation related to brain disorders. Specifically, the proposed model incorporates a supervisory guidance spatio-temporal clustering strategy for automatically identifying the BFMs with the dynamic temporal-varying weights and a multi-channel self-attention mechanism with topology-aware projection for sufficiently exploring the temporal variation and spatio-temporal representation. The experimental results on the diagnosis of Major Depressive Disorder (MDD) and Bipolar Disorder (BD) indicate that our model achieves state-of-the-art performance. Moreover, our model is capable of identifying the spatio-temporal patterns of brain activity and providing evidence associated with brain disorders. Our code is available at https://github.com/llt1836/BISTformer.
Exploring Spatio-Temporal Interpretable Dynamic Brain Function with Transformer for Brain Disorder Diagnosis
[ "Li, Lanting", "Zhang, Liuzeng", "Cao, Peng", "Yang, Jinzhu", "Wang, Fei", "Zaiane, Osmar R." ]
Conference
[ "https://github.com/llt1836/BISTformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
792
null
https://papers.miccai.org/miccai-2024/paper/2286_paper.pdf
@InProceedings{ Wan_Groupwise_MICCAI2024, author = { Wang, Fanwen and Luo, Yihao and Wen, Ke and Huang, Jiahao and Ferreira, Pedro F. and Luo, Yaqing and Wu, Yinzhe and Munoz, Camila and Pennell, Dudley J. and Scott, Andrew D. and Nielles-Vallespin, Sonia and Yang, Guang }, title = { { Groupwise Deformable Registration of Diffusion Tensor Cardiovascular Magnetic Resonance: Disentangling Diffusion Contrast, Respiratory and Cardiac Motions } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Diffusion tensor based cardiovascular magnetic resonance (DT-CMR) offers a non-invasive method to visualize the myocardial microstructure. With the assumption that the heart is stationary, frames are acquired with multiple repetitions for different diffusion encoding directions. However, motion from poor breath-holding and imprecise cardiac triggering complicates DT-CMR analysis, further challenged by its inherently low SNR, varied contrasts, and diffusion-induced textures. Our solution is a novel framework employing groupwise registration with an implicit template to isolate respiratory and cardiac motions, while a tensor-embedded branch preserves diffusion contrast textures. We’ve devised a loss refinement tailored for non-linear least squares fitting and low SNR conditions. Additionally, we introduce new physics-based and clinical metrics for performance evaluation. Access code and supplementary materials at: https://github.com/ayanglab/DTCMR-Reg
Groupwise Deformable Registration of Diffusion Tensor Cardiovascular Magnetic Resonance: Disentangling Diffusion Contrast, Respiratory and Cardiac Motions
[ "Wang, Fanwen", "Luo, Yihao", "Wen, Ke", "Huang, Jiahao", "Ferreira, Pedro F.", "Luo, Yaqing", "Wu, Yinzhe", "Munoz, Camila", "Pennell, Dudley J.", "Scott, Andrew D.", "Nielles-Vallespin, Sonia", "Yang, Guang" ]
Conference
2406.13788
[ "https://github.com/ayanglab/DTCMR-Reg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
793
null
https://papers.miccai.org/miccai-2024/paper/1928_paper.pdf
@InProceedings{ Nav_Ensembled_MICCAI2024, author = { Naval Marimont, Sergio and Siomos, Vasilis and Baugh, Matthew and Tzelepis, Christos and Kainz, Bernhard and Tarroni, Giacomo }, title = { { Ensembled Cold-Diffusion Restorations for Unsupervised Anomaly Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Unsupervised Anomaly Detection (UAD) methods aim to identify anomalies in test samples by comparing them with a normative distribution learned from a dataset known to be anomaly-free. Approaches based on generative models offer interpretability by generating anomaly-free versions of test images, but are typically unable to identify subtle anomalies. Alternatively, approaches using feature modelling or self-supervised methods, such as the ones relying on synthetically generated anomalies, do not provide out-of-the-box interpretability. In this work, we present a novel method that combines the strengths of both strategies: a generative cold-diffusion pipeline (i.e., a diffusion-like pipeline which uses corruptions not based on noise) that is trained with the objective of turning synthetically-corrupted images back to their normal, original appearance. To support our pipeline we introduce a novel synthetic anomaly generation procedure, called DAG, and a novel anomaly score which ensembles restorations conditioned with different degrees of abnormality. Our method surpasses the prior state of the art for unsupervised anomaly detection in three different Brain MRI datasets.
Ensembled Cold-Diffusion Restorations for Unsupervised Anomaly Detection
[ "Naval Marimont, Sergio", "Siomos, Vasilis", "Baugh, Matthew", "Tzelepis, Christos", "Kainz, Bernhard", "Tarroni, Giacomo" ]
Conference
2407.06635
[ "https://github.com/snavalm/disyre" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
794
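The record above scores anomalies by ensembling restorations conditioned on different degrees of abnormality. A bare-bones version of that ensembling step is sketched below; `restore` is an assumed interface standing in for the trained cold-diffusion restoration model.

```python
# Sketch of an ensembled restoration-based anomaly map: average the pixel-wise
# discrepancy between the input and restorations at several severity levels.
# `restore(x, s)` is an assumed interface, not the authors' API.
import torch

def anomaly_map(x: torch.Tensor, restore, severities=(0.2, 0.4, 0.6, 0.8)) -> torch.Tensor:
    """x: (B, C, H, W) images; restore(x, s) returns a 'healthy' restoration of x
    produced by a model conditioned on corruption severity s in [0, 1]."""
    maps = [(x - restore(x, s)).abs().mean(dim=1, keepdim=True) for s in severities]
    return torch.stack(maps, dim=0).mean(dim=0)   # (B, 1, H, W) anomaly score
```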
null
https://papers.miccai.org/miccai-2024/paper/1797_paper.pdf
@InProceedings{ Wu_Gazedirected_MICCAI2024, author = { Wu, Shaoxuan and Zhang, Xiao and Wang, Bin and Jin, Zhuo and Li, Hansheng and Feng, Jun }, title = { { Gaze-directed Vision GNN for Mitigating Shortcut Learning in Medical Image } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Deep neural networks have demonstrated remarkable performance in medical image analysis. However, their susceptibility to spurious correlations due to shortcut learning raises concerns about network interpretability and reliability. Furthermore, shortcut learning is exacerbated in medical contexts where disease indicators are often subtle and sparse. In this paper, we propose a novel gaze-directed Vision GNN (called GD-ViG) to leverage the visual patterns of radiologists from gaze as expert knowledge, directing the network toward disease-relevant regions, and thereby mitigating shortcut learning. GD-ViG consists of a gaze map generator (GMG) and a gaze-directed classifier (GDC). Combining the global modelling ability of GNNs with the locality of CNNs, GMG generates the gaze map based on radiologists’ visual patterns. Notably, it eliminates the need for real gaze data during inference, enhancing the network’s practical applicability. Utilizing gaze as expert knowledge, the GDC directs the construction of graph structures by incorporating both feature distances and gaze distances, enabling the network to focus on disease-relevant foregrounds, thereby avoiding shortcut learning and improving the network’s interpretability. The experiments on two public medical image datasets demonstrate that GD-ViG outperforms the state-of-the-art methods, and effectively mitigates shortcut learning. Our code is available at https://github.com/SX-SS/GD-ViG.
Gaze-directed Vision GNN for Mitigating Shortcut Learning in Medical Image
[ "Wu, Shaoxuan", "Zhang, Xiao", "Wang, Bin", "Jin, Zhuo", "Li, Hansheng", "Feng, Jun" ]
Conference
2406.14050
[ "https://github.com/SX-SS/GD-ViG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
795
null
https://papers.miccai.org/miccai-2024/paper/0889_paper.pdf
@InProceedings{ Su_Design_MICCAI2024, author = { Su, Tongkun and Li, Jun and Zhang, Xi and Jin, Haibo and Chen, Hao and Wang, Qiong and Lv, Faqin and Zhao, Baoliang and Hu, Ying }, title = { { Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Multimodal pre-training demonstrates its potential in the medical domain, which learns medical visual representations from paired medical reports. However, many pre-training tasks require extra annotations from clinicians, and most of them fail to explicitly guide the model to learn the desired features of different pathologies. In this paper, we utilize Visual Question Answering (VQA) for multimodal pre-training to guide the framework focusing on targeted pathological features. We leverage descriptions in medical reports to design multi-granular question-answer pairs associated with different diseases, which assist the framework in pre-training without requiring extra annotations from experts. We also propose a novel pre-training framework with a quasi-textual feature transformer, a module designed to transform visual features into a quasi-textual space closer to the textual domain via a contrastive learning strategy. This narrows the vision-language gap and facilitates modality alignment. Our framework is applied to four downstream tasks: report generation, classification, segmentation, and detection across five datasets. Extensive experiments demonstrate the superiority of our framework compared to other state-of-the-art methods. Our code is available at https://github.com/MoramiSu/QFT.
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training
[ "Su, Tongkun", "Li, Jun", "Zhang, Xi", "Jin, Haibo", "Chen, Hao", "Wang, Qiong", "Lv, Faqin", "Zhao, Baoliang", "Hu, Ying" ]
Conference
2404.00226
[ "https://github.com/MoramiSu/QFT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
796
null
https://papers.miccai.org/miccai-2024/paper/2068_paper.pdf
@InProceedings{ Che_Vestibular_MICCAI2024, author = { Chen, Yunjie and Wolterink, Jelmer M. and Neve, Olaf M. and Romeijn, Stephan R. and Verbist, Berit M. and Hensen, Erik F. and Tao, Qian and Staring, Marius }, title = { { Vestibular schwannoma growth prediction from longitudinal MRI by time-conditioned neural fields } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Vestibular schwannomas (VS) are benign tumors that are generally managed by active surveillance with MRI examination. To further assist clinical decision-making and avoid overtreatment, an accurate prediction of tumor growth based on longitudinal imaging is highly desirable. In this paper, we introduce DeepGrowth, a deep learning method that incorporates neural fields and recurrent neural networks for prospective tumor growth prediction. In the proposed method, each tumor is represented as a signed distance function (SDF) conditioned on a low-dimensional latent code. Unlike previous studies, we predict the latent codes of the future tumor and generate the tumor shapes from them using a multilayer perceptron (MLP). To deal with irregular time intervals, we introduce a time-conditioned recurrent module based on a ConvLSTM and a novel temporal encoding strategy, which enables the proposed model to output varying tumor shapes over time. The experiments on an in-house longitudinal VS dataset showed that the proposed model significantly improved the performance (>=1.6% Dice score and >=0.20 mm 95% Hausdorff distance), in particular for the top 20% of tumors that grow or shrink the most (>=4.6% Dice score and >=0.73 mm 95% Hausdorff distance). Our code is available at https://github.com/cyjdswx/DeepGrowth.
Vestibular schwannoma growth prediction from longitudinal MRI by time-conditioned neural fields
[ "Chen, Yunjie", "Wolterink, Jelmer M.", "Neve, Olaf M.", "Romeijn, Stephan R.", "Verbist, Berit M.", "Hensen, Erik F.", "Tao, Qian", "Staring, Marius" ]
Conference
[ "https://github.com/cyjdswx/DeepGrowth" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
797
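The record above represents each tumor as a signed distance function conditioned on a low-dimensional latent code and decoded by an MLP. The minimal decoder below illustrates that representation only; layer sizes and names are assumptions, and the time-conditioned ConvLSTM part of the method is not shown.

```python
# Minimal latent-conditioned SDF decoder: maps a 3D coordinate plus a shape
# latent code to a signed distance; an illustrative stand-in, not DeepGrowth.
import torch
import torch.nn as nn

class ConditionedSDF(nn.Module):
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # signed distance to the tumor surface
        )

    def forward(self, xyz: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        """xyz: (N, 3) query points; latent: (latent_dim,) code for one tumor."""
        code = latent.unsqueeze(0).expand(xyz.size(0), -1)
        return self.net(torch.cat([xyz, code], dim=1)).squeeze(-1)
```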
null
https://papers.miccai.org/miccai-2024/paper/3934_paper.pdf
@InProceedings{ Yu_Gyri_MICCAI2024, author = { Yu, Xiaowei and Zhang, Lu and Cao, Chao and Chen, Tong and Lyu, Yanjun and Zhang, Jing and Liu, Tianming and Zhu, Dajiang }, title = { { Gyri vs. Sulci: Core-Periphery Organization in Functional Brain Networks } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
The human cerebral cortex is highly convoluted into convex gyri and concave sulci. It has been demonstrated that gyri and sulci are significantly different in their anatomy, connectivity, and function: besides exhibiting opposite shape patterns, long-distance axonal fibers connected to gyri are much denser than those connected to sulci, and neural signals on gyri are more complex at low frequencies while those on sulci are more complex at high frequencies. Although accumulating evidence shows significant differences between gyri and sulci, their primary roles in brain function have not been elucidated yet. To solve this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci as well as their relationship in the whole brain function. Our Twin-Transformer framework adopts two structure-identical (twin) Transformers to disentangle spatial-temporal patterns of functional brain networks: one focuses on the spatial patterns and the other on temporal patterns. The spatial transformer takes the spatially divided patches and generates spatial patterns, while the temporal transformer takes the temporally split patches and produces temporal patterns. We validated our Twin-Transformer on the HCP task-fMRI dataset, for the first time, to elucidate the different roles of gyri and sulci in brain function. Our results suggest that gyri and sulci could work together in a core-periphery network manner, that is, gyri could serve as core networks for information gathering and distributing, while sulci could serve as periphery networks for specific local information processing. These findings have shed new light on our fundamental understanding of the brain’s basic structural and functional mechanisms.
Gyri vs. Sulci: Core-Periphery Organization in Functional Brain Networks
[ "Yu, Xiaowei", "Zhang, Lu", "Cao, Chao", "Chen, Tong", "Lyu, Yanjun", "Zhang, Jing", "Liu, Tianming", "Zhu, Dajiang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
798
null
https://papers.miccai.org/miccai-2024/paper/1056_paper.pdf
@InProceedings{ Yu_LanguageEnhanced_MICCAI2024, author = { Yu, Jianxun and Hu, Qixin and Jiang, Meirui and Wang, Yaning and Wong, Chin Ting and Wang, Jing and Zhang, Huimao and Dou, Qi }, title = { { Language-Enhanced Local-Global Aggregation Network for Multi-Organ Trauma Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Abdominal trauma is one of the leading causes of death in the elderly population and increasingly poses a global challenge. However, interpreting CT scans for abdominal trauma is considerably challenging for deep learning models. Trauma may exist in various organs presenting different shapes and morphologies. In addition, a thorough comprehension of visual cues and various types of trauma is essential, demanding a high level of domain expertise. To address these issues, this paper introduces a language-enhanced local-global aggregation network that aims to fully utilize both global contextual information and local organ-specific information inherent in images for accurate trauma detection. Furthermore, the network is enhanced by text embedding from Large Language Models (LLM). This LLM-based text embedding possesses substantial medical knowledge, enabling the model to capture anatomical relationships of intra-organ and intra-trauma connections. We have conducted experiments on one public dataset of RSNA Abdominal Trauma Detection (ATD) and one in-house dataset. Compared with existing state-of-the-art methods, the F1-score of organ-level trauma detection improves from 51.4% to 62.5% when evaluated on the public dataset and from 61.9% to 65.2% on the private cohort, demonstrating the efficacy of our proposed approach for multi-organ trauma detection. Code is available at: https://github.com/med-air/TraumaDet
Language-Enhanced Local-Global Aggregation Network for Multi-Organ Trauma Detection
[ "Yu, Jianxun", "Hu, Qixin", "Jiang, Meirui", "Wang, Yaning", "Wong, Chin Ting", "Wang, Jing", "Zhang, Huimao", "Dou, Qi" ]
Conference
[ "https://github.com/med-air/TraumaDet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
799