bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type | unique_id |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://papers.miccai.org/miccai-2024/paper/0333_paper.pdf | @InProceedings{ Zha_MediCLIP_MICCAI2024,
author = { Zhang, Ximiao and Xu, Min and Qiu, Dehui and Yan, Ruixin and Lang, Ning and Zhou, Xiuzhuang },
title = { { MediCLIP: Adapting CLIP for Few-shot Medical Image Anomaly Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In the field of medical decision-making, precise anomaly detection in medical imaging plays a pivotal role in aiding clinicians. However, previous work is reliant on large-scale datasets for training anomaly detection models, which increases the development cost. This paper first focuses on the task of medical image anomaly detection in the few-shot setting, which is critically significant for the medical field where data collection and annotation are both very expensive. We propose an innovative approach, MediCLIP, which adapts the CLIP model to few-shot medical image anomaly detection through self-supervised fine-tuning. Although CLIP, as a vision-language model, demonstrates outstanding zero-/few-shot performance on various downstream tasks, it still falls short in the anomaly detection of medical images. To address this, we design a series of medical image anomaly synthesis tasks to simulate common disease patterns in medical imaging, transferring the powerful generalization capabilities of CLIP to the task of medical image anomaly detection. When only few-shot normal medical images are provided, MediCLIP achieves state-of-the-art performance in anomaly detection and location compared to other methods. Extensive experiments on three distinct medical anomaly detection tasks have demonstrated the superiority of our approach. The code is available at https://github.com/cnulab/MediCLIP. | MediCLIP: Adapting CLIP for Few-shot Medical Image Anomaly Detection | [
"Zhang, Ximiao",
"Xu, Min",
"Qiu, Dehui",
"Yan, Ruixin",
"Lang, Ning",
"Zhou, Xiuzhuang"
] | Conference | 2405.11315 | [
"https://github.com/cnulab/MediCLIP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 400 |
|
null | https://papers.miccai.org/miccai-2024/paper/1007_paper.pdf | @InProceedings{ Lin_Stable_MICCAI2024,
author = { Lin, Tianyu and Chen, Zhiguang and Yan, Zhonghao and Yu, Weijiang and Zheng, Fudan },
title = { { Stable Diffusion Segmentation for Biomedical Images with Single-step Reverse Process } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Diffusion models have demonstrated their effectiveness across various generative tasks. However, when applied to medical image segmentation, these models encounter several challenges, including significant resource and time requirements. They also necessitate a multi-step reverse process and multiple samples to produce reliable predictions. To address these challenges, we introduce the first latent diffusion segmentation model, named SDSeg, built upon stable diffusion (SD). SDSeg incorporates a straightforward latent estimation strategy to facilitate a single-step reverse process and utilizes latent fusion concatenation to remove the necessity for multiple samples. Extensive experiments indicate that SDSeg surpasses existing state-of-the-art methods on five benchmark datasets featuring diverse imaging modalities. Remarkably, SDSeg is capable of generating stable predictions with a solitary reverse step and sample, epitomizing the model’s stability as implied by its name.
The code is available at https://github.com/lin-tianyu/Stable-Diffusion-Seg. | Stable Diffusion Segmentation for Biomedical Images with Single-step Reverse Process | [
"Lin, Tianyu",
"Chen, Zhiguang",
"Yan, Zhonghao",
"Yu, Weijiang",
"Zheng, Fudan"
] | Conference | 2406.18361 | [
"https://github.com/lin-tianyu/Stable-Diffusion-Seg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 401 |
|
null | https://papers.miccai.org/miccai-2024/paper/1081_paper.pdf | @InProceedings{ Ace_The_MICCAI2024,
author = { Acebes, Cesar and Moustafa, Abdel Hakim and Camara, Oscar and Galdran, Adrian },
title = { { The Centerline-Cross Entropy Loss for Vessel-Like Structure Segmentation: Better Topology Consistency Without Sacrificing Accuracy } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Achieving accurate vessel segmentation in medical images is crucial for various clinical applications, but current methods often struggle to balance topological consistency (preserving vessel network structure) with segmentation accuracy (overlap with ground-truth).
Although various strategies have been proposed to address this challenge, they typically necessitate significant modifications to network architecture, more annotations, or entail prohibitive computational costs, providing only partial topological improvements.
The clDice loss was recently proposed as an elegant and efficient alternative to preserve topology in tubular structure segmentation. However, segmentation accuracy is penalized and it lacks robustness to noisy annotations, mirroring the limitations of the conventional Dice loss. This work introduces the centerline-Cross Entropy (clCE) loss function, a novel approach which capitalizes on the robustness of Cross-Entropy loss and the topological focus of centerline-Dice loss, promoting optimal vessel overlap while maintaining faithful network structure. Extensive evaluations on diverse publicly available datasets (2D/3D, retinal/coronary) demonstrate clCE’s effectiveness. Compared to existing losses, clCE achieves superior overlap with ground truth while simultaneously improving vascular connectivity. This paves the way for more accurate and clinically relevant vessel segmentation, particularly in complex 3D scenarios.
We share an implementation of the clCE loss function in https://github.com/cesaracebes/centerline_CE. | The Centerline-Cross Entropy Loss for Vessel-Like Structure Segmentation: Better Topology Consistency Without Sacrificing Accuracy | [
"Acebes, Cesar",
"Moustafa, Abdel Hakim",
"Camara, Oscar",
"Galdran, Adrian"
] | Conference | [
"https://github.com/cesaracebes/centerline_CE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 402 |
||
null | https://papers.miccai.org/miccai-2024/paper/2377_paper.pdf | @InProceedings{ Suk_LaBGATr_MICCAI2024,
author = { Suk, Julian and Imre, Baris and Wolterink, Jelmer M. },
title = { { LaB-GATr: geometric algebra transformers for large biomedical surface and volume meshes } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Many anatomical structures can be described by surface or volume meshes. Machine learning is a promising tool to extract information from these 3D models. However, high-fidelity meshes often contain hundreds of thousands of vertices, which creates unique challenges in building deep neural network architectures. Furthermore, patient-specific meshes may not be canonically aligned which limits the generalisation of machine learning algorithms. We propose LaB-GATr, a transformer neural network with geometric tokenisation that can effectively learn with large-scale (bio-)medical surface and volume meshes through sequence compression and interpolation. Our method extends the recently proposed geometric algebra transformer (GATr) and thus respects all Euclidean symmetries, i.e. rotation, translation and reflection, effectively mitigating the problem of canonical alignment between patients. LaB-GATr achieves state-of-the-art results on three tasks in cardiovascular hemodynamics modelling and neurodevelopmental phenotype prediction, featuring meshes of up to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful architecture for learning with high-fidelity meshes which has the potential to enable interesting downstream applications. Our implementation is publicly available. | LaB-GATr: geometric algebra transformers for large biomedical surface and volume meshes | [
"Suk, Julian",
"Imre, Baris",
"Wolterink, Jelmer M."
] | Conference | 2403.07536 | [
"https://github.com/sukjulian/lab-gatr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 403 |
|
null | https://papers.miccai.org/miccai-2024/paper/2579_paper.pdf | @InProceedings{ Ma_Symmetry_MICCAI2024,
author = { Ma, Yang and Wang, Dongang and Liu, Peilin and Masters, Lynette and Barnett, Michael and Cai, Weidong and Wang, Chenyu },
title = { { Symmetry Awareness Encoded Deep Learning Framework for Brain Imaging Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The heterogeneity of neurological conditions, ranging from structural anomalies to functional impairments, presents a significant challenge in medical imaging analysis tasks. Moreover, the limited availability of well-annotated datasets constrains the development of robust analysis models. Against this backdrop, this study introduces a novel approach leveraging the inherent anatomical symmetrical features of the human brain to enhance the subsequent detection and segmentation analysis for brain diseases. A novel Symmetry-Aware Cross-Attention (SACA) module is proposed to encode symmetrical features of left and right hemispheres, and a proxy task to detect symmetrical features as the Symmetry-Aware Head (SAH) is proposed, which guides the pretraining of the whole network on a vast 3D brain imaging dataset comprising both healthy and diseased brain images across various MRI and CT. Through meticulous experimentation on downstream tasks, including both classification and segmentation for brain diseases, our model demonstrates superior performance over state-of-the-art methodologies, particularly highlighting the significance of symmetry-aware learning. Our findings advocate for the effectiveness of incorporating symmetry awareness into pretraining and set a new benchmark for medical imaging analysis, promising significant strides toward accurate and efficient diagnostic processes. Code is available at https://github.com/bitMyron/sa-swin. | Symmetry Awareness Encoded Deep Learning Framework for Brain Imaging Analysis | [
"Ma, Yang",
"Wang, Dongang",
"Liu, Peilin",
"Masters, Lynette",
"Barnett, Michael",
"Cai, Weidong",
"Wang, Chenyu"
] | Conference | 2407.08948 | [
"https://github.com/bitMyron/sa-swin"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 404 |
|
null | https://papers.miccai.org/miccai-2024/paper/1155_paper.pdf | @InProceedings{ Pan_SinoSynth_MICCAI2024,
author = { Pang, Yunkui and Liu, Yilin and Chen, Xu and Yap, Pew-Thian and Lian, Jun },
title = { { SinoSynth: A Physics-based Domain Randomization Approach for Generalizable CBCT Image Enhancement } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Cone Beam Computed Tomography (CBCT) finds diverse applications in medicine. Ensuring high image quality in CBCT scans is essential for accurate diagnosis and treatment delivery. Yet, the susceptibility of CBCT images to noise and artifacts undermines both their usefulness and reliability. Existing methods typically address CBCT artifacts through image-to-image translation approaches. These methods, however, are limited by the artifact types present in the training data, which may not cover the complete spectrum of CBCT degradations stemming from variations in imaging protocols. Gathering additional data to encompass all possible scenarios can often pose a challenge. To address this, we present SinoSynth, a physics-based degradation model that simulates various CBCT-specific artifacts to generate a diverse set of synthetic CBCT images from high-quality CT images, without requiring pre-aligned data. Through extensive experiments, we demonstrate that several different generative networks trained on our synthesized data achieve remarkable results on heterogeneous multi-institutional datasets, outperforming even the same networks trained on actual data. We further show that our degradation model conveniently provides an avenue to enforce anatomical constraints in conditional generative models, yielding high-quality and structure-preserving synthetic CT images. | SinoSynth: A Physics-based Domain Randomization Approach for Generalizable CBCT Image Enhancement | [
"Pang, Yunkui",
"Liu, Yilin",
"Chen, Xu",
"Yap, Pew-Thian",
"Lian, Jun"
] | Conference | 2409.18355 | [
"https://github.com/Pangyk/SinoSynth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 405 |
|
null | https://papers.miccai.org/miccai-2024/paper/2281_paper.pdf | @InProceedings{ Lit_TADM_MICCAI2024,
author = { Litrico, Mattia and Guarnera, Francesco and Giuffrida, Mario Valerio and Ravì, Daniele and Battiato, Sebastiano },
title = { { TADM: Temporally-Aware Diffusion Model for Neurodegenerative Progression on Brain MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Generating realistic images to accurately predict changes in the structure of brain MRI can be a crucial tool for clinicians. Such applications can help assess patients’ outcomes and analyze how diseases progress at the individual level. However, existing methods developed for this task present some limitations. Some approaches attempt to model the distribution of MRI scans directly by conditioning the model on patients’ ages, but they fail to explicitly capture the relationship between structural changes in the brain and time intervals, especially on age-unbalanced datasets. Other approaches simply rely on interpolation between scans, which limits their clinical application as they do not predict future MRIs. To address these challenges, we propose a Temporally-Aware Diffusion Model (TADM), which introduces a novel approach to accurately infer progression in brain MRIs. TADM learns the distribution of structural changes in terms of intensity differences between scans and combines the prediction of these changes with the initial baseline scans to generate future MRIs. Furthermore, during training, we propose to leverage a pre-trained Brain-Age Estimator (BAE) to refine the model’s training process, enhancing its ability to produce accurate MRIs that match the expected age gap between baseline and generated scans. Our assessment, conducted on 634 subjects from the OASIS-3 dataset, uses similarity metrics and region sizes computed by comparing predicted and real follow-up scans on 3 relevant brain regions. TADM achieves large improvements over existing approaches, with an average decrease of 24% in region size error and an improvement of 4% in similarity metrics. These evaluations demonstrate the improvement of our model in mimicking temporal brain neurodegenerative progression compared to existing methods. We believe that our approach will significantly benefit clinical applications, such as predicting patient outcomes or improving treatments for patients. | TADM: Temporally-Aware Diffusion Model for Neurodegenerative Progression on Brain MRI | [
"Litrico, Mattia",
"Guarnera, Francesco",
"Giuffrida, Mario Valerio",
"Ravì, Daniele",
"Battiato, Sebastiano"
] | Conference | 2406.12411 | [
"https://github.com/MattiaLitrico/TADM-Temporally-Aware-Diffusion-Model-for-Neurodegenerative-Progression-on-Brain-MRI"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 406 |
|
null | https://papers.miccai.org/miccai-2024/paper/0376_paper.pdf | @InProceedings{ Sar_VisionBased_MICCAI2024,
author = { Sarwin, Gary and Carretta, Alessandro and Staartjes, Victor and Zoli, Matteo and Mazzatenta, Diego and Regli, Luca and Serra, Carlo and Konukoglu, Ender },
title = { { Vision-Based Neurosurgical Guidance: Unsupervised Localization and Camera-Pose Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Localizing oneself during endoscopic procedures can be problematic due to the lack of distinguishable textures and landmarks, as well as difficulties due to the endoscopic device such as a limited field of view and challenging lighting conditions. Expert knowledge shaped by years of experience is required for localization within the human body during endoscopic procedures. In this work, we present a deep learning method based on anatomy recognition, that constructs a surgical path in an unsupervised manner from surgical videos, modelling relative location and variations due to different viewing angles. At inference time, the model can map unseen video frames on the path and estimate the viewing angle, aiming to provide guidance, for instance, to reach a particular destination. We test the method on a dataset consisting of surgical videos of pituitary surgery, i.e. transsphenoidal adenomectomy, as well as on a synthetic dataset. An online tool that lets researchers upload their surgical videos to obtain anatomy detections and the weights of the trained YOLOv7 model are available at: https://surgicalvision.bmic.ethz.ch. | Vision-Based Neurosurgical Guidance: Unsupervised Localization and Camera-Pose Prediction | [
"Sarwin, Gary",
"Carretta, Alessandro",
"Staartjes, Victor",
"Zoli, Matteo",
"Mazzatenta, Diego",
"Regli, Luca",
"Serra, Carlo",
"Konukoglu, Ender"
] | Conference | 2405.09355 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 407 |
|
null | https://papers.miccai.org/miccai-2024/paper/3400_paper.pdf | @InProceedings{ Chr_Confidence_MICCAI2024,
author = { Christodoulou, Evangelia and Reinke, Annika and Houhou, Rola and Kalinowski, Piotr and Erkan, Selen and Sudre, Carole H. and Burgos, Ninon and Boutaj, Sofiène and Loizillon, Sophie and Solal, Maëlys and Rieke, Nicola and Cheplygina, Veronika and Antonelli, Michela and Mayer, Leon D. and Tizabi, Minu D. and Cardoso, M. Jorge and Simpson, Amber and Jäger, Paul F. and Kopp-Schneider, Annette and Varoquaux, Gaël and Colliot, Olivier and Maier-Hein, Lena },
title = { { Confidence intervals uncovered: Are we ready for real-world medical imaging AI? } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Medical imaging is spearheading the AI transformation of healthcare. Performance reporting is key to determine which methods should be translated into clinical practice. Frequently, broad conclusions are simply derived from mean performance values. In this paper, we argue that this common practice is often a misleading simplification as it ignores performance variability. Our contribution is threefold. (1) Analyzing all MICCAI segmentation papers (n = 221) published in 2023, we first observe that more than 50% of papers do not assess performance variability at all. Moreover, only one (0.5%) paper reported confidence intervals (CIs) for model performance. (2) To address the reporting bottleneck, we show that the unreported standard deviation (SD) in segmentation papers can be approximated by a second-order polynomial function of the mean Dice similarity coefficient (DSC). Based on external validation data from 56 previous MICCAI challenges, we demonstrate that this approximation can accurately reconstruct the CI of a method using information provided in publications. (3) Finally, we reconstructed 95% CIs around the mean DSC of MICCAI 2023 segmentation papers. The median CI width was 0.03 which is three times larger than the median performance gap between the first and second ranked method. For more than 60% of papers, the mean performance of the second-ranked method was within the CI of the first-ranked method. We conclude that current publications typically do not provide sufficient evidence to support which models could potentially be translated into clinical practice. | Confidence intervals uncovered: Are we ready for real-world medical imaging AI? | [
"Christodoulou, Evangelia",
"Reinke, Annika",
"Houhou, Rola",
"Kalinowski, Piotr",
"Erkan, Selen",
"Sudre, Carole H.",
"Burgos, Ninon",
"Boutaj, Sofiène",
"Loizillon, Sophie",
"Solal, Maëlys",
"Rieke, Nicola",
"Cheplygina, Veronika",
"Antonelli, Michela",
"Mayer, Leon D.",
"Tizabi, Minu D.",
"Cardoso, M. Jorge",
"Simpson, Amber",
"Jäger, Paul F.",
"Kopp-Schneider, Annette",
"Varoquaux, Gaël",
"Colliot, Olivier",
"Maier-Hein, Lena"
] | Conference | 2409.17763 | [
"https://github.com/IMSY-DKFZ/CI_uncovered"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 408 |
|
null | https://papers.miccai.org/miccai-2024/paper/2078_paper.pdf | @InProceedings{ Che_Pathological_MICCAI2024,
author = { Chen, Fuqiang and Zhang, Ranran and Zheng, Boyun and Sun, Yiwen and He, Jiahui and Qin, Wenjian },
title = { { Pathological Semantics-Preserving Learning for H&E-to-IHC Virtual Staining } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Conventional hematoxylin-eosin (H&E) staining is limited to revealing cell morphology and distribution, whereas immunohistochemical (IHC) staining provides precise and specific visualization of protein activation at the molecular level. Virtual staining technology has emerged as a solution for highly efficient IHC examination, which directly transforms H&E-stained images to IHC-stained images. However, virtual staining is challenged by the insufficient mining of pathological semantics and the spatial misalignment of pathological semantics. To address these issues, we propose the Pathological Semantics-Preserving Learning method for Virtual Staining (PSPStain), which directly incorporates the molecular-level semantic information and enhances semantics interaction despite any spatial inconsistency. Specifically, PSPStain comprises two novel learning strategies: 1) Protein-Aware Learning Strategy (PALS) with Focal Optical Density (FOD) map maintains the coherence of protein expression level, which represents molecular-level semantic information; 2) Prototype-Consistent Learning Strategy (PCLS), which enhances cross-image semantic interaction by prototypical consistency learning. We evaluate PSPStain on two public datasets using five metrics: three clinically relevant metrics and two for image quality. Extensive experiments indicate that PSPStain outperforms current state-of-the-art H&E-to-IHC virtual staining methods and demonstrates a high pathological correlation between the staging of real and virtual stains. Code is available at https://github.com/ccitachi/PSPStain. | Pathological Semantics-Preserving Learning for H&E-to-IHC Virtual Staining | [
"Chen, Fuqiang",
"Zhang, Ranran",
"Zheng, Boyun",
"Sun, Yiwen",
"He, Jiahui",
"Qin, Wenjian"
] | Conference | [
"https://github.com/ccitachi/PSPStain"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 409 |
||
null | https://papers.miccai.org/miccai-2024/paper/1620_paper.pdf | @InProceedings{ Lin_Revisiting_MICCAI2024,
author = { Lin, Xian and Wang, Zhehao and Yan, Zengqiang and Yu, Li },
title = { { Revisiting Self-Attention in Medical Transformers via Dependency Sparsification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Vision transformer (ViT), powered by token-to-token self-attention, has demonstrated superior performance across various vision tasks. The large and even global receptive field obtained via dense self-attention, allows it to build stronger representations than CNN. However, compared to natural images, both the amount and the signal-to-noise ratio of medical images are small, often resulting in poor convergence of vanilla self-attention and further introducing non-negligible noise from extensive unrelated tokens. Besides, token-to-token self-attention requires heavy memory and computation consumption, hindering its deployment onto various computing platforms. In this paper, we propose a dynamic self-attention sparsification method for medical transformers by merging similar feature tokens for dependency distillation under the guidance of feature prototypes. Specifically, we first generate feature prototypes with genetic relationships by simulating the process of cell division, where the number of prototypes is much smaller than that of feature tokens. Then, in each self-attention layer, key and value tokens are grouped based on their distance from feature prototypes. Tokens in the same group, together with the corresponding feature prototype, would be merged into a new prototype according to both feature importance and grouping confidence. Finally, query tokens build pair-wise dependency with such newly-updated prototypes for fewer but global and more efficient interactions. Extensive experiments on three publicly available datasets demonstrate the effectiveness of our solution, working as a plug-and-play module for joint complexity reduction and performance improvement of various medical transformers. Code is available at https://github.com/xianlin7/DMA. | Revisiting Self-Attention in Medical Transformers via Dependency Sparsification | [
"Lin, Xian",
"Wang, Zhehao",
"Yan, Zengqiang",
"Yu, Li"
] | Conference | [
"https://github.com/xianlin7/DMA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 410 |
||
null | https://papers.miccai.org/miccai-2024/paper/1834_paper.pdf | @InProceedings{ Yan_Generalized_MICCAI2024,
author = { Yan, Zipei and Liang, Zhile and Liu, Zhengji and Wang, Shuai and Chun, Rachel Ka-Man and Li, Jizhou and Kee, Chea-su and Liang, Dong },
title = { { Generalized Robust Fundus Photography-based Vision Loss Estimation for High Myopia } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | High myopia significantly increases the risk of irreversible vision loss. Traditional perimetry-based visual field (VF) assessment provides systematic quantification of visual loss but it is subjective and time-consuming. Consequently, machine learning models utilizing fundus photographs to estimate VF have emerged as promising alternatives. However, due to the high variability and the limited availability of VF data, existing VF estimation models fail to generalize well, particularly when facing out-of-distribution data across diverse centers and populations. To tackle this challenge, we propose a novel, parameter-efficient framework to enhance the generalized robustness of VF estimation on both in- and out-of-distribution data. Specifically, we design a Refinement-by-Denoising (RED) module for feature refinement and adaptation from pretrained vision models, aiming to learn high-entropy feature representations and to mitigate the domain gap effectively and efficiently. Through independent validation on two distinct real-world datasets from separate centers, our method significantly outperforms existing approaches in RMSE, MAE and correlation coefficient for both internal and external validation. Our proposed framework benefits both in- and out-of-distribution VF estimation, offering significant clinical implications and potential utility in real-world ophthalmic practices. | Generalized Robust Fundus Photography-based Vision Loss Estimation for High Myopia | [
"Yan, Zipei",
"Liang, Zhile",
"Liu, Zhengji",
"Wang, Shuai",
"Chun, Rachel Ka-Man",
"Li, Jizhou",
"Kee, Chea-su",
"Liang, Dong"
] | Conference | 2407.03699 | [
"https://github.com/yanzipei/VF_RED"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 411 |
|
null | https://papers.miccai.org/miccai-2024/paper/1379_paper.pdf | @InProceedings{ Gao_Loose_MICCAI2024,
author = { Gao, Tianhong and Song, Jie and Yu, Xiaotian and Zhang, Shengxuming and Liang, Wenjie and Zhang, Hongbin and Li, Ziqian and Zhang, Wenzhuo and Zhang, Xiuming and Zhong, Zipeng and Song, Mingli and Feng, Zunlei },
title = { { Loose Lesion Location Self-supervision Enhanced Colorectal Cancer Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Early diagnosis of colorectal cancer (CRC) is crucial for improving survival and quality of life. While computed tomography (CT) is a key diagnostic tool, manually screening colon tumors is time-consuming and repetitive for radiologists. Recently, deep learning has shown promise in medical image analysis, but its clinical application is limited by the model’s unexplainability and the need for a large number of finely annotated samples. In this paper, we propose a loose lesion location self-supervision enhanced CRC diagnosis framework to reduce the requirement of fine sample annotations and improve the reliability of prediction results. For both non-contrast and contrast CT, despite potential deviations in imaging positions, the lesion location should be nearly consistent in images of both modalities at the same sequence position. In addition, lesion location in two successive slices is relatively close for the same modality. Therefore, a self-supervision mechanism is devised to enforce lesion location consistency at both temporal and modality levels of CT, reducing the need for fine annotations and enhancing the interpretability of diagnostics. Furthermore, this paper introduces a mask correction loopback strategy to reinforce the interdependence between category label and lesion location, ensuring the reliability of diagnosis. To verify our method’s effectiveness, we collect data from 3,178 CRC patients and 887 healthy controls. Experiment results show that the proposed method not only provides reliable lesion localization but also enhances the classification performance by 1-2%, offering an effective diagnostic tool for CRC. Code is available at https://github.com/Gaotianhong/LooseLocationSS. | Loose Lesion Location Self-supervision Enhanced Colorectal Cancer Diagnosis | [
"Gao, Tianhong",
"Song, Jie",
"Yu, Xiaotian",
"Zhang, Shengxuming",
"Liang, Wenjie",
"Zhang, Hongbin",
"Li, Ziqian",
"Zhang, Wenzhuo",
"Zhang, Xiuming",
"Zhong, Zipeng",
"Song, Mingli",
"Feng, Zunlei"
] | Conference | [
"https://github.com/Gaotianhong/LooseLocationSS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 412 |
||
null | https://papers.miccai.org/miccai-2024/paper/0747_paper.pdf | @InProceedings{ Shi_CS3_MICCAI2024,
author = { Shi, Yi and Tian, Xu-Peng and Wang, Yun-Kai and Zhang, Tie-Yi and Yao, Bing and Wang, Hui and Shao, Yong and Wang, Cen-Cen and Zeng, Rong and Zhan, De-Chuan },
title = { { CS3: Cascade SAM for Sperm Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Automated sperm morphology analysis plays a crucial role in the assessment of male fertility, yet its efficacy is often compromised by the challenges in accurately segmenting sperm images. Existing segmentation techniques, including the Segment Anything Model (SAM), are notably inadequate in addressing the complex issue of sperm overlap—a frequent occurrence in clinical samples. Our exploratory studies reveal that modifying image characteristics by removing sperm heads and easily segmentable areas, alongside enhancing the visibility of overlapping regions, markedly enhances SAM’s efficiency in segmenting intricate sperm structures. Motivated by these findings, we present the Cascade SAM for Sperm Segmentation (CS3), an unsupervised approach specifically designed to tackle the issue of sperm overlap. This method employs a cascade application of SAM to segment sperm heads, simple tails, and complex tails in stages. Subsequently, these segmented masks are meticulously matched and joined to construct complete sperm masks. In collaboration with leading medical institutions, we have compiled a dataset comprising approximately 2,000 unlabeled sperm images to fine-tune our method, and secured expert annotations for an additional 240 images to facilitate comprehensive model assessment. Experimental results demonstrate superior performance of CS3 compared to existing methods. | CS3: Cascade SAM for Sperm Segmentation | [
"Shi, Yi",
"Tian, Xu-Peng",
"Wang, Yun-Kai",
"Zhang, Tie-Yi",
"Yao, Bing",
"Wang, Hui",
"Shao, Yong",
"Wang, Cen-Cen",
"Zeng, Rong",
"Zhan, De-Chuan"
] | Conference | 2407.03772 | [
"https://github.com/shiy19/CS3"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 413 |
|
null | https://papers.miccai.org/miccai-2024/paper/2109_paper.pdf | @InProceedings{ Jou_HyperSpace_MICCAI2024,
author = { Joutard, Samuel and Pietsch, Maximilian and Prevost, Raphael },
title = { { HyperSpace: Hypernetworks for spacing-adaptive image segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Medical images are often acquired in different settings, requiring harmonization to adapt to the operating point of algorithms. Specifically, to standardize the physical spacing of imaging voxels in heterogeneous inference settings, images are typically resampled before being processed by deep learning models. However, down-sampling results in loss of information, whereas upsampling introduces redundant information leading to inefficient resource utilization. To overcome these issues, we propose to condition segmentation models on the voxel spacing using hypernetworks. Our approach allows processing images at their native resolutions or at resolutions adjusted to the hardware and time constraints at inference time. Our experiments across multiple datasets demonstrate that our approach achieves competitive performance compared to resolution-specific models, while offering greater flexibility for the end user. This also simplifies model development, deployment and maintenance. Our code will be made available at \url{https://github.com/anonymous}. | HyperSpace: Hypernetworks for spacing-adaptive image segmentation | [
"Joutard, Samuel",
"Pietsch, Maximilian",
"Prevost, Raphael"
] | Conference | 2407.03681 | [
"https://github.com/ImFusionGmbH/HyperSpace"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 414 |
|
null | https://papers.miccai.org/miccai-2024/paper/2092_paper.pdf | @InProceedings{ Hu_SALI_MICCAI2024,
author = { Hu, Qiang and Yi, Zhenyu and Zhou, Ying and Peng, Fang and Liu, Mei and Li, Qiang and Wang, Zhiwei },
title = { { SALI: Short-term Alignment and Long-term Interaction Network for Colonoscopy Video Polyp Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Colonoscopy videos provide richer information in polyp segmentation for rectal cancer diagnosis. However, the endoscope’s fast moving and close-up observing make the current methods suffer from large spatial incoherence and continuous low-quality frames, and thus yield limited segmentation accuracy. In this context, we focus on robust video polyp segmentation by enhancing the adjacent feature consistency and rebuilding the reliable polyp representation. To achieve this goal, we in this paper propose SALI network, a hybrid of Short-term Alignment Module (SAM) and Long-term Interaction Module (LIM). The SAM learns spatial-aligned features of adjacent frames via deformable convolution and further harmonizes them to capture more stable short-term polyp representation. In case of low-quality frames, the LIM stores the historical polyp representations as a long-term memory bank, and explores the retrospective relations to interactively rebuild more reliable polyp features for the current segmentation. Combining SAM and LIM, the SALI network of video segmentation shows a great robustness to the spatial variations and low-visual cues. Benchmark on the large-scale SUN-SEC verifies the superiority of SALI over the current state-of-the-arts by improving Dice by 2.1%, 2.5%, 4.1% and 1.9%, for the four test sub-sets, respectively. Codes are at https://github.com/Scatteredrain/SALI. | SALI: Short-term Alignment and Long-term Interaction Network for Colonoscopy Video Polyp Segmentation | [
"Hu, Qiang",
"Yi, Zhenyu",
"Zhou, Ying",
"Peng, Fang",
"Liu, Mei",
"Li, Qiang",
"Wang, Zhiwei"
] | Conference | 2406.13532 | [
"https://github.com/Scatteredrain/SALI"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 415 |
|
null | https://papers.miccai.org/miccai-2024/paper/1533_paper.pdf | @InProceedings{ Sto_Towards_MICCAI2024,
author = { Stolte, Skylar E. and Indahlastari, Aprinda and Albizu, Alejandro and Woods, Adam J. and Fang, Ruogu },
title = { { Towards tDCS Digital Twins using Deep Learning-based Direct Estimation of Personalized Electrical Field Maps from T1-Weighted MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Transcranial Direct Current Stimulation (tDCS) is a non-invasive brain stimulation method that applies neuromodulatory effects to the brain via low-intensity, direct current. It has shown possible positive effects in areas such as depression, substance use disorder, anxiety, and pain. Unfortunately, mixed trial results have delayed the field’s progress. Electrical current field approximation provides a way for tDCS researchers to estimate how an individual will respond to specific tDCS parameters. Publicly available physics-based stimulators have led to much progress; however, they can be error-prone, susceptible to quality issues (e.g., poor segmentation), and take multiple hours to run. Digital functional twins provide a method of estimating brain function in response to stimuli using computational methods. We seek to implement this idea for individualized tDCS. Hence, this work provides a proof-of-concept for generating electrical field maps for tDCS directly from T1-weighted magnetic resonance images (MRIs). Our deep learning method employs special loss regularizations to improve the model’s generalizability and
calibration across individual scans and electrode montages. Users may enter a desired electrode montage in addition to the unique MRI for a custom output. Our dataset includes 442 unique individual heads from individuals across the adult lifespan. The pipeline can generate results on the scale of minutes, unlike physics-based systems that can take 1-3 hours. Overall, our methods will help streamline the process of individual current dose estimations for improved tDCS interventions. | Towards tDCS Digital Twins using Deep Learning-based Direct Estimation of Personalized Electrical Field Maps from T1-Weighted MRI | [
"Stolte, Skylar E.",
"Indahlastari, Aprinda",
"Albizu, Alejandro",
"Woods, Adam J.",
"Fang, Ruogu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 416 |
||
null | https://papers.miccai.org/miccai-2024/paper/3730_paper.pdf | @InProceedings{ Pér_MuST_MICCAI2024,
author = { Pérez, Alejandra and Rodríguez, Santiago and Ayobi, Nicolás and Aparicio, Nicolás and Dessevres, Eugénie and Arbeláez, Pablo },
title = { { MuST: Multi-Scale Transformers for Surgical Phase Recognition } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Phase recognition in surgical videos is crucial for enhancing computer-aided surgical systems as it enables automated understanding of sequential procedural stages. Existing methods often rely on fixed temporal windows for video analysis to identify dynamic surgical phases. Thus, they struggle to simultaneously capture short-, mid-, and long-term information necessary to fully understand complex surgical procedures. To address these issues, we propose Multi-Scale Transformers for Surgical Phase Recognition (MuST), a novel Transformer-based approach that combines a Multi-Term Frame encoder with a Temporal Consistency Module to capture information across multiple temporal scales of a surgical video. Our Multi-Term Frame Encoder computes interdependencies across a hierarchy of temporal scales by sampling sequences at increasing strides around the frame of interest. Furthermore, we employ a long-term Transformer encoder over the frame embeddings to further enhance long-term reasoning. MuST achieves higher performance than previous state-of-the-art methods on three different public benchmarks. | MuST: Multi-Scale Transformers for Surgical Phase Recognition | [
"Pérez, Alejandra",
"Rodríguez, Santiago",
"Ayobi, Nicolás",
"Aparicio, Nicolás",
"Dessevres, Eugénie",
"Arbeláez, Pablo"
] | Conference | 2407.17361 | [
"https://github.com/BCV-Uniandes/MuST"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 417 |
|
null | https://papers.miccai.org/miccai-2024/paper/0850_paper.pdf | @InProceedings{ Pal_Convex_MICCAI2024,
author = { Pal, Jimut B. and Awate, Suyash P. },
title = { { Convex Segments for Convex Objects using DNN Boundary Tracing and Graduated Optimization } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Image segmentation often involves objects of interest that are biologically known to be convex shaped. While typical deep-neural-networks (DNNs) for object segmentation ignore object properties relating to shape, the DNNs that employ shape information fail to enforce hard constraints on shape. We design a brand-new DNN framework that guarantees convexity of the output object-segment by leveraging fundamental geometrical insights into the boundaries of convex-shaped objects. Moreover, we design our framework to build on typical existing DNNs for per-pixel segmentation, while maintaining simplicity in loss-term formulation and maintaining frugality in model size and training time. Results using six publicly available datasets demonstrate that our DNN framework, with little overheads, provides significant benefits in the robust segmentation of convex objects in out-of-distribution images. | Convex Segments for Convex Objects using DNN Boundary Tracing and Graduated Optimization | [
"Pal, Jimut B.",
"Awate, Suyash P."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 418 |
||
null | https://papers.miccai.org/miccai-2024/paper/2215_paper.pdf | @InProceedings{ Li_Universal_MICCAI2024,
author = { Li, Liu and Wang, Hanchun and Baugh, Matthew and Ma, Qiang and Zhang, Weitong and Ouyang, Cheng and Rueckert, Daniel and Kainz, Bernhard },
title = { { Universal Topology Refinement for Medical Image Segmentation with Polynomial Feature Synthesis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Although existing medical image segmentation methods provide impressive pixel-wise accuracy, they often neglect topological correctness, making their segmentations unusable for many downstream tasks. One option is to retrain such models whilst including a topology-driven loss component. However, this is computationally expensive and often impractical. A better solution would be to have a versatile plug-and-play topology refinement method that is compatible with any domain-specific segmentation pipeline. Directly training a post-processing model to mitigate topological errors often fails as such models tend to be biased towards the topological errors of a target segmentation network. The diversity of these errors is confined to the information provided by a labelled training set, which is especially problematic for small datasets. Our method solves this problem by training a model-agnostic topology refinement network with synthetic segmentations that cover a wide variety of topological errors. Inspired by the Stone-Weierstrass theorem, we synthesize topology-perturbation masks with randomly sampled coefficients of orthogonal polynomial bases, which ensures a complete and unbiased representation. Practically, we verified the efficiency and effectiveness of our methods as being compatible with multiple families of polynomial bases, and show evidence that our universal plug-and-play topology refinement network outperforms both existing topology-driven learning-based and post-processing methods. We also show that combining our method with learning-based models provides an effortless add-on, which can further improve the performance of existing approaches. | Universal Topology Refinement for Medical Image Segmentation with Polynomial Feature Synthesis | [
"Li, Liu",
"Wang, Hanchun",
"Baugh, Matthew",
"Ma, Qiang",
"Zhang, Weitong",
"Ouyang, Cheng",
"Rueckert, Daniel",
"Kainz, Bernhard"
] | Conference | 2409.09796 | [
"https://github.com/smilell/Universal-Topology-Refinement"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 419 |
|
null | https://papers.miccai.org/miccai-2024/paper/0407_paper.pdf | @InProceedings{ Zha_SHAN_MICCAI2024,
author = { Zhang, Ruixuan and Lu, Wenhuan and Guan, Cuntai and Gao, Jie and Wei, Xi and Li, Xuewei },
title = { { SHAN: Shape Guided Network for Thyroid Nodule Ultrasound Cross-Domain Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Segmentation models for thyroid ultrasound images are challenged by domain gaps across multi-center data. Some methods have been proposed to address this issue by enforcing consistency across multi-domains or by simulating domain gaps using augmented single-domain. Among them, single-domain generalization methods offer a more universal solution, but their heavy reliance on the data augmentation causes two issues for ultrasound image segmentation. Firstly, the corruption in data augmentation may affect the distribution of grayscale values with diagnostic significant, leading to a decline in model’s segmentation ability. The second is the real domain gap between ultrasound images is difficult to be simulated, resulting in features still correlate with domain, which in turn prevents the construction of the domain-independent latent space. To address these, given that the shape distribution of nodules is task-relevant but domain-independent, the SHape-prior Affine Network (SHAN) is proposed. SHAN serves shape prior as a stable latent mapping space, learning aspect ratio, size, and location of nodules through affine transformation of prior. Thus, our method enhances the segmentation capability and cross-domain generalization of model without any data augmentation methods. Additionally, SHAN is designed to be a plug-and-play method that can improve the performance of segmentation models with an encoder-decoder structure. Our experiments are performed on the public dataset TN3K and a private dataset TUI with 6 domains. By combining SHAN with several segmentation methods and comparing them with other single-domain generalization methods, it can be proved that SHAN performs optimally on both source and target domain data. | SHAN: Shape Guided Network for Thyroid Nodule Ultrasound Cross-Domain Segmentation | [
"Zhang, Ruixuan",
"Lu, Wenhuan",
"Guan, Cuntai",
"Gao, Jie",
"Wei, Xi",
"Li, Xuewei"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 420 |
||
null | https://papers.miccai.org/miccai-2024/paper/2081_paper.pdf | @InProceedings{ Bu_DnFPlane_MICCAI2024,
author = { Bu, Ran and Xu, Chenwei and Shan, Jiwei and Li, Hao and Wang, Guangming and Miao, Yanzi and Wang, Hesheng },
title = { { DnFPlane For Efficient and High-Quality 4D Reconstruction of Deformable Tissues } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Reconstruction of deformable tissues in robotic surgery from endoscopic stereo videos holds great significance for a variety of clinical applications. Existing methods primarily focus on enhancing inference speed, overlooking depth distortion issues in reconstruction results, particularly in regions occluded by surgical instruments. This may lead to misdiagnosis and surgical misguidance. In this paper, we propose an efficient algorithm designed to address the reconstruction challenges arising from depth distortion in complex scenarios. Unlike previous methods that treat each feature plane equally in the dynamic and static field, our framework guides the static field with the dynamic field, generating a dynamic-mask to filter features at the time level. This allows the network to focus on more active dynamic features, reducing depth distortion. In addition, we design a module to address dynamic blurring. Using the dynamic-mask as a guidance, we iteratively refine color values through Gated Recurrent Units (GRU), improving the clarity of tissues detail in the reconstructed results. Experiments on a public endoscope dataset demonstrate that our method outperforms existing state-of-the-art methods without compromising training time. Furthermore, our approach shows outstanding reconstruction performance in occluded regions, making it a more reliable solution in medical scenarios. Code is available: https://github.com/CUMT-IRSI/DnFPlane.git. | DnFPlane For Efficient and High-Quality 4D Reconstruction of Deformable Tissues | [
"Bu, Ran",
"Xu, Chenwei",
"Shan, Jiwei",
"Li, Hao",
"Wang, Guangming",
"Miao, Yanzi",
"Wang, Hesheng"
] | Conference | [
"https://github.com/CUMT-IRSI/DnFPlane.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 421 |
||
null | https://papers.miccai.org/miccai-2024/paper/1812_paper.pdf | @InProceedings{ Du_RETCLIP_MICCAI2024,
author = { Du, Jiawei and Guo, Jia and Zhang, Weihang and Yang, Shengzhu and Liu, Hanruo and Li, Huiqi and Wang, Ningli },
title = { { RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The Vision-Language Foundation model is increasingly investigated in the fields of computer vision and natural language processing, yet its exploration in ophthalmology and broader medical applications remains limited. The challenge is the lack of labeled data for the training of foundation model. To handle this issue, a CLIP-style retinal image foundation model is developed in this paper. Our foundation model, RET-CLIP, is specifically trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy to focus on left eye, right eye, and patient level to reflect real-world clinical scenarios. Extensive experiments demonstrate that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multiple disease diagnosis, and multi-label classification of multiple diseases, which demonstrate the performance and generality of our foundation model. We will release our pre-trained model publicly in support of further research. | RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports | [
"Du, Jiawei",
"Guo, Jia",
"Zhang, Weihang",
"Yang, Shengzhu",
"Liu, Hanruo",
"Li, Huiqi",
"Wang, Ningli"
] | Conference | 2405.14137 | [
"https://github.com/sStonemason/RET-CLIP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 422 |
|
null | https://papers.miccai.org/miccai-2024/paper/2694_paper.pdf | @InProceedings{ Nam_InstaSAM_MICCAI2024,
author = { Nam, Siwoo and Namgung, Hyun and Jeong, Jaehoon and Luna, Miguel and Kim, Soopil and Chikontwe, Philip and Park, Sang Hyun },
title = { { InstaSAM: Instance-aware Segment Any Nuclei Model with Point Annotations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Weakly supervised nuclei segmentation methods have been proposed to simplify the demanding labeling process by primarily depending on point annotations. These methods generate pseudo labels for training based on given points, but their accuracy is often limited by inaccurate pseudo labels. Even though there have been attempts to improve performance by utilizing power of foundation model e.g., Segment Anything Model (SAM), these approaches require more precise guidance (e.g., box), and lack of ability to distinguish individual nuclei instances. To this end, we propose InstaSAM, a novel weakly supervised nuclei instance segmentation method that utilizes confidence of prediction as a guide while leveraging the powerful representation of SAM. Specifically, we use point prompts to initially generate rough pseudo instance maps and fine-tune the adapter layers in image encoder. To exclude unreliable instances, we selectively extract segmented cells with high confidence from pseudo instance segmentation and utilize these for the training of binary segmentation and distance maps. Owing to their shared use of the image encoder, the binary map, distance map, and pseudo instance map benefit from complementary updates. Our experimental results demonstrate that our method significantly outperforms state-of-the-art methods and is robust in few-shot, shifted point, and cross-domain settings. The code will be available upon publication. | InstaSAM: Instance-aware Segment Any Nuclei Model with Point Annotations | [
"Nam, Siwoo",
"Namgung, Hyun",
"Jeong, Jaehoon",
"Luna, Miguel",
"Kim, Soopil",
"Chikontwe, Philip",
"Park, Sang Hyun"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 423 |
||
null | https://papers.miccai.org/miccai-2024/paper/0141_paper.pdf | @InProceedings{ Xia_Data_MICCAI2024,
author = { Xiao, Anqi and Han, Keyi and Shi, Xiaojing and Tian, Jie and Hu, Zhenhua },
title = { { Data Augmentation with Multi-armed Bandit on Image Deformations Improves Fluorescence Glioma Boundary Recognition } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Recognizing glioma boundaries is challenging because glioma is a diffusely growing malignant tumor. Although fluorescence molecular imaging, especially in the second near-infrared window (NIR-II, 1000-1700 nm), helps improve surgical outcomes, fast and precise recognition remains in demand. Data-driven deep learning technology shows great promise in providing objective, fast, and precise recognition of glioma boundaries, but the lack of data poses challenges for designing effective models. Automatic data augmentation can improve the representation of small-scale datasets without requiring extensive prior information, which is suitable for fluorescence-based glioma boundary recognition. We propose Explore and Exploit Augment (EEA), based on a multi-armed bandit over image deformations, enabling dynamic policy adjustment during training. Additionally, images captured in white light and the first near-infrared window (NIR-I, 700-900 nm) are introduced to further enhance performance. Experiments demonstrate that EEA improves the generalization of four types of models for glioma boundary recognition, suggesting significant potential for aiding in medical image classification. Code is available at https://github.com/ainieli/EEA. | Data Augmentation with Multi-armed Bandit on Image Deformations Improves Fluorescence Glioma Boundary Recognition | [
"Xiao, Anqi",
"Han, Keyi",
"Shi, Xiaojing",
"Tian, Jie",
"Hu, Zhenhua"
] | Conference | [
"https://github.com/ainieli/EEA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 424 |
||
null | https://papers.miccai.org/miccai-2024/paper/0026_paper.pdf | @InProceedings{ Gui_TailEnhanced_MICCAI2024,
author = { Gui, Shuangchun and Wang, Zhenkun },
title = { { Tail-Enhanced Representation Learning for Surgical Triplet Recognition } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Surgical triplet recognition aims to identify instruments, verbs, and targets in a single video frame, while establishing associations among these components. Since this task has a severely imbalanced class distribution, precisely identifying tail classes becomes a critical challenge. To cope with this issue, existing methods leverage knowledge distillation to facilitate tail triplet recognition. However, these methods overlook the low inter-triplet feature variance, diminishing the model’s confidence in identifying classes. As a technique for learning discriminative features across instances, contrastive learning (CL) shows great potential in identifying triplets. Under this imbalanced class distribution, directly applying CL presents two problems: 1) multiple activities in one image expose instance feature learning to interference from other classes, and 2) limited training samples of tail classes may lead to inadequate semantic capturing. In this paper, we propose a tail-enhanced representation learning (TERL) method to address these problems. TERL employs a disentangling module to acquire instance-level features in a single image. After obtaining these disentangled instances, those from tail classes are selected to conduct CL, which captures discriminative features by enabling a global memory bank. During CL, we further apply semantic enhancement to each tail class. This generates component class prototypes based on the global bank, thus providing additional component information to tail classes. We evaluate the performance of TERL on the 5-fold cross-validation split of the CholecT45 dataset. The experimental results consistently demonstrate the superiority of TERL over state-of-the-art methods. | Tail-Enhanced Representation Learning for Surgical Triplet Recognition | [
"Gui, Shuangchun",
"Wang, Zhenkun"
] | Conference | [
"https://github.com/CIAM-Group/ComputerVision_Codes/tree/main/TERL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 425 |
||
null | https://papers.miccai.org/miccai-2024/paper/1306_paper.pdf | @InProceedings{ Jai_Follow_MICCAI2024,
author = { Jain, Kshitiz and Rangarajan, Krithika and Arora, Chetan },
title = { { Follow the Radiologist: Clinically Relevant Multi-View Cues for Breast Cancer Detection from Mammograms } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Automated breast cancer detection using deep learning based object detection models has achieved high sensitivity, but often struggles with a high false positive rate. While radiologists possess the ability to analyze and identify malignant masses in mammograms using multiple views, this poses a challenge for deep learning based models. Inspired by how object appearance behaves across multiple views in natural images, researchers have proposed several techniques to exploit the geometric correspondence between the locations of a tumor in multiple views and reduce false positives. We question the clinical relevance of such cues. We show that there is inherent ambiguity in the geometric correspondence between the two mammography views, because of which accurate geometric alignment is not possible. Instead, we propose to match morphological cues between the two views. Harnessing recent advances in object detection approaches in computer vision, we adapt a state-of-the-art transformer architecture to use the proposed morphological cues. We claim that the proposed cues agree more closely with a clinician’s approach than geometric alignment does. Using our approach, we show a significant improvement of 5% in sensitivity at 0.3 False Positives per Image (FPI) on the benchmark INBreast dataset. We also report improvements of 2% and 1% on an in-house dataset and the benchmark DDSM dataset, respectively. Realizing that the lack of an open-source code base in this area impedes reproducible research, we are publicly releasing source code and pretrained models for this work. | Follow the Radiologist: Clinically Relevant Multi-View Cues for Breast Cancer Detection from Mammograms | [
"Jain, Kshitiz",
"Rangarajan, Krithika",
"Arora, Chetan"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 426 |
||
null | https://papers.miccai.org/miccai-2024/paper/0151_paper.pdf | @InProceedings{ Shi_ShapeMambaEM_MICCAI2024,
author = { Shi, Ruohua and Pang, Qiufan and Ma, Lei and Duan, Lingyu and Huang, Tiejun and Jiang, Tingting },
title = { { ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Electron microscopy (EM) imaging offers unparalleled resolution for analyzing neural tissues, crucial for uncovering the intricacies of synaptic connections and neural processes fundamental to understanding behavioral mechanisms. Recently, the foundation models have demonstrated impressive performance across numerous natural and medical image segmentation tasks. However, applying these foundation models to EM segmentation faces significant challenges due to domain disparities. This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation, which employs adapters for long-range dependency modeling and an encoder for local shape description within the original foundation model. This approach effectively addresses the unique volumetric and morphological complexities of EM data. Tested over a wide range of EM images, covering five segmentation tasks and 10 datasets, ShapeMamba-EM outperforms existing methods, establishing a new standard in EM image segmentation and enhancing the understanding of neural tissue architecture. | ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation | [
"Shi, Ruohua",
"Pang, Qiufan",
"Ma, Lei",
"Duan, Lingyu",
"Huang, Tiejun",
"Jiang, Tingting"
] | Conference | 2408.14114 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 427 |
|
null | https://papers.miccai.org/miccai-2024/paper/3610_paper.pdf | @InProceedings{ Wu_TLRN_MICCAI2024,
author = { Wu, Nian and Xing, Jiarui and Zhang, Miaomiao },
title = { { TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | This paper presents a novel approach, termed Temporal Latent Residual Network (TLRN), to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness in consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and the current input accumulated from previous time frames. We validate the effectiveness of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results show that TLRN is able to achieve substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN. | TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration | [
"Wu, Nian",
"Xing, Jiarui",
"Zhang, Miaomiao"
] | Conference | 2407.11219 | [
"https://github.com/nellie689/TLRN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 428 |
|
null | https://papers.miccai.org/miccai-2024/paper/0930_paper.pdf | @InProceedings{ Ouy_Promptbased_MICCAI2024,
author = { Ouyang, Xi and Gu, Dongdong and Li, Xuejian and Zhou, Wenqi and Chen, Qianqian and Zhan, Yiqiang and Zhou, Xiang and Shi, Feng and Xue, Zhong and Shen, Dinggang },
title = { { Prompt-based Segmentation Model of Anatomical Structures and Lesions in CT Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Deep learning models have been successfully developed for various medical image segmentation tasks. However, individual models are commonly developed using specific data along with a substantial amount of annotations, ignoring the internal connections between different tasks. To overcome this limitation, we integrate such multi-task processing into a general computed tomography (CT) image segmentation model trained on large-scale data, capable of performing a wide range of segmentation tasks. The rationale is that different segmentation tasks are often correlated, and their joint learning could potentially improve overall segmentation performance. Specifically, the proposed model is designed with a transformer-based encoder-decoder architecture coupled with automatic pathway (AP) modules. It provides a common image encoding and an automatic task-driven decoding pathway for performing different segmentation tasks via specific prompts. As a unified model capable of handling multiple tasks, our model not only improves the performance of seen tasks but also quickly adapts to new unseen tasks with a relatively small number of training samples while maintaining reasonable performance. Furthermore, the modular design of automatic pathway routing allows for parameter pruning for network size reduction during deployment. | Prompt-based Segmentation Model of Anatomical Structures and Lesions in CT Images | [
"Ouyang, Xi",
"Gu, Dongdong",
"Li, Xuejian",
"Zhou, Wenqi",
"Chen, Qianqian",
"Zhan, Yiqiang",
"Zhou, Xiang",
"Shi, Feng",
"Xue, Zhong",
"Shen, Dinggang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 429 |
||
null | https://papers.miccai.org/miccai-2024/paper/3963_paper.pdf | @InProceedings{ Kha_Active_MICCAI2024,
author = { Khanal, Bidur and Dai, Tianhong and Bhattarai, Binod and Linte, Cristian },
title = { { Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | The robustness of supervised deep learning-based medical image classification is significantly undermined by label noise in the training data. Although several methods have been proposed to enhance classification performance in the presence of noisy labels, they face some challenges: 1) a struggle with class-imbalanced datasets, leading to the frequent overlooking of minority classes as noisy samples; 2) a singular focus on maximizing performance using noisy datasets, without incorporating experts-in-the-loop for actively cleaning the noisy labels. To mitigate these challenges, we propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning. This approach not only improves the robustness of medical image classification in the presence of noisy labels but also iteratively improves the quality of the dataset by relabeling the important incorrect labels, under a limited annotation budget. Furthermore, we introduce a novel Variance of Gradients approach in the LNL phase, which complements the loss-based sample selection by also sampling under-represented examples. Using two imbalanced noisy medical classification datasets, we demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples. Code available at: https://github.com/Bidur-Khanal/imbalanced-medical-active-label-cleaning.git | Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise | [
"Khanal, Bidur",
"Dai, Tianhong",
"Bhattarai, Binod",
"Linte, Cristian"
] | Conference | 2407.05973 | [
"https://github.com/Bidur-Khanal/imbalanced-medical-active-label-cleaning.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 430 |
|
null | https://papers.miccai.org/miccai-2024/paper/0339_paper.pdf | @InProceedings{ Liu_CriDiff_MICCAI2024,
author = { Liu, Tingwei and Zhang, Miao and Liu, Leiye and Zhong, Jialong and Wang, Shuyao and Piao, Yongri and Lu, Huchuan },
title = { { CriDiff: Criss-cross Injection Diffusion Framework via Generative Pre-train for Prostate Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Recently, Diffusion Probabilistic Model (DPM)-based methods have achieved substantial success in the field of medical image segmentation. However, most of these methods fail to enable the diffusion model to learn edge features and non-edge features effectively and to inject them efficiently into the diffusion backbone. Additionally, the domain gap between the image features and the diffusion model features poses a great challenge to prostate segmentation. In this paper, we propose CriDiff, a two-stage feature-injecting framework with a Criss-cross Injection Strategy (CIS) and a Generative Pre-train (GP) approach for prostate segmentation. The CIS maximizes the use of multi-level features by efficiently harnessing the complementarity of high- and low-level features. To effectively learn multi-level edge features and non-edge features, we propose two parallel conditioners in the CIS: the Boundary Enhance Conditioner (BEC) and the Core Enhance Conditioner (CEC), which discriminatively model the image edge regions and non-edge regions. Moreover, the GP approach eases the inconsistency between the image features and the diffusion model without adding additional parameters. Extensive experiments on four benchmark datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on four evaluation metrics. | CriDiff: Criss-cross Injection Diffusion Framework via Generative Pre-train for Prostate Segmentation | [
"Liu, Tingwei",
"Zhang, Miao",
"Liu, Leiye",
"Zhong, Jialong",
"Wang, Shuyao",
"Piao, Yongri",
"Lu, Huchuan"
] | Conference | 2406.14186 | [
"https://github.com/LiuTingWed/CriDiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 431 |
|
null | https://papers.miccai.org/miccai-2024/paper/0271_paper.pdf | @InProceedings{ Pan_ASA_MICCAI2024,
author = { Pang, Jiaxuan and Ma, DongAo and Zhou, Ziyu and Gotway, Michael B. and Liang, Jianming },
title = { { ASA: Learning Anatomical Consistency, Sub-volume Spatial Relationships and Fine-grained Appearance for CT Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | To achieve superior performance, deep learning relies on copious, high-quality, annotated data, but annotating medical images is tedious, laborious, and time-consuming, demanding specialized expertise, especially for segmentation tasks. Segmenting medical images requires not only macroscopic anatomical patterns but also microscopic textural details. Given the intriguing symmetry and recurrent patterns inherent in medical images, we envision a powerful deep model that exploits high-level context, spatial relationships in anatomy, and low-level, fine-grained, textural features in tissues in a self-supervised manner. To realize this vision, we have developed a novel self-supervised learning (SSL) approach called ASA to learn anatomical consistency, sub-volume spatial relationships, and fine-grained appearance for 3D computed tomography images. The novelty of ASA stems from its utilization of intrinsic properties of medical images, with a specific focus on computed tomography volumes. ASA enhances the model’s capability to learn anatomical features from the image, encompassing global representation, local spatial relationships, and intricate appearance details. Extensive experimental results validate the robustness, effectiveness, and efficiency of the pretrained ASA model. With all code and pretrained models released at GitHub.com/JLiangLab/ASA, we hope ASA serves as an inspiration and a foundation for developing enhanced SSL models with a deep understanding of anatomical structures and their spatial relationships, thereby improving diagnostic accuracy and facilitating advanced medical imaging applications. | ASA: Learning Anatomical Consistency, Sub-volume Spatial Relationships and Fine-grained Appearance for CT Images | [
"Pang, Jiaxuan",
"Ma, DongAo",
"Zhou, Ziyu",
"Gotway, Michael B.",
"Liang, Jianming"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 432 |
||
null | https://papers.miccai.org/miccai-2024/paper/1781_paper.pdf | @InProceedings{ Luo_An_MICCAI2024,
author = { Luo, Zihao and Luo, Xiangde and Gao, Zijun and Wang, Guotai },
title = { { An Uncertainty-guided Tiered Self-training Framework for Active Source-free Domain Adaptation in Prostate Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Deep learning models have exhibited remarkable efficacy in accurately delineating the prostate for diagnosis and treatment of prostate diseases, but challenges persist in achieving robust generalization across different medical centers. Source-free Domain Adaptation (SFDA) is a promising technique to adapt deep segmentation models to address privacy and security concerns while reducing domain shifts between source and target domains. However, recent literature indicates that the performance of SFDA remains far from satisfactory due to unpredictable domain gaps. Annotating a few target domain samples is acceptable, as it can lead to significant performance improvement with a low annotation cost. Nevertheless, due to extremely limited annotation budgets, careful consideration is needed in selecting samples for annotation. Inspired by this, our goal is to develop Active Source-free Domain Adaptation (ASFDA) for medical image segmentation. Specifically, we propose a novel Uncertainty-guided Tiered Self-training (UGTST) framework, which combines efficient active sample selection, via entropy-based primary local peak filtering to aggregate global uncertainty and a diversity-aware redundancy filter, with a tiered self-learning strategy to achieve stable domain adaptation. Experimental results on cross-center prostate MRI segmentation datasets revealed that our method yielded marked advancements with a mere 5% annotation, exhibiting average Dice score improvements of 9.78% and 7.58% in two target domains compared with state-of-the-art methods, on par with fully supervised learning. Code is available at: https://github.com/HiLab-git/UGTST. | An Uncertainty-guided Tiered Self-training Framework for Active Source-free Domain Adaptation in Prostate Segmentation | [
"Luo, Zihao",
"Luo, Xiangde",
"Gao, Zijun",
"Wang, Guotai"
] | Conference | 2407.02893 | [
"https://github.com/hilab-git/ugtst"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 433 |
|
null | https://papers.miccai.org/miccai-2024/paper/1675_paper.pdf | @InProceedings{ Zho_Weaklysupervised_MICCAI2024,
author = { Zhong, Yuan and Tang, Chenhui and Yang, Yumeng and Qi, Ruoxi and Zhou, Kang and Gong, Yuqi and Heng, Pheng-Ann and Hsiao, Janet H. and Dou, Qi },
title = { { Weakly-supervised Medical Image Segmentation with Gaze Annotations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Eye gaze that reveals human observational patterns has increasingly been incorporated into solutions for vision tasks. Despite recent explorations on leveraging gaze to aid deep networks, few studies
exploit gaze as an efficient annotation approach for medical image segmentation which typically entails heavy annotating costs. In this paper, we propose to collect dense weak supervision for medical image segmentation with a gaze annotation scheme. To train with gaze, we propose a multi-level framework that trains multiple networks from discriminative human attention, simulated with a set of pseudo-masks derived by applying hierarchical thresholds on gaze heatmaps. Furthermore, to mitigate gaze noise, a cross-level consistency is exploited to regularize overfitting noisy labels, steering models toward clean patterns learned by peer networks. The proposed method is validated on two public medical datasets of polyp and prostate segmentation tasks. We contribute a high-quality gaze dataset entitled GazeMedSeg as an extension to the popular medical segmentation datasets. To the best of our knowledge, this is the first gaze dataset for medical image segmentation. Our experiments demonstrate that gaze annotation outperforms previous label-efficient annotation schemes in terms of both performance and annotation time. Our collected gaze data and code are available at: https://github.com/med-air/GazeMedSeg. | Weakly-supervised Medical Image Segmentation with Gaze Annotations | [
"Zhong, Yuan",
"Tang, Chenhui",
"Yang, Yumeng",
"Qi, Ruoxi",
"Zhou, Kang",
"Gong, Yuqi",
"Heng, Pheng-Ann",
"Hsiao, Janet H.",
"Dou, Qi"
] | Conference | 2407.07406 | [
"https://github.com/med-air/GazeMedSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 434 |
|
null | https://papers.miccai.org/miccai-2024/paper/1160_paper.pdf | @InProceedings{ Che_LowRank_MICCAI2024,
author = { Chen, Qian and Zhu, Lei and He, Hangzhou and Zhang, Xinliang and Zeng, Shuang and Ren, Qiushi and Lu, Yanye },
title = { { Low-Rank Mixture-of-Experts for Continual Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | The primary goal of the continual learning (CL) task in the medical image segmentation field is to solve the “catastrophic forgetting” problem, where the model totally forgets previously learned features when it is extended to new categories (class-level) or tasks (task-level). Due to privacy protection, the historical data labels are inaccessible. Prevalent continual learning methods primarily focus on generating pseudo-labels for old datasets to force the model to memorize the learned features. However, incorrect pseudo-labels may corrupt the learned features and lead to a new problem: the better the model is trained on old tasks, the poorer it performs on new tasks. To avoid this problem, we propose a network that introduces a data-specific Mixture of Experts (MoE) structure to handle new tasks or categories, ensuring that the network parameters of previous tasks are unaffected or only minimally impacted. To further overcome the tremendous memory costs caused by introducing additional structures, we propose a Low-Rank strategy which significantly reduces memory cost. Fortunately, for task-level CL, we find that low-rank experts learned in previous tasks do not impair subsequent tasks but can assist them. For class-level CL, we propose a gating function combined with language features, effectively enabling the model to handle multi-organ segmentation tasks across new and old classes. We validate our method on both class-level and task-level continual learning challenges. Extensive experiments on multiple datasets show that our model outperforms all other methods. | Low-Rank Mixture-of-Experts for Continual Medical Image Segmentation | [
"Chen, Qian",
"Zhu, Lei",
"He, Hangzhou",
"Zhang, Xinliang",
"Zeng, Shuang",
"Ren, Qiushi",
"Lu, Yanye"
] | Conference | 2406.13583 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 435 |
|
null | https://papers.miccai.org/miccai-2024/paper/0500_paper.pdf | @InProceedings{ Lu_H2ASeg_MICCAI2024,
author = { Lu, Jinpeng and Chen, Jingyun and Cai, Linghan and Jiang, Songhan and Zhang, Yongbing },
title = { { H2ASeg: Hierarchical Adaptive Interaction and Weighting Network for Tumor Segmentation in PET/CT Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Positron emission tomography (PET) combined with computed tomography (CT) imaging is routinely used in cancer diagnosis and prognosis by providing complementary information. Automatically segmenting tumors in PET/CT images can significantly improve examination efficiency. Traditional multi-modal segmentation solutions mainly rely on concatenation operations for modality fusion, which fail to effectively model the non-linear dependencies between PET and CT modalities. Recent studies have investigated various approaches to optimize the fusion of modality-specific features for enhancing joint representations. However, modality-specific encoders used in these methods operate independently, inadequately leveraging the synergistic relationships inherent in PET and CT modalities, for example, the complementarity between semantics and structure. To address these issues, we propose a Hierarchical Adaptive Interaction and Weighting Network termed H2ASeg to explore the intrinsic cross-modal correlations and transfer potential complementary information. Specifically, we design a Modality-Cooperative Spatial Attention (MCSA) module that performs intra- and inter-modal interactions globally and locally. Additionally, a Target-Aware Modality Weighting (TAMW) module is developed to highlight tumor-related features within multi-modal features, thereby refining tumor segmentation. By embedding these modules across different layers, H2ASeg can hierarchically model cross-modal correlations, enabling a nuanced understanding of both semantic and structural tumor features. Extensive experiments demonstrate the superiority of H2ASeg, outperforming state-of-the-art methods on AutoPet-II and Hecktor2022 benchmarks. The code is released at https://github.com/JinPLu/H2ASeg. | H2ASeg: Hierarchical Adaptive Interaction and Weighting Network for Tumor Segmentation in PET/CT Images | [
"Lu, Jinpeng",
"Chen, Jingyun",
"Cai, Linghan",
"Jiang, Songhan",
"Zhang, Yongbing"
] | Conference | 2403.18339 | [
"https://github.com/JinPLu/H2ASeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 436 |
|
null | https://papers.miccai.org/miccai-2024/paper/0249_paper.pdf | @InProceedings{ Zha_TARDRL_MICCAI2024,
author = { Zhao, Yunxi and Nie, Dong and Chen, Geng and Wu, Xia and Zhang, Daoqiang and Wen, Xuyun },
title = { { TARDRL: Task-Aware Reconstruction for Dynamic Representation Learning of fMRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Existing studies in fMRI analysis leverage the masked autoencoder, a self-supervised framework, to build models that learn representations and conduct prediction for various fMRI-related tasks. It involves pretraining the model by reconstructing signals of brain regions that are randomly masked at different time segments and subsequently fine-tuning it for prediction tasks. Though it has shown improved performance in prediction tasks, we argue that directly applying this framework to fMRI data may yield sub-optimal results. Firstly, random masking is ineffective for highly redundant fMRI data. Secondly, the reconstruction process is not task-aware, ignoring a critical phenomenon: the varying contributions of different brain regions to different prediction tasks. In this work, we propose and demonstrate a hypothesis that learning representations by reconstructing signals from important ROIs at different time segments can enhance prediction performance. Specifically, we introduce a novel learning framework, Task-Aware Reconstruction Dynamic Representation Learning (TARDRL), to improve prediction performance through task-aware reconstruction. Our approach incorporates an attention-guided masking strategy, which leverages attention maps from the prediction process to guide signal masking during reconstruction, making the reconstruction task task-aware. Extensive experiments show that our model outperforms state-of-the-art methods on the ABIDE and ADNI datasets, with high interpretability. | TARDRL: Task-Aware Reconstruction for Dynamic Representation Learning of fMRI | [
"Zhao, Yunxi",
"Nie, Dong",
"Chen, Geng",
"Wu, Xia",
"Zhang, Daoqiang",
"Wen, Xuyun"
] | Conference | [
"https://github.com/WENXUYUN/TARDRL/"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 437 |
||
null | https://papers.miccai.org/miccai-2024/paper/0322_paper.pdf | @InProceedings{ Guz_Differentiable_MICCAI2024,
author = { Guzzi, Lisa and Zuluaga, Maria A. and Lareyre, Fabien and Di Lorenzo, Gilles and Goffart, Sébastien and Chierici, Andrea and Raffort, Juliette and Delingette, Hervé },
title = { { Differentiable Soft Morphological Filters for Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Morphological operations such as erosion, dilation, and skeletonization offer valuable tools for processing and analyzing segmentation masks. Several studies have investigated the integration of differentiable morphological operations within deep segmentation neural networks, particularly for the computation of loss functions. However, those methods have shown limitations in terms of reliability, versatility or applicability to different types of operations and image dimensions. In this paper, we present a novel framework that provides differentiable morphological filters on probabilistic maps. Given any morphological filter defined on 2D or 3D binary images, our approach generates a soft version of this filter by translating Boolean expressions into multilinear polynomials. Moreover, using proxy polynomials, these soft filters have the same computational complexity as the original binary filter. We demonstrate on diverse biomedical datasets that our method can be easily integrated into neural networks either as a loss function or as the final morphological layer in a segmentation network. In particular, we show that the proposed filters for mask erosion, dilation or skeletonization lead to competitive solutions compared to the state-of-the-art. | Differentiable Soft Morphological Filters for Medical Image Segmentation | [
"Guzzi, Lisa",
"Zuluaga, Maria A.",
"Lareyre, Fabien",
"Di Lorenzo, Gilles",
"Goffart, Sébastien",
"Chierici, Andrea",
"Raffort, Juliette",
"Delingette, Hervé"
] | Conference | [
"https://github.com/lisaGUZZI/Soft-morph"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 438 |
||
null | https://papers.miccai.org/miccai-2024/paper/0221_paper.pdf | @InProceedings{ Bai_EndoUIC_MICCAI2024,
author = { Bai, Long and Chen, Tong and Tan, Qiaozhi and Nah, Wan Jun and Li, Yanheng and He, Zhicheng and Yuan, Sishen and Chen, Zhen and Wu, Jinlin and Islam, Mobarakol and Li, Zhen and Liu, Hongbin and Ren, Hongliang },
title = { { EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Wireless Capsule Endoscopy (WCE) is highly valued for its non-invasive and painless approach, though its effectiveness is compromised by uneven illumination from hardware constraints and complex internal dynamics, leading to overexposed or underexposed images. While researchers have discussed the challenges of low-light enhancement in WCE, the issue of correcting for different exposure levels remains underexplored. To tackle this, we introduce EndoUIC, a WCE unified illumination correction solution using an end-to-end promptable diffusion transformer (DiT) model. In our work, the illumination prompt module guides the model to adapt to different exposure levels and perform targeted image enhancement, in which the Adaptive Prompt Integration (API) and Global Prompt Scanner (GPS) modules further boost the concurrent representation learning between the prompt parameters and features. Besides, the U-shaped restoration DiT model captures the long-range dependencies and contextual information for unified illumination restoration. Moreover, we present a novel Capsule-endoscopy Exposure Correction (CEC) dataset, including ground-truth and corrupted image pairs annotated by expert photographers. Extensive experiments against a variety of state-of-the-art (SOTA) methods on four datasets showcase the effectiveness of our proposed method and components in WCE illumination restoration, and the additional downstream experiments further demonstrate its utility for clinical diagnosis and surgical assistance. The code and the proposed dataset are available at github.com/longbai1006/EndoUIC. | EndoUIC: Promptable Diffusion Transformer for Unified Illumination Correction in Capsule Endoscopy | [
"Bai, Long",
"Chen, Tong",
"Tan, Qiaozhi",
"Nah, Wan Jun",
"Li, Yanheng",
"He, Zhicheng",
"Yuan, Sishen",
"Chen, Zhen",
"Wu, Jinlin",
"Islam, Mobarakol",
"Li, Zhen",
"Liu, Hongbin",
"Ren, Hongliang"
] | Conference | 2406.13705 | [
"https://github.com/longbai1006/EndoUIC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 439 |
|
null | https://papers.miccai.org/miccai-2024/paper/0915_paper.pdf | @InProceedings{ Li_Improved_MICCAI2024,
author = { Li, Chunli and Zhang, Xiaoming and Gao, Yuan and Yin, Xiaoli and Lu, Le and Zhang, Ling and Yan, Ke and Shi, Yu },
title = { { Improved Esophageal Varices Assessment from Non-Contrast CT Scans } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Esophageal varices (EV), a serious health concern resulting from portal hypertension, are traditionally diagnosed through invasive endoscopic procedures. Despite non-contrast computed tomography (NC-CT) imaging being a less expensive and non-invasive imaging modality, it has yet to gain full acceptance as a primary clinical diagnostic tool for EV evaluation. To overcome existing diagnostic challenges, we present the Multi-Organ-cOhesion-Network (MOON), a novel framework enhancing the analysis of critical organ features in NC-CT scans for effective assessment of EV. Drawing inspiration from the thorough assessment practices of radiologists, MOON establishes a cohesive multi-organ analysis model that unifies the imaging features of the related organs of EV, namely esophagus, liver, and spleen. This integration significantly increases the diagnostic accuracy for EV. We have compiled an extensive NC-CT dataset of 1,255 patients diagnosed with EV, spanning three grades of severity. Each case is corroborated by endoscopic diagnostic results. The efficacy of MOON has been substantiated through a validation process involving multi-fold cross-validation on 1,010 cases and an independent test on 245 cases, exhibiting superior diagnostic performance compared to methods focusing solely on the esophagus (for classifying severe grade: AUC of 0.864 versus 0.803, and for moderate to severe grades: AUC of 0.832 versus 0.793). To our knowledge, MOON is the first work to incorporate a synchronized multi-organ NC-CT analysis for EV assessment, providing a more acceptable and minimally invasive alternative for patients compared to traditional endoscopy. | Improved Esophageal Varices Assessment from Non-Contrast CT Scans | [
"Li, Chunli",
"Zhang, Xiaoming",
"Gao, Yuan",
"Yin, Xiaoli",
"Lu, Le",
"Zhang, Ling",
"Yan, Ke",
"Shi, Yu"
] | Conference | 2407.13210 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 440 |
|
null | https://papers.miccai.org/miccai-2024/paper/3839_paper.pdf | @InProceedings{ De_Interpretable_MICCAI2024,
author = { De Vries, Matt and Naidoo, Reed and Fourkioti, Olga and Dent, Lucas G. and Curry, Nathan and Dunsby, Christopher and Bakal, Chris },
title = { { Interpretable phenotypic profiling of 3D cellular morphodynamics } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | The dynamic 3D shape of a cell acts as a signal of its physiological state, reflecting the interplay of environmental stimuli and intra- and extra-cellular processes. However, there is little quantitative understanding of cell shape determination in 3D, largely due to the lack of data-driven methods that analyse 3D cell shape dynamics. To address this, we have developed MorphoSense, an interpretable, variable-length multivariate time series classification (TSC) pipeline based on multiple instance learning (MIL). We use this pipeline to classify 3D cell shape dynamics of perturbed cancer cells and learn hallmark 3D shape changes associated with clinically relevant and shape-modulating small molecule treatments. To show the generalisability across datasets, we apply our pipeline to classify migrating T-cells in collagen matrices and assess interpretability on a synthetic dataset. Across datasets, our pipeline offers increased predictive performance and higher-quality interpretations. To our knowledge, our work is the first to utilise MIL for multivariate, variable-length TSC, focusing on interpretable 3D morphodynamic profiling of biological cells. | Interpretable phenotypic profiling of 3D cellular morphodynamics | [
"De Vries, Matt",
"Naidoo, Reed",
"Fourkioti, Olga",
"Dent, Lucas G.",
"Curry, Nathan",
"Dunsby, Christopher",
"Bakal, Chris"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 441 |
||
null | https://papers.miccai.org/miccai-2024/paper/1687_paper.pdf | @InProceedings{ Hua_MetaAD_MICCAI2024,
author = { Huang, Haolin and Shen, Zhenrong and Wang, Jing and Wang, Xinyu and Lu, Jiaying and Lin, Huamei and Ge, Jingjie and Zuo, Chuantao and Wang, Qian },
title = { { MetaAD: Metabolism-Aware Anomaly Detection for Parkinson’s Disease in 3D 18F-FDG PET } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Dopamine transporter (DAT) imaging, such as 11C-CFT PET, has shown significant superiority in diagnosing Parkinson’s Disease (PD).
However, most hospitals have no access to DAT imaging but instead turn to the commonly used 18F-FDG PET, which may not show major abnormalities of PD at visual analysis and thus hinder the performance of computer-aided diagnosis (CAD).
To tackle this challenge, we propose a Metabolism-aware Anomaly Detection (MetaAD) framework to highlight abnormal metabolism cues of PD in 18F-FDG PET scans.
MetaAD converts the input FDG image into a synthetic CFT image with healthy patterns, and then reconstructs the FDG image by a reversed modality mapping.
The visual differences between the input and reconstructed images serve as indicators of PD metabolic anomalies.
A dual-path training scheme is adopted to prompt the generators to learn an explicit normal data distribution via cyclic modality translation while enhancing their abilities to memorize healthy metabolic characteristics.
The experiments reveal that MetaAD not only achieves superior performance in visual interpretability and anomaly detection for PD diagnosis, but also shows effectiveness in assisting supervised CAD methods. | MetaAD: Metabolism-Aware Anomaly Detection for Parkinson’s Disease in 3D 18F-FDG PET | [
"Huang, Haolin",
"Shen, Zhenrong",
"Wang, Jing",
"Wang, Xinyu",
"Lu, Jiaying",
"Lin, Huamei",
"Ge, Jingjie",
"Zuo, Chuantao",
"Wang, Qian"
] | Conference | [
"https://github.com/MedAIerHHL/MetaAD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 442 |
||
null | https://papers.miccai.org/miccai-2024/paper/3997_paper.pdf | @InProceedings{ Pac_Vertex_MICCAI2024,
author = { Pacheco, Carolina and Yellin, Florence and Vidal, René and Haeffele, Benjamin },
title = { { Vertex Proportion Loss for Multi-Class Cell Detection from Label Proportions } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Learning from label proportions (LLP) is a weakly supervised classification task in which training instances are grouped into bags annotated only with class proportions. While this task emerges naturally in many applications, its performance is often evaluated in bags generated artificially by sampling uniformly from balanced, annotated datasets. In contrast, we study the LLP task in multi-class blood cell detection, where each image can be seen as a “bag” of cells and class proportions can be obtained using a hematocytometer. This application introduces several challenges that are not appropriately captured by the usual LLP evaluation regime, including variable bag size, noisy proportion annotations, and inherent class imbalance. In this paper, we propose the Vertex Proportion loss, a new, principled loss for LLP, which uses optimal transport to infer instance labels from label proportions, and a Deep Sparse Detector that leverages the sparsity of the images to localize and learn a useful representation of the cells in a self-supervised way. We demonstrate the advantages of the proposed method over existing approaches when evaluated in real and synthetic white blood cell datasets. | Vertex Proportion Loss for Multi-Class Cell Detection from Label Proportions | [
"Pacheco, Carolina",
"Yellin, Florence",
"Vidal, René",
"Haeffele, Benjamin"
] | Conference | [
"https://github.com/carolina-pacheco/LLP_multiclass_cell_detection/"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 443 |
||
null | https://papers.miccai.org/miccai-2024/paper/2889_paper.pdf | @InProceedings{ Zho_Enhancing_MICCAI2024,
author = { Zhou, Tianfeng and Zhou, Yukun },
title = { { Enhancing Model Generalisability through Sampling Diverse and Balanced Retinal Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Model generalisability, i.e. performance on multiple unseen datasets, can be improved by training on large volumes of annotated data, from which models can learn diverse representations. However, annotated medical data is limited due to the scarcity of expertise. In this work, we present an efficient data sampling pipeline to select DIVerse and bAlanced images (DataDIVA) from image pools to maximise model generalisability in retinal imaging. Specifically, we first extract image feature embeddings using an off-the-shelf foundation model and generate embedding clusters. We then evenly sample images from those diverse clusters and train a model. We run the trained model on the whole unlabelled image pool and sample the remaining images from those classified as rare categories. This pipeline aims to sample the retinal images with diverse representations and mitigate the unbalanced distribution. We show that DataDIVA consistently improved the model performance in both internal and external evaluation, on six public datasets, with clinically meaningful tasks of referable diabetic retinopathy and glaucoma detection. The code is available at https://doi.org/10.5281/zenodo.12674694. | Enhancing Model Generalisability through Sampling Diverse and Balanced Retinal Images | [
"Zhou, Tianfeng",
"Zhou, Yukun"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 444 |
||
null | https://papers.miccai.org/miccai-2024/paper/0869_paper.pdf | @InProceedings{ Wan_Selfguided_MICCAI2024,
author = { Wang, Zhepeng and Bao, Runxue and Wu, Yawen and Liu, Guodong and Yang, Lei and Zhan, Liang and Zheng, Feng and Jiang, Weiwen and Zhang, Yanfu },
title = { { Self-guided Knowledge-injected Graph Neural Network for Alzheimer’s Diseases } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Graph neural networks (GNNs) are proficient machine learning models in handling irregularly structured data. Nevertheless, their generic formulation falls short when applied to the analysis of brain connectomes in Alzheimer’s Disease (AD), necessitating the incorporation of domain-specific knowledge to achieve optimal model performance. The integration of AD-related expertise into GNNs presents a significant challenge. Current methodologies reliant on manual design often demand substantial expertise from external domain specialists to guide the development of novel models, thereby consuming considerable time and resources. To mitigate the need for manual curation, this paper introduces a novel self-guided knowledge-infused multimodal GNN to autonomously integrate domain knowledge into the model development process. We propose to conceptualize existing domain knowledge as natural language, and devise a specialized multimodal GNN framework tailored to leverage this uncurated knowledge to direct the learning of the GNN submodule, thereby enhancing its efficacy and improving prediction interpretability. To assess the effectiveness of our framework, we compile a comprehensive literature dataset comprising recent peer-reviewed publications on AD. By integrating this literature dataset with several real-world AD datasets, our experimental results illustrate the effectiveness of the proposed method in extracting curated knowledge and offering explanations on graphs for domain-specific applications. Furthermore, our approach successfully utilizes the extracted information to enhance the performance of the GNN. | Self-guided Knowledge-injected Graph Neural Network for Alzheimer’s Diseases | [
"Wang, Zhepeng",
"Bao, Runxue",
"Wu, Yawen",
"Liu, Guodong",
"Yang, Lei",
"Zhan, Liang",
"Zheng, Feng",
"Jiang, Weiwen",
"Zhang, Yanfu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 445 |
||
null | https://papers.miccai.org/miccai-2024/paper/3077_paper.pdf | @InProceedings{ Xie_SurgicalGaussian_MICCAI2024,
author = { Xie, Weixing and Yao, Junfeng and Cao, Xianpeng and Lin, Qiqin and Tang, Zerui and Dong, Xiao and Guo, Xiaohu },
title = { { SurgicalGaussian: Deformable 3D Gaussians for High-Fidelity Surgical Scene Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Dynamic reconstruction of deformable tissues in endoscopic video is a key technology for robot-assisted surgery. Recent reconstruction methods based on neural radiance fields (NeRFs) have achieved remarkable results in the reconstruction of surgical scenes. However, based on implicit representation, NeRFs struggle to capture the intricate details of objects in the scene and cannot achieve real-time rendering. In addition, restricted single-view perception and occluded instruments also pose special challenges in surgical scene reconstruction. To address these issues, we develop SurgicalGaussian, a deformable 3D Gaussian Splatting method to model dynamic surgical scenes. Our approach models the spatio-temporal features of soft tissues at each time stamp via a forward-mapping deformation MLP and regularization to constrain local 3D Gaussians to comply with consistent movement. With the depth initialization strategy and tool mask-guided training, our method can remove surgical instruments and reconstruct high-fidelity surgical scenes. Through experiments on various surgical videos, our network outperforms existing methods in many aspects, including rendering quality, rendering speed and GPU usage. The project page can be found at https://surgicalgaussian.github.io. | SurgicalGaussian: Deformable 3D Gaussians for High-Fidelity Surgical Scene Reconstruction | [
"Xie, Weixing",
"Yao, Junfeng",
"Cao, Xianpeng",
"Lin, Qiqin",
"Tang, Zerui",
"Dong, Xiao",
"Guo, Xiaohu"
] | Conference | 2407.05023 | [
""
] | https://huggingface.co/papers/2407.05023 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | Poster | 446 |
null | https://papers.miccai.org/miccai-2024/paper/2384_paper.pdf | @InProceedings{ Che_WiNet_MICCAI2024,
author = { Cheng, Xinxing and Jia, Xi and Lu, Wenqi and Li, Qiufu and Shen, Linlin and Krull, Alexander and Duan, Jinming },
title = { { WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Deep image registration has demonstrated exceptional accuracy and fast inference. Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner. However, due to the cascaded nature and repeated composition/warping operations on feature maps, these methods undesirably increase memory usage during training and testing. Moreover, such approaches lack explicit constraints on the learning process of small deformations at different scales, thus lacking explainability. In this study, we introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales, utilizing the wavelet coefficients derived from the original input image pair. By exploiting the properties of the wavelet transform, these estimated coefficients facilitate the seamless reconstruction of a full-resolution displacement/velocity field via our devised inverse discrete wavelet transform (IDWT) layer. This approach avoids the complexities of cascading networks or composition operations, making our WiNet an explainable and efficient competitor to other coarse-to-fine methods. Extensive experimental results from two 3D datasets show that our WiNet is accurate and GPU efficient. Code is available at \url{https://github.com/x-xc/WiNet}. | WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration | [
"Cheng, Xinxing",
"Jia, Xi",
"Lu, Wenqi",
"Li, Qiufu",
"Shen, Linlin",
"Krull, Alexander",
"Duan, Jinming"
] | Conference | 2407.13426 | [
"https://github.com/x-xc/WiNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 447 |
|
null | https://papers.miccai.org/miccai-2024/paper/0737_paper.pdf | @InProceedings{ Wu_Cephalometric_MICCAI2024,
author = { Wu, Han and Wang, Chong and Mei, Lanzhuju and Yang, Tong and Zhu, Min and Shen, Dinggang and Cui, Zhiming },
title = { { Cephalometric Landmark Detection across Ages with Prototypical Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Automated cephalometric landmark detection is crucial in real-world orthodontic diagnosis.
Current studies mainly focus on only adult subjects, neglecting the clinically crucial scenario presented by adolescents whose landmarks often exhibit significantly different appearances compared to adults.
Hence, an open question arises about how to develop a unified and effective detection algorithm across various age groups, including adolescents and adults.
In this paper, we propose CeLDA, the first work for \textbf{Ce}phalometric \textbf{L}andmark \textbf{D}etection across \textbf{A}ges.
Our method leverages a prototypical network for landmark detection by comparing image features with landmark prototypes.
To tackle the appearance discrepancy of landmarks between age groups, we design new strategies for CeLDA to improve prototype alignment and obtain a holistic estimation of landmark prototypes from a large set of training images.
Moreover, a novel prototype relation mining paradigm is introduced to exploit the anatomical relations between the landmark prototypes.
Extensive experiments validate the superiority of CeLDA in detecting cephalometric landmarks on both adult and adolescent subjects.
To our knowledge, this is the first effort toward developing a unified solution and dataset for cephalometric landmark detection across age groups.
Our code and dataset will be made public on https://github.com/ShanghaiTech-IMPACT/CeLDA. | Cephalometric Landmark Detection across Ages with Prototypical Network | [
"Wu, Han",
"Wang, Chong",
"Mei, Lanzhuju",
"Yang, Tong",
"Zhu, Min",
"Shen, Dinggang",
"Cui, Zhiming"
] | Conference | 2406.12577 | [
"https://github.com/ShanghaiTech-IMPACT/CeLDA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 448 |
|
null | https://papers.miccai.org/miccai-2024/paper/2617_paper.pdf | @InProceedings{ Fan_Simultaneous_MICCAI2024,
author = { Fan, Wenkang and Jiang, Wenjing and Fang, Hao and Shi, Hong and Chen, Jianhua and Luo, Xiongbiao },
title = { { Simultaneous Monocular Endoscopic Dense Depth and Odometry Estimation Using Local-Global Integration Networks } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Accurate dense depth prediction of monocular endoscopic images is essential in expanding the surgical field and augmenting the perception of depth for surgeons. However, it remains challenging since endoscopic videos generally suffer from limited field of view, illumination variations, and weak texture. This work proposes LGIN, a new architecture with unsupervised learning for accurate dense depth recovery of monocular endoscopic images. Specifically, LGIN creates a hybrid encoder using dense convolution and a pyramid vision transformer to extract local textural features and global spatial-temporal features in parallel, while building a decoder that effectively integrates the local and global features and uses two heads to simultaneously estimate dense depth and odometry, respectively. Additionally, we extract structure-valid regions to assist odometry prediction and unsupervised training to improve the accuracy of depth prediction. We evaluated our model on both clinical and synthetic unannotated colonoscopic video images, with the experimental results demonstrating that our model can achieve more accurate depth distributions and more complete textures. Both the qualitative and quantitative assessment results of our method are better than those of current monocular dense depth estimation models. | Simultaneous Monocular Endoscopic Dense Depth and Odometry Estimation Using Local-Global Integration Networks | [
"Fan, Wenkang",
"Jiang, Wenjing",
"Fang, Hao",
"Shi, Hong",
"Chen, Jianhua",
"Luo, Xiongbiao"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 449 |
||
null | https://papers.miccai.org/miccai-2024/paper/0102_paper.pdf | @InProceedings{ Wu_Few_MICCAI2024,
author = { Wu, Xinyao and Xu, Zhe and Tong, Raymond Kai-yu },
title = { { Few Slices Suffice: Multi-Faceted Consistency Learning with Active Cross-Annotation for Barely-supervised 3D Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Deep learning-based 3D medical image segmentation typically demands extensive densely labeled data. Yet, voxel-wise annotation is laborious and costly to obtain. Cross-annotation, which involves annotating only a few slices from different orientations, has recently become an attractive strategy for labeling 3D images. Compared to previous weak labeling methods like bounding boxes and scribbles, it can efficiently preserve the 3D object’s shape and precise boundaries. However, learning from such sparse supervision signals (aka. barely supervised learning (BSL)) still poses great challenges including less fine-grained object perception, less compact class features and inferior generalizability. To this end, we present a Multi-Faceted ConSistency (MF-ConS) learning framework for the BSL scenario. Our approach starts with an active cross-annotation strategy that requires only three orthogonal labeled slices per scan, optimizing the usage of limited annotation budget through a human-in-the-loop process. Building on the popular teacher-student model, MF-ConS is equipped with three types of consistency regularization to tackle the aforementioned challenges of BSL: (i) neighbor-informed object prediction consistency, which improves fine-grained object perception by encouraging the student model to infer complete segmentation from partial visual cues; (ii) non-parametric prototype-driven consistency for more discriminative and compact intra-class features; (iii) a stability constraint under mild perturbations to enhance model’s robustness. Our method is evaluated on the task of brain tumor segmentation from T2-FLAIR MRI and the promising results show the superiority of our approach over relevant state-of-the-art methods. | Few Slices Suffice: Multi-Faceted Consistency Learning with Active Cross-Annotation for Barely-supervised 3D Medical Image Segmentation | [
"Wu, Xinyao",
"Xu, Zhe",
"Tong, Raymond Kai-yu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 450 |
||
null | https://papers.miccai.org/miccai-2024/paper/3260_paper.pdf | @InProceedings{ Spi_SelfSupervised_MICCAI2024,
author = { Spieker, Veronika and Eichhorn, Hannah and Stelter, Jonathan K. and Huang, Wenqi and Braren, Rickmer F. and Rueckert, Daniel and Sahli Costabal, Francisco and Hammernik, Kerstin and Prieto, Claudia and Karampinos, Dimitrios C. and Schnabel, Julia A. },
title = { { Self-Supervised k-Space Regularization for Motion-Resolved Abdominal MRI Using Neural Implicit k-Space Representations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Neural implicit k-space representations have shown promising results for dynamic MRI at high temporal resolutions. Yet, their exclusive training in k-space limits the application of common image regularization methods to improve the final reconstruction. In this work, we introduce the concept of parallel imaging-inspired self-consistency (PISCO), which we incorporate as novel self-supervised k-space regularization enforcing a consistent neighborhood relationship. At no additional data cost, the proposed regularization significantly improves neural implicit k-space reconstructions on simulated data. Abdominal in-vivo reconstructions using PISCO result in enhanced spatio-temporal image quality compared to state-of-the-art methods. Code available at ***.git. | Self-Supervised k-Space Regularization for Motion-Resolved Abdominal MRI Using Neural Implicit k-Space Representations | [
"Spieker, Veronika",
"Eichhorn, Hannah",
"Stelter, Jonathan K.",
"Huang, Wenqi",
"Braren, Rickmer F.",
"Rueckert, Daniel",
"Sahli Costabal, Francisco",
"Hammernik, Kerstin",
"Prieto, Claudia",
"Karampinos, Dimitrios C.",
"Schnabel, Julia A."
] | Conference | [
"https://github.com/compai-lab/2024-miccai-spieker"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 451 |
||
null | https://papers.miccai.org/miccai-2024/paper/1626_paper.pdf | @InProceedings{ Zho_MedMLP_MICCAI2024,
author = { Zhou, Menghan and Xu, Yanyu and Soh, Zhi Da and Fu, Huazhu and Goh, Rick Siow Mong and Cheng, Ching-Yu and Liu, Yong and Zhen, Liangli },
title = { { MedMLP: An Efficient MLP-like Network for Zero-shot Retinal Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Deep neural networks (DNNs) have demonstrated superior performance compared to humans across various tasks. However, DNNs often face the challenge of domain shift, where their performance notably deteriorates when applied to medical images with distributions differing from those seen during training. To address this issue and achieve high performance in new target domains under zero-shot settings, we leverage the ability of self-attention mechanisms to capture global dependencies. We introduce a novel MLP-like model designed for superior efficiency and zero-shot robustness. Specifically, we propose an adaptive fully-connected (AdaFC) layer to overcome the fundamental limitation of traditional fully-connected layers in adapting to inputs of various sizes while maintaining GPU efficiency. Building upon AdaFC, we present a new MLP-based network architecture named MedMLP. Through our proposed training pipeline, we achieve a significant 20.1% increase in model testing accuracy on an out-of-distribution dataset, surpassing the widely used ResNet-50 model. | MedMLP: An Efficient MLP-like Network for Zero-shot Retinal Image Classification | [
"Zhou, Menghan",
"Xu, Yanyu",
"Soh, Zhi Da",
"Fu, Huazhu",
"Goh, Rick Siow Mong",
"Cheng, Ching-Yu",
"Liu, Yong",
"Zhen, Liangli"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 452 |
||
null | https://papers.miccai.org/miccai-2024/paper/2442_paper.pdf | @InProceedings{ Li_3DPX_MICCAI2024,
author = { Li, Xiaoshuang and Meng, Mingyuan and Huang, Zimo and Bi, Lei and Delamare, Eduardo and Feng, Dagan and Sheng, Bin and Kim, Jinman },
title = { { 3DPX: Progressive 2D-to-3D Oral Image Reconstruction with Hybrid MLP-CNN Networks } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Panoramic X-ray (PX) is a prevalent modality in dental practice for its wide availability and low cost. However, as a 2D projection image, PX does not contain 3D anatomical information, and therefore has limited use in dental applications that can benefit from 3D information, e.g., tooth angular misalignment detection and classification. Reconstructing 3D structures directly from 2D PX has recently been explored to address limitations with existing methods primarily reliant on Convolutional Neural Networks (CNNs) for direct 2D-to-3D mapping. These methods, however, are unable to correctly infer depth-axis spatial information. In addition, they are limited by the intrinsic locality of convolution operations, as the convolution kernels only capture the information of immediate neighborhood pixels. In this study, we propose a progressive hybrid Multilayer Perceptron (MLP)-CNN pyramid network (3DPX) for 2D-to-3D oral PX reconstruction. We introduce a progressive reconstruction strategy, where 3D images are progressively reconstructed in the 3DPX with guidance imposed on the intermediate reconstruction result at each pyramid level. Further, motivated by the recent advancement of MLPs that show promise in capturing fine-grained long-range dependency, our 3DPX integrates MLPs and CNNs to improve the semantic understanding during reconstruction. Extensive experiments on two large datasets involving 464 studies demonstrate that our 3DPX outperforms state-of-the-art 2D-to-3D oral reconstruction methods, including standalone MLP and transformers, in reconstruction quality, and also improves the performance of downstream angular misalignment classification tasks. | 3DPX: Progressive 2D-to-3D Oral Image Reconstruction with Hybrid MLP-CNN Networks | [
"Li, Xiaoshuang",
"Meng, Mingyuan",
"Huang, Zimo",
"Bi, Lei",
"Delamare, Eduardo",
"Feng, Dagan",
"Sheng, Bin",
"Kim, Jinman"
] | Conference | 2408.01292 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 453 |
|
null | https://papers.miccai.org/miccai-2024/paper/1747_paper.pdf | @InProceedings{ Ju_AWeaklysupervised_MICCAI2024,
author = { Ju, Jianguo and Ren, Shumin and Qiu, Dandan and Tu, Huijuan and Yin, Juanjuan and Xu, Pengfei and Guan, Ziyu },
title = { { A Weakly-supervised Multi-lesion Segmentation Framework Based on Target-level Incomplete Annotations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Effectively segmenting Crohn’s disease (CD) from computed tomography is crucial for clinical use. Given the difficulty of obtaining manual annotations, more and more researchers have begun to pay attention to weakly supervised methods. However, due to the challenges of designing weakly supervised frameworks with limited and complex medical data, most existing frameworks tend to study single-lesion diseases ignoring multi-lesion scenarios. In this paper, we propose a new local-to-global weakly supervised neural framework for effective CD segmentation. Specifically, we develop a novel weak annotation strategy called Target-level Incomplete Annotation (TIA). This strategy only annotates one region on each slice as a labeled sample, which significantly relieves the burden of annotation. We observe that the classification networks can discover target regions with more details when replacing the input images with their local views. Taking this into account, we first design a TIA-based affinity cropping network to crop multiple local views with global anatomical information from the global view. Then, we leverage a local classification branch to extract more detailed features from multiple local views. Our framework utilizes a local views-based class distance loss and cross-entropy loss to optimize local and global classification branches to generate high-quality pseudo-labels that can be directly used as supervisory information for the semantic segmentation network. Experimental results show that our framework achieves an average DSC score of 47.8% on the CD71 dataset. Our code is available at https://github.com/HeyJGJu/CD_TIA. | A Weakly-supervised Multi-lesion Segmentation Framework Based on Target-level Incomplete Annotations | [
"Ju, Jianguo",
"Ren, Shumin",
"Qiu, Dandan",
"Tu, Huijuan",
"Yin, Juanjuan",
"Xu, Pengfei",
"Guan, Ziyu"
] | Conference | [
"https://github.com/HeyJGJu/CD_TIA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 454 |
||
null | https://papers.miccai.org/miccai-2024/paper/0862_paper.pdf | @InProceedings{ Ren_Selfsupervised_MICCAI2024,
author = { Ren, Jiaxiang and Li, Zhenghong and Cheng, Wensheng and Zou, Zhilin and Park, Kicheon and Pan, Yingtian and Ling, Haibin },
title = { { Self-supervised 3D Skeleton Completion for Vascular Structures } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | The 3D skeleton is critical for analyzing vascular structures in many applications; however, it is often limited by broken skeletons due to image degradation. Existing methods usually correct such skeleton breaks via handcrafted connecting rules or rely on nontrivial manual annotation, which is susceptible to outliers or costly, especially for 3D data. In this paper, we propose a self-supervised approach for vasculature reconnection. Specifically, we generate synthetic breaks from confident skeletons and use them to guide the learning of a 3D UNet-like skeleton completion network. To address the serious imbalance among different types of skeleton breaks, we introduce three skeleton transformations that largely alleviate such imbalance in synthesized break samples. This allows our model to effectively handle challenging breaks such as bifurcations and tiny fragments. Additionally, to encourage connectivity in the outcomes, we design a novel differentiable connectivity loss for further improvement. Experiments on a public medical segmentation benchmark and a 3D optical coherence Doppler tomography (ODT) dataset show the effectiveness of our method. | Self-supervised 3D Skeleton Completion for Vascular Structures | [
"Ren, Jiaxiang",
"Li, Zhenghong",
"Cheng, Wensheng",
"Zou, Zhilin",
"Park, Kicheon",
"Pan, Yingtian",
"Ling, Haibin"
] | Conference | [
"https://github.com/reckdk/SkelCompletion-3D"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 455 |
||
null | https://papers.miccai.org/miccai-2024/paper/0813_paper.pdf | @InProceedings{ Xu_LGRNet_MICCAI2024,
author = { Xu, Huihui and Yang, Yijun and Aviles-Rivero, Angelica I and Yang, Guang and Qin, Jing and Zhu, Lei },
title = { { LGRNet: Local-Global Reciprocal Network for Uterine Fibroid Segmentation in Ultrasound Videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Regular screening and early discovery of uterine fibroid are crucial for preventing potential malignant transformations and ensuring timely, life-saving interventions. To this end, we collect and annotate the first ultrasound video dataset with 100 videos for uterine fibroid segmentation (UFUV).
We also present Local-Global Reciprocal Network (LGRNet) to efficiently and effectively propagate the long-term temporal context which is crucial to help distinguish between uninformative noisy surrounding tissues and target lesion regions.
Specifically, the Cyclic Neighborhood Propagation (CNP) is introduced to propagate the inter-frame local temporal context in a cyclic manner.
Moreover, to aggregate global temporal context, we first condense each frame into a set of frame bottleneck queries and devise Hilbert Selective Scan (HilbertSS) to both efficiently path connect each frame and preserve the locality bias. A distribute layer is then utilized to disseminate back the global context for reciprocal refinement.
Extensive experiments on UFUV and three public Video Polyp Segmentation (VPS) datasets demonstrate consistent improvements compared to state-of-the-art segmentation methods, indicating the effectiveness and versatility of LGRNet.
Code, checkpoints, and dataset are available at https://github.com/bio-mlhui/LGRNet. | LGRNet: Local-Global Reciprocal Network for Uterine Fibroid Segmentation in Ultrasound Videos | [
"Xu, Huihui",
"Yang, Yijun",
"Aviles-Rivero, Angelica I",
"Yang, Guang",
"Qin, Jing",
"Zhu, Lei"
] | Conference | 2407.05703 | [
"https://github.com/bio-mlhui/LGRNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 456 |
|
null | https://papers.miccai.org/miccai-2024/paper/0708_paper.pdf | @InProceedings{ Oh_Uncertaintyaware_MICCAI2024,
author = { Oh, Seok-Hwan and Jung, Guil and Kim, Sang-Yun and Kim, Myeong-Gee and Kim, Young-Min and Lee, Hyeon-Jik and Kwon, Hyuk-Sool and Bae, Hyeon-Min },
title = { { Uncertainty-aware meta-weighted optimization framework for domain-generalized medical image segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Accurate segmentation of echocardiograph images is essential for the diagnosis of cardiovascular diseases. Recent advances in deep learning have opened a possibility for automated cardiac image segmentation. However, the data-driven echocardiography segmentation schemes suffer from domain shift problems, since the ultrasonic image characteristics are largely affected by measurement conditions determined by device and probe specification. In order to overcome this problem, we propose a domain generalization method, utilizing a generative model for data augmentation. An acoustic content and style-aware diffusion probabilistic model is proposed to synthesize echocardiography images of diverse cardiac anatomy and measurement conditions. In addition, a meta-learning-based spatial weighting scheme is introduced to prevent the network from training unreliable pixels of synthetic images, thereby achieving precise image segmentation. The proposed framework is thoroughly evaluated using both in-distribution and out-of-distribution echocardiography datasets and demonstrates outstanding performance compared to state-of-the-art methods. | Uncertainty-aware meta-weighted optimization framework for domain-generalized medical image segmentation | [
"Oh, Seok-Hwan",
"Jung, Guil",
"Kim, Sang-Yun",
"Kim, Myeong-Gee",
"Kim, Young-Min",
"Lee, Hyeon-Jik",
"Kwon, Hyuk-Sool",
"Bae, Hyeon-Min"
] | Conference | [
"https://github.com/Seokhwan-Oh/MLSW"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 457 |
||
null | https://papers.miccai.org/miccai-2024/paper/4215_paper.pdf | @InProceedings{ Kha_DomainAdapt_MICCAI2024,
author = { Khan, Misaal and Singh, Richa and Vatsa, Mayank and Singh, Kuldeep },
title = { { DomainAdapt: Leveraging Multitask Learning and Domain Insights for Children’s Nutritional Status Assessment } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | This study presents a novel approach for automating nutritional status assessments in children, designed to assist health workers in public health contexts. We introduce DomainAdapt a novel dynamic task-weighing method within a multitask learning framework, which leverages domain knowledge and Mutual Information to balance task-specific losses, enhancing the learning efficiency for nutritional status screening. We have also assembled an unprecedented dataset comprising 16,938 multipose images and anthropometric data from 2,141 children across various settings, marking a significant first in this domain. Through rigorous testing, this method demonstrates superior performance in identifying malnutrition in children and predicting their anthropometric measures compared to existing multitask learning approaches. Dataset is available at : iab-rubric.org/resources/healthcare-datasets/anthrovision-dataset | DomainAdapt: Leveraging Multitask Learning and Domain Insights for Children’s Nutritional Status Assessment | [
"Khan, Misaal",
"Singh, Richa",
"Vatsa, Mayank",
"Singh, Kuldeep"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 458 |
||
null | https://papers.miccai.org/miccai-2024/paper/2860_paper.pdf | @InProceedings{ Cen_ORCGT_MICCAI2024,
author = { Cen, Min and Wang, Zheng and Zhuang, Zhenfeng and Zhang, Hong and Su, Dan and Bao, Zhen and Wei, Weiwei and Magnier, Baptiste and Yu, Lequan and Wang, Liansheng },
title = { { ORCGT: Ollivier-Ricci Curvature-based Graph Model for Lung STAS Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Tumor Spread Through Air Spaces (STAS), identified as a mechanism of invasion, has been substantiated by multiple studies to be associated with lower survival rates, underscoring its significant prognostic implications. In clinical practice, pathological diagnosis is regarded as the gold standard for STAS examination. Nonetheless, manual STAS diagnosis is characterized by labor-intensive and time-consuming processes, which are susceptible to misdiagnosis. In this paper, we attempt for the first time to identify the underlying features from histopathological images for the automatic prediction of STAS. Existing deep learning-based methods usually produce undesirable predictive performance with poor interpretability for this task, as they fail to identify small tumor cells spread around the main tumor and their complex correlations. To address these issues, we propose a novel Ollivier-Ricci Curvature-based Graph model for STAS prediction (ORCGT), which utilizes the information from the major tumor margin to improve both the accuracy and interpretability. The model first extracts the major tumor margin by a tumor density map with minimal and coarse annotations, which enhances the visibility of small tumor regions to the model. Then, we develop a Pool-Refined Ollivier-Ricci Curvature-based module to enable complex interactions between patches regardless of long distances and reduce the negative impact of the over-squashing phenomenon among patches linked by negative curvature edges. Extensive experiments conducted on our collected dataset demonstrate the effectiveness and interpretability of the proposed approach for predicting lung STAS. | ORCGT: Ollivier-Ricci Curvature-based Graph Model for Lung STAS Prediction | [
"Cen, Min",
"Wang, Zheng",
"Zhuang, Zhenfeng",
"Zhang, Hong",
"Su, Dan",
"Bao, Zhen",
"Wei, Weiwei",
"Magnier, Baptiste",
"Yu, Lequan",
"Wang, Liansheng"
] | Conference | [
"https://github.com/zhengwang9/ORCGT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 459 |
||
null | https://papers.miccai.org/miccai-2024/paper/1279_paper.pdf | @InProceedings{ Hua_AReferandGround_MICCAI2024,
author = { Huang, Xiaoshuang and Huang, Haifeng and Shen, Lingdong and Yang, Yehui and Shang, Fangxin and Liu, Junwei and Liu, Jia },
title = { { A Refer-and-Ground Multimodal Large Language Model for Biomedicine } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | With the rapid development of multimodal large language models (MLLMs), especially their capabilities in visual chat through refer and ground functionalities, their significance is increasingly recognized. However, the biomedical field currently exhibits a substantial gap in this area, primarily due to the absence of a dedicated refer and ground dataset for biomedical images. To address this challenge, we devised the Med-GRIT-270k dataset. It comprises 270k question-and-answer pairs and spans eight distinct medical imaging modalities. Most importantly, it is the first dedicated to the biomedical domain and integrating refer and ground conversations. The key idea is to sample large-scale biomedical image-mask pairs from medical segmentation datasets and generate instruction datasets from text using chatGPT. Additionally, we introduce a Refer-and-GrounD Multimodal Large Language Model for Biomedicine (BiRD) by using this dataset and multi-task instruction learning. Extensive experiments have corroborated the efficacy of the Med-GRIT-270k dataset and the multi-modal, fine-grained interactive capabilities of the BiRD model. This holds significant reference value for the exploration and development of intelligent biomedical assistants. The repository is at https://github.com/ShawnHuang497/BiRD | A Refer-and-Ground Multimodal Large Language Model for Biomedicine | [
"Huang, Xiaoshuang",
"Huang, Haifeng",
"Shen, Lingdong",
"Yang, Yehui",
"Shang, Fangxin",
"Liu, Junwei",
"Liu, Jia"
] | Conference | 2406.18146 | [
"https://github.com/ShawnHuang497/BiRD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 460 |
|
null | https://papers.miccai.org/miccai-2024/paper/2429_paper.pdf | @InProceedings{ Du_DistributionallyAdaptive_MICCAI2024,
author = { Du, Jing and Dong, Guangwei and Ma, Congbo and Xue, Shan and Wu, Jia and Yang, Jian and Beheshti, Amin and Sheng, Quan Z. and Giral, Alexis },
title = { { Distributionally-Adaptive Variational Meta Learning for Brain Graph Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Recent developments in Graph Neural Networks~(GNNs) have shed light on understanding brain networks through innovative approaches. Despite these innovations, the significant costs associated with data collection and the challenges posed by data drift in real-world scenarios present substantial hurdles for models dependent on large datasets to capture brain activity features.
To address these issues, we introduce the Distributionally-Adaptive Variational Meta Learning (DAML) framework,
designed to equip the model with rapid adaptability to varying distributions by meta-learning-driven minimization of discrepancies between subject sets. Initially, we employ a graph encoder with the message-passing strategy to generate precise brain graph representations. Subsequently, we implement a distributionally-adaptive variational meta learning approach to functionally simulate data drift across subject sets, utilizing variational layers for parameterization and adaptive alignment methods to reduce discrepancies. Through comprehensive experiments on three real-world datasets with both few-shot and standard settings against various baselines, our DAML model demonstrates the state-of-the-art performance across all metrics, underscoring its efficiency and potential within limited data. | Distributionally-Adaptive Variational Meta Learning for Brain Graph Classification | [
"Du, Jing",
"Dong, Guangwei",
"Ma, Congbo",
"Xue, Shan",
"Wu, Jia",
"Yang, Jian",
"Beheshti, Amin",
"Sheng, Quan Z.",
"Giral, Alexis"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 461 |
||
null | https://papers.miccai.org/miccai-2024/paper/2266_paper.pdf | @InProceedings{ Alh_FedMedICL_MICCAI2024,
author = { Alhamoud, Kumail and Ghunaim, Yasir and Alfarra, Motasem and Hartvigsen, Thomas and Torr, Philip and Ghanem, Bernard and Bibi, Adel and Ghassemi, Marzyeh },
title = { { FedMedICL: Towards Holistic Evaluation of Distribution Shifts in Federated Medical Imaging } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | For medical imaging AI models to be clinically impactful, they must generalize. However, this goal is hindered by \emph{(i)} diverse types of distribution shifts, such as temporal, demographic, and label shifts, and \emph{(ii)} limited diversity in datasets that are siloed within single medical institutions. While these limitations have spurred interest in federated learning, current evaluation benchmarks fail to evaluate different shifts simultaneously. However, in real healthcare settings, multiple types of shifts co-exist, yet their impact on medical imaging performance remains unstudied. In response, we introduce FedMedICL, a unified framework and benchmark to holistically evaluate federated medical imaging challenges, simultaneously capturing label, demographic, and temporal distribution shifts. We comprehensively evaluate several popular methods on six diverse medical imaging datasets (totaling 550 GPU hours). Furthermore, we use FedMedICL to simulate COVID-19 propagation across hospitals and evaluate whether methods can adapt to pandemic changes in disease prevalence. We find that a simple batch balancing technique surpasses advanced methods in average performance across FedMedICL experiments. This finding questions the applicability of results from previous, narrow benchmarks in real-world medical settings. Code is available at: \url{https://github.com/m1k2zoo/FedMedICL}. | FedMedICL: Towards Holistic Evaluation of Distribution Shifts in Federated Medical Imaging | [
"Alhamoud, Kumail",
"Ghunaim, Yasir",
"Alfarra, Motasem",
"Hartvigsen, Thomas",
"Torr, Philip",
"Ghanem, Bernard",
"Bibi, Adel",
"Ghassemi, Marzyeh"
] | Conference | 2407.08822 | [
"https://github.com/m1k2zoo/FedMedICL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 462 |
|
null | https://papers.miccai.org/miccai-2024/paper/3736_paper.pdf | @InProceedings{ Li_Development_MICCAI2024,
author = { Li, Guoshi and Thung, Kim-Han and Taylor, Hoyt and Wu, Zhengwang and Li, Gang and Wang, Li and Lin, Weili and Ahmad, Sahar and Yap, Pew-Thian },
title = { { Development of Effective Connectome from Infancy to Adolescence } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Delineating the normative developmental profile of functional connectome is important for both standardized assessment of individual growth and early detection of diseases. However, functional connectome has been mostly studied using functional connectivity (FC), where undirected connectivity strengths are estimated from statistical correlation of resting-state functional MRI (rs-fMRI) signals. To address this limitation, we applied regression dynamic causal modeling (rDCM) to delineate the developmental trajectories of effective connectivity (EC), the directed causal influence among neuronal populations, in whole-brain networks from infancy to adolescence (0-21 years old) based on high-quality rs-fMRI data from Baby Connectome Project (BCP) and Human Connectome Project Development (HCPD). Analysis with linear mixed model demonstrates significant age effect on the mean nodal EC which is best fit by a “U” shaped quadratic curve with minimal EC at around 2 years old. Further analysis indicates that five brain regions including the left and right cuneus, left precuneus, left supramarginal gyrus and right inferior temporal gyrus have the most significant age effect on nodal EC (p < 0.05, FDR corrected). Moreover, the frontoparietal control (FPC) network shows the fastest increase from early childhood (1-2 years) to adolescence (6-21 years) followed by the visual and salience networks. Our findings suggest complex nonlinear developmental profile of effective connectivity from infancy to adolescence, which may reflect dynamic structural and functional maturation during this critical growth period. | Development of Effective Connectome from Infancy to Adolescence | [
"Li, Guoshi",
"Thung, Kim-Han",
"Taylor, Hoyt",
"Wu, Zhengwang",
"Li, Gang",
"Wang, Li",
"Lin, Weili",
"Ahmad, Sahar",
"Yap, Pew-Thian"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 463 |
||
null | https://papers.miccai.org/miccai-2024/paper/2720_paper.pdf | @InProceedings{ Li_MultiFrequency_MICCAI2024,
author = { Li, Hao and Zhai, Xiangyu and Xue, Jie and Gu, Changming and Tian, Baolong and Hong, Tingxuan and Jin, Bin and Li, Dengwang and Huang, Pu },
title = { { Multi-Frequency and Smoke Attention-aware Learning based Diffusion Model for Removing Surgical Smoke } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Surgical smoke in laparoscopic surgery can deteriorate visibility and pose hazards to surgeons; although medical devices for mechanical smoke evacuation work well, they prolong operative duration and thus restrict efficiency. This work aims to simultaneously remove the surgical smoke and restore true-to-life image colors with a deep learning strategy to improve surgical efficiency and safety. However, deep network-based smoke removal remains challenging because: 1) higher-frequency modes are hindered from being learned due to spectral bias, and 2) the distribution of surgical smoke is non-homogeneous. We propose a multi-frequency and smoke attention-aware learning-based diffusion model for removing surgical smoke. In this work, the frequency compensation strategy combines multi-level frequency learning and contrast enhancement to integrate comprehensive features for learning mid-to-high frequency details that the smoke has obscured. The smoke attention learning employs a pixel-wise measurement and provides the diffusion model with complementary features about where smoke is present, which helps restore the smokeless regions during the inverse diffusion process. The multi-task learning strategy incorporates an L1 loss, a smoke perception loss, a dark channel prior loss, and a contrast enhancement loss to aid model optimization. Additionally, a paired smokeless/smoky dataset is simulated by a 3D smoke rendering engine. The experimental results show that the proposed method outperforms other state-of-the-art methods on both synthetic and real laparoscopic surgical images, with the potential to be embedded in laparoscopic devices for smoke removal. | Multi-Frequency and Smoke Attention-aware Learning based Diffusion Model for Removing Surgical Smoke | [
"Li, Hao",
"Zhai, Xiangyu",
"Xue, Jie",
"Gu, Changming",
"Tian, Baolong",
"Hong, Tingxuan",
"Jin, Bin",
"Li, Dengwang",
"Huang, Pu"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 464 |
||
null | https://papers.miccai.org/miccai-2024/paper/3328_paper.pdf | @InProceedings{ Pol_Letting_MICCAI2024,
author = { Poles, Isabella and Santambrogio, Marco D. and D’Arnese, Eleonora },
title = { { Letting Osteocytes Teach SR-microCT Bone Lacunae Segmentation: A Feature Variation Distillation Method via Diffusion Denoising } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Synchrotron Radiation micro-Computed Tomography (SR-microCT) is a promising imaging technique for osteocyte-lacunar bone pathophysiology study. However, acquiring them costs more than histopathology, thus requiring multi-modal approaches to enrich limited/costly data with complementary information. Nevertheless, paired modalities are rarely available in clinical settings. To overcome these problems, we present a novel histopathology-enhanced disease-aware distillation model for bone microstructure segmentation from SR-microCTs. Our method uses unpaired histopathology images to emphasize lacunae morphology during SR-microCT image training while avoiding the need for histopathologies during testing. Specifically, we leverage denoising diffusion to eliminate the noisy information within the student and distill valuable information effectively. On top of this, a feature variation distillation method pushes the student to learn intra-class semantic variations similar to the teacher, improving label co-occurrence information learning. Experimental results on clinical and public microscopy datasets demonstrate superior performance over single-, multi-modal and state-of-the-art distillation methods for image segmentation. | Letting Osteocytes Teach SR-microCT Bone Lacunae Segmentation: A Feature Variation Distillation Method via Diffusion Denoising | [
"Poles, Isabella",
"Santambrogio, Marco D.",
"D’Arnese, Eleonora"
] | Conference | [
"https://github.com/isabellapoles/LOTUS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 465 |
||
null | https://papers.miccai.org/miccai-2024/paper/2322_paper.pdf | @InProceedings{ Din_AWasserstein_MICCAI2024,
author = { Ding, Jiaqi and Dan, Tingting and Wei, Ziquan and Laurienti, Paul and Wu, Guorong },
title = { { A Wasserstein Recipe for Replicable Machine Learning on Functional Neuroimages } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Advances in neuroimaging have dramatically expanded our ability to probe the neurobiological bases of behavior in-vivo. Leveraging a growing repository of publicly available neuroimaging data, there is a surging interest for utilizing machine learning approaches to explore new questions in neuroscience. Despite the impressive achievements of current deep learning models, there remains an under-acknowledged risk: the variability in cognitive states may undermine the experimental replicability of the ML models, leading to potentially misleading findings in the realm of neuroscience. To address this challenge, we first dissect the critical (but often missed) challenge of ensuring the replicability of predictions despite task-irrelevant functional fluctuations. We then formulate the solution as a domain adaptation, where we design a dual-branch Transformer with minimizing Wasserstein distance. We evaluate the cognitive task recognition accuracy and consistency of test and retest functional neuroimages (serial imaging measures of the same cognitive task over a short period of time) of the Human Connectome Project. Our model demonstrates significant improvements in both replicability and accuracy of task recognition, showing the great potential of reliable deep models for solving real-world neuroscience problems. | A Wasserstein Recipe for Replicable Machine Learning on Functional Neuroimages | [
"Ding, Jiaqi",
"Dan, Tingting",
"Wei, Ziquan",
"Laurienti, Paul",
"Wu, Guorong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 466 |
||
null | https://papers.miccai.org/miccai-2024/paper/2142_paper.pdf | @InProceedings{ Gao_Evidential_MICCAI2024,
author = { Gao, Yibo and Gao, Zheyao and Gao, Xin and Liu, Yuanye and Wang, Bomin and Zhuang, Xiahai },
title = { { Evidential Concept Embedding Models: Towards Reliable Concept Explanations for Skin Disease Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Due to the high stakes in medical decision-making, there is a compelling demand for interpretable deep learning methods in medical image analysis. Concept Bottleneck Models (CBM) have emerged as an active interpretable framework incorporating human-interpretable concepts into decision-making. However, their concept predictions may lack reliability when applied to clinical diagnosis, impeding concept explanations’ quality. To address this, we propose an evidential concept embedding model (evi-CEM), which employs evidential learning to model the concept uncertainty. Additionally, we offer to leverage the concept uncertainty to rectify concept misalignments that arise when training CBMs using vision-language models without complete concept supervision. With the proposed methods, we can enhance concept explanations’ reliability for both supervised and label-efficient settings. Furthermore, we introduce concept uncertainty for effective test-time intervention. Our evaluation demonstrates that evi-CEM achieves superior performance in terms of concept prediction, and the proposed concept rectification effectively mitigates concept misalignments for label-efficient training. Our code is available at https://github.com/obiyoag/evi-CEM. | Evidential Concept Embedding Models: Towards Reliable Concept Explanations for Skin Disease Diagnosis | [
"Gao, Yibo",
"Gao, Zheyao",
"Gao, Xin",
"Liu, Yuanye",
"Wang, Bomin",
"Zhuang, Xiahai"
] | Conference | 2406.19130 | [
"https://github.com/obiyoag/evi-CEM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 467 |
|
null | https://papers.miccai.org/miccai-2024/paper/3895_paper.pdf | @InProceedings{ Cho_AdaCBM_MICCAI2024,
author = { Chowdhury, Townim F. and Phan, Vu Minh Hieu and Liao, Kewen and To, Minh-Son and Xie, Yutong and van den Hengel, Anton and Verjans, Johan W. and Liao, Zhibin },
title = { { AdaCBM: An Adaptive Concept Bottleneck Model for Explainable and Accurate Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | The integration of vision-language models such as CLIP and Concept Bottleneck Models (CBMs) offers a promising approach to explaining deep neural network (DNN) decisions using concepts understandable by humans, addressing the black-box concern of DNNs. While CLIP provides both explainability and zero-shot classification capability, its pre-training on generic image and text data may limit its classification accuracy and applicability to medical image diagnostic tasks, creating a transfer learning problem. To maintain explainability and address transfer learning needs, CBM methods commonly design post-processing modules after the bottleneck module. However, this way has been ineffective. This paper takes an unconventional approach by re-examining the CBM framework through the lens of its geometrical representation as a simple linear classification system. The analysis uncovers that post-CBM fine-tuning modules merely rescale and shift the classification outcome of the system, failing to fully leverage the system’s learning potential. We introduce an adaptive module strategically positioned between CLIP and CBM to bridge the gap between source and downstream domains. This simple yet effective approach enhances classification performance while preserving the explainability afforded by the framework. Our work offers a comprehensive solution that encompasses the entire process, from concept discovery to model training, providing a holistic recipe for leveraging the strengths of GPT, CLIP, and CBM. | AdaCBM: An Adaptive Concept Bottleneck Model for Explainable and Accurate Diagnosis | [
"Chowdhury, Townim F.",
"Phan, Vu Minh Hieu",
"Liao, Kewen",
"To, Minh-Son",
"Xie, Yutong",
"van den Hengel, Anton",
"Verjans, Johan W.",
"Liao, Zhibin"
] | Conference | 2408.02001 | [
"https://github.com/AIML-MED/AdaCBM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 468 |
|
null | https://papers.miccai.org/miccai-2024/paper/0719_paper.pdf | @InProceedings{ Che_Selfsupervised_MICCAI2024,
author = { Chen, Dongdong and Yao, Linlin and Liu, Mengjun and Shen, Zhenrong and Hu, Yuqi and Song, Zhiyun and Wang, Qian and Zhang, Lichi },
title = { { Self-supervised Learning with Adaptive Graph Structure and Function Representation For Cross-Dataset Brain Disorder Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Resting-state functional magnetic resonance imaging (rs-fMRI) helps characterize the regional neural activity of the human brain. Currently, supervised deep learning methods that rely on a large amount of fMRI data have shown good performance in diagnosing specific brain diseases. However, there are significant differences in the structure and function of brain connectivity networks among patients with different brain diseases. This makes it difficult for the model to achieve satisfactory diagnostic performance when facing new diseases with limited data, thus severely hindering their application in clinical practice. In this work, we propose a self-supervised learning framework based on graph contrastive learning for cross-dataset brain disorder diagnosis. Specifically, we develop a graph structure learner that adaptively characterizes general brain connectivity networks for various brain disorders. We further develop a multi-state brain network encoder that can effectively enhance the representation of brain networks with functional information related to different brain diseases. We finally evaluate our model on different brain disorders and demonstrate advantages compared to other state-of-the-art methods. | Self-supervised Learning with Adaptive Graph Structure and Function Representation For Cross-Dataset Brain Disorder Diagnosis | [
"Chen, Dongdong",
"Yao, Linlin",
"Liu, Mengjun",
"Shen, Zhenrong",
"Hu, Yuqi",
"Song, Zhiyun",
"Wang, Qian",
"Zhang, Lichi"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 469 |
||
null | https://papers.miccai.org/miccai-2024/paper/0870_paper.pdf | @InProceedings{ Mis_STANLOC_MICCAI2024,
author = { Mishra, Divyanshu and Saha, Pramit and Zhao, He and Patey, Olga and Papageorghiou, Aris T. and Noble, J. Alison },
title = { { STAN-LOC: Visual Query-based Video Clip Localization for Fetal Ultrasound Sweep Videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Detecting standard frame clips in fetal ultrasound videos is crucial for accurate clinical assessment and diagnosis. It enables healthcare professionals to evaluate fetal development, identify abnormalities, and monitor overall health with clarity and standardization. To augment sonographer workflow and to detect standard frame clips, we introduce the task of Visual Query-based Video Clip Localization in medical video understanding. It aims to retrieve a video clip from a given ultrasound sweep that contains frames similar to a given exemplar frame of the required standard anatomical view. To solve the task, we propose STAN-LOC, which consists of three main components: (a) a Query-Aware Spatio-Temporal Fusion Transformer that fuses information available in the visual query with the input video. This results in visual query-aware video features, which we model temporally to understand the spatio-temporal relationship between them. (b) a Multi-Anchor, View-Aware Contrastive loss to reduce the influence of inherent noise in manual annotations, especially at event boundaries and in videos featuring highly similar objects. (c) a query selection algorithm during inference that selects the best visual query for a given video to reduce the model’s sensitivity to the quality of visual queries. We apply STAN-LOC to the task of detecting standard-frame clips in fetal ultrasound heart sweeps given four-chamber view queries. Additionally, we assess the performance of our best model on PULSE [2] data for retrieving the standard transventricular plane (TVP) in fetal head videos. STAN-LOC surpasses the state-of-the-art method by 22% in mtIoU. The code will be available upon acceptance at xxx.github.com. | STAN-LOC: Visual Query-based Video Clip Localization for Fetal Ultrasound Sweep Videos | [
"Mishra, Divyanshu",
"Saha, Pramit",
"Zhao, He",
"Patey, Olga",
"Papageorghiou, Aris T.",
"Noble, J. Alison"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 470 |
||
null | https://papers.miccai.org/miccai-2024/paper/4090_paper.pdf | @InProceedings{ Ren_SkinCON_MICCAI2024,
author = { Ren, Zhihang and Li, Yunqi and Li, Xinyu and Xie, Xinrong and Duhaime, Erik P. and Fang, Kathy and Chakraborty, Tapabrata and Guo, Yunhui and Yu, Stella X. and Whitney, David },
title = { { SkinCON: Towards consensus for the uncertainty of skin cancer sub-typing through distribution regularized adaptive predictive sets (DRAPS) } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Deep learning has been widely utilized in medical diagnosis. Convolutional neural networks and transformers can achieve high predictive accuracy, which can be on par with or even exceed human performance. However, uncertainty quantification remains an unresolved issue, impeding the deployment of deep learning models in practical settings. Conformal analysis can, in principle, estimate the uncertainty of each diagnostic prediction, but doing so effectively requires extensive human annotations to characterize the underlying empirical distributions. This has been challenging in the past because instance-level class distribution data has been unavailable: Collecting massive ground truth labels is already challenging, and obtaining the class distribution of each instance is even more difficult. Here, we provide a large skin cancer instance-level class distribution dataset, SkinCON, that contains 25,331 skin cancer images from the ISIC 2019 challenge dataset. SkinCON is built upon over 937,167 diagnostic judgments from 10,509 participants. Using SkinCON, we propose the distribution regularized adaptive predictive sets (DRAPS) method for skin cancer diagnosis. We also provide a new evaluation metric based on SkinCON. Experiment results show the quality of our proposed DRAPS method and the uncertainty variation with respect to patient age and sex from health equity and fairness perspective. The dataset and code are available at https://skincon.github.io. | SkinCON: Towards consensus for the uncertainty of skin cancer sub-typing through distribution regularized adaptive predictive sets (DRAPS) | [
"Ren, Zhihang",
"Li, Yunqi",
"Li, Xinyu",
"Xie, Xinrong",
"Duhaime, Erik P.",
"Fang, Kathy",
"Chakraborty, Tapabrata",
"Guo, Yunhui",
"Yu, Stella X.",
"Whitney, David"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 471 |
||
null | https://papers.miccai.org/miccai-2024/paper/1236_paper.pdf | @InProceedings{ Xio_Contrast_MICCAI2024,
author = { Xiong, Honglin and Fang, Yu and Sun, Kaicong and Wang, Yulin and Zong, Xiaopeng and Zhang, Weijun and Wang, Qian },
title = { { Contrast Representation Learning from Imaging Parameters for Magnetic Resonance Image Synthesis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Magnetic Resonance Imaging (MRI) is a widely used noninvasive medical imaging technique that provides excellent contrast for soft tissues, making it invaluable for diagnosis and intervention. Acquiring multiple contrast images is often desirable for comprehensive evaluation and precise disease diagnosis. However, due to technical limitations, patient-related issues, and medical conditions, obtaining all desired MRI contrasts is not always feasible. Cross-contrast MRI synthesis can potentially address this challenge by generating target contrasts based on existing source contrasts. In this work, we propose Contrast Representation Learning (CRL), which explores the changes in MRI contrast by modifying MR sequences. Unlike generative models that treat image generation as an end-to-end cross-domain mapping, CRL aims to uncover the complex relationships between contrasts by embracing the interplay of imaging parameters within this space. By doing so, CRL enhances the fidelity and realism of synthesized MR images, providing a more accurate representation of intricate details. Experimental results on the Fast Spin Echo (FSE) sequence demonstrate the promising performance and generalization capability of CRL, even with limited training data. Moreover, CRL introduces a perspective of considering imaging parameters as implicit coordinates, shedding light on the underlying structure governing contrast variation in MR images. Our code is available at
https://github.com/xionghonglin/CRL_MICCAI_2024. | Contrast Representation Learning from Imaging Parameters for Magnetic Resonance Image Synthesis | [
"Xiong, Honglin",
"Fang, Yu",
"Sun, Kaicong",
"Wang, Yulin",
"Zong, Xiaopeng",
"Zhang, Weijun",
"Wang, Qian"
] | Conference | [
"https://github.com/xionghonglin/CRL_MICCAI_2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 472 |
||
null | https://papers.miccai.org/miccai-2024/paper/1006_paper.pdf | @InProceedings{ She_Spatiotemporal_MICCAI2024,
author = { Shen, Chengzhi and Menten, Martin J. and Bogunović, Hrvoje and Schmidt-Erfurth, Ursula and Scholl, Hendrik P. N. and Sivaprasad, Sobha and Lotery, Andrew and Rueckert, Daniel and Hager, Paul and Holland, Robbie },
title = { { Spatiotemporal Representation Learning for Short and Long Medical Image Time Series } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Analyzing temporal developments is crucial for the accurate prognosis of many medical conditions. Temporal changes that occur over short time scales are key to assessing the health of physiological functions, such as the cardiac cycle. Moreover, tracking longer term developments that occur over months or years in evolving processes, such as age-related macular degeneration (AMD), is essential for accurate prognosis. Despite the importance of both short and long term analysis to clinical decision making, they remain understudied in medical deep learning. State of the art methods for spatiotemporal representation learning, developed for short natural videos, prioritize the detection of temporal constants rather than temporal developments. Moreover, they do not account for varying time intervals between acquisitions, which are essential for contextualizing observed changes. To address these issues, we propose two approaches. First, we combine clip-level contrastive learning with a novel temporal embedding to adapt to irregular time series. Second, we propose masking and predicting latent frame representations of the temporal sequence. Our two approaches outperform all prior methods on temporally-dependent tasks including cardiac output estimation and three prognostic AMD tasks. Overall, this enables the automated analysis of temporal patterns which are typically overlooked in applications of deep learning to medicine. | Spatiotemporal Representation Learning for Short and Long Medical Image Time Series | [
"Shen, Chengzhi",
"Menten, Martin J.",
"Bogunović, Hrvoje",
"Schmidt-Erfurth, Ursula",
"Scholl, Hendrik P. N.",
"Sivaprasad, Sobha",
"Lotery, Andrew",
"Rueckert, Daniel",
"Hager, Paul",
"Holland, Robbie"
] | Conference | 2403.07513 | [
"https://github.com/Leooo-Shen/tvrl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 473 |
|
null | https://papers.miccai.org/miccai-2024/paper/0213_paper.pdf | @InProceedings{ Li_Surfacebased_MICCAI2024,
author = { Li, Yuan and Nie, Xinyu and Zhang, Jianwei and Shi, Yonggang },
title = { { Surface-based and Shape-informed U-fiber Atlasing for Robust Superficial White Matter Connectivity Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Superficial white matter (SWM) U-fibers contain considerable structural connectivity in the human brain; however, related studies are not well-developed compared to the well-studied deep white matter (DWM). Conventionally, SWM U-fibers are obtained through DWM tracking, which is inaccurate on the cortical surface. The significant variability in the cortical folding patterns of the human brain renders a conventional template-based atlas unsuitable for accurately mapping U-fibers within the thin layer of SWM beneath the cortical surface. Recently, new surface-based tracking methods have been developed to reconstruct more complete and reliable U-fibers. To leverage surface-based U-fiber tracking methods, we propose to create a surface-based U-fiber dictionary using high-resolution diffusion MRI (dMRI) data from the Human Connectome Project (HCP). We first identify the major U-fiber bundles and then build a dictionary containing subjects with high groupwise consistency of major U-fiber bundles. Finally, we propose a shape-informed U-fiber atlasing method for robust SWM connectivity analysis. Through experiments, we demonstrate that our shape-informed atlasing method can obtain anatomically more accurate U-fiber representations than the state-of-the-art atlas. Additionally, our method is capable of restoring incomplete U-fibers in low-resolution dMRI, thus helping better characterize SWM connectivity in clinical studies such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI). | Surface-based and Shape-informed U-fiber Atlasing for Robust Superficial White Matter Connectivity Analysis | [
"Li, Yuan",
"Nie, Xinyu",
"Zhang, Jianwei",
"Shi, Yonggang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 474 |
||
null | https://papers.miccai.org/miccai-2024/paper/3454_paper.pdf | @InProceedings{ Bap_Keypoint_MICCAI2024,
author = { Baptista, Tânia and Raposo, Carolina and Marques, Miguel and Antunes, Michel and Barreto, Joao P. },
title = { { Keypoint Matching for Instrument-Free 3D Registration in Video-based Surgical Navigation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Video-based Surgical Navigation (VBSN) inside the articular joint using an arthroscopic camera has proven to have important clinical benefits in arthroscopy. It works by referencing the anatomy and instruments with respect to the system of coordinates of a fiducial marker that is rigidly attached to the bone. In order to overlay surgical plans on the anatomy, VBSN performs registration of a pre-operative model with intra-operative data, which is acquired by means of an instrumented touch probe for surface reconstruction. The downside is that this procedure is typically time-consuming and may cause iatrogenic damage to the anatomy. Performing anatomy reconstruction by using solely the arthroscopic video overcomes these problems but raises new ones, namely the difficulty in accomplishing keypoint detection and matching in bone and cartilage regions that are often very low textured. This paper presents a thorough analysis of the performance of classical and learning-based approaches for keypoint matching in arthroscopic images acquired in the knee joint. It is demonstrated that by employing learning-based methods in such imagery, it becomes possible, for the first time, to perform registration in the context of VBSN without the aid of any instruments, i.e., in an instrument-free manner. | Keypoint Matching for Instrument-Free 3D Registration in Video-based Surgical Navigation | [
"Baptista, Tânia",
"Raposo, Carolina",
"Marques, Miguel",
"Antunes, Michel",
"Barreto, Joao P."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 475 |
||
null | https://papers.miccai.org/miccai-2024/paper/2573_paper.pdf | @InProceedings{ Yan_Spatial_MICCAI2024,
author = { Yang, Yan and Hossain, Md Zakir and Li, Xuesong and Rahman, Shafin and Stone, Eric },
title = { { Spatial Transcriptomics Analysis of Zero-shot Gene Expression Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Spatial transcriptomics (ST) captures gene expression in fine-grained distinct regions (i.e., windows) of a tissue slide. Traditional supervised learning frameworks applied to model ST are constrained to predicting expression of gene types seen during training from slide image windows, failing to generalize to unseen gene types. To overcome this limitation, we propose a semantic guided network, a pioneering zero-shot gene expression prediction framework. Considering a gene type can be described by functionality and phenotype, we dynamically embed a gene type to a vector per its functionality and phenotype, and employ this vector to project slide image windows to gene expression in feature space, unleashing zero-shot expression prediction for unseen gene types. The gene type functionality and phenotype are queried with a carefully designed prompt from a pre-trained large language model. On standard benchmark datasets, we demonstrate competitive zero-shot performance compared to past state-of-the-art supervised learning approaches. Our code is available at https://github.com/Yan98/SGN. | Spatial Transcriptomics Analysis of Zero-shot Gene Expression Prediction | [
"Yang, Yan",
"Hossain, Md Zakir",
"Li, Xuesong",
"Rahman, Shafin",
"Stone, Eric"
] | Conference | 2401.14772 | [
"https://github.com/Yan98/SGN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 476 |
|
null | https://papers.miccai.org/miccai-2024/paper/0926_paper.pdf | @InProceedings{ Gho_MammoCLIP_MICCAI2024,
author = { Ghosh, Shantanu and Poynton, Clare B. and Visweswaran, Shyam and Batmanghelich, Kayhan },
title = { { Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The lack of large and diverse training data on Computer-Aided Diagnosis (CAD) in breast cancer detection has been one of the concerns that impedes the adoption of the system.
Recently, pre-training with large-scale image-text datasets via Vision-Language models (VLM) (e.g., CLIP) partially addresses the issue of robustness and data efficiency in computer vision (CV).
This paper proposes Mammo-CLIP, the first VLM pre-trained on a substantial amount of screening mammogram-report pairs, addressing the challenges of dataset diversity and size. Our experiments on two public datasets demonstrate strong performance in classifying and localizing various mammographic attributes crucial for breast cancer detection, showcasing data efficiency and robustness similar to CLIP in CV. We also propose Mammo-FActOR, a novel feature attribution method, to provide spatial interpretation of representation with sentence-level granularity within mammography reports. Code is available publicly (we will release the model checkpoints upon decision): https://github.com/annonymous-vision/miccai. | Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography | [
"Ghosh, Shantanu",
"Poynton, Clare B.",
"Visweswaran, Shyam",
"Batmanghelich, Kayhan"
] | Conference | 2405.12255 | [
"https://github.com/batmanlab/Mammo-CLIP"
] | https://huggingface.co/papers/2405.12255 | 0 | 0 | 0 | 4 | [
"shawn24/Mammo-CLIP"
] | [] | [] | [
"shawn24/Mammo-CLIP"
] | [] | [] | 1 | Poster | 477 |
null | https://papers.miccai.org/miccai-2024/paper/2046_paper.pdf | @InProceedings{ Xio_Multimodality_MICCAI2024,
author = { Xiong, Zicheng and Zhao, Kai and Ji, Like and Shu, Xujun and Long, Dazhi and Chen, Shengbo and Yang, Fuxing },
title = { { Multi-modality 3D CNN Transformer for Assisting Clinical Decision in Intracerebral Hemorrhage } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Intracerebral hemorrhage (ICH) is a cerebrovascular disease with high mortality and morbidity rates. Early-stage ICH patients often lack clear surgical indications, which is quite challenging for neurosurgeons to make treatment decisions. Currently, early treatment decisions for ICH primarily rely on the clinical experience of neurosurgeons. Although there have been attempts to combine local CT imaging with clinical data for decision-making, these approaches fail to provide deep semantic analysis and do not fully leverage the synergistic effects between different modalities. To address this issue, this paper introduces a novel multi-modality predictive model that combines CT images and clinical data to provide reliable treatment decisions for ICH patients. Specifically, this model employs a combination of 3D CNN and Transformer to analyze patients’ brain CT scans, effectively capturing the 3D spatial information of intracranial hematomas and surrounding brain tissue. In addition, it utilizes a contrastive language-image pre-training (CLIP) module to extract demographic features and important clinical data and integrates with CT imaging data through a cross-attention mechanism. Furthermore, a novel CNN-based multilayer perceptron (MLP) layer is designed to enhance the understanding of the 3D spatial features. Extensive experiments conducted on real clinical datasets demonstrate that the proposed method significantly improves the accuracy of treatment decisions compared to existing state-of-the-art methods. Code is available at https://github.com/Henry-Xiong/3DCT-ICH. | Multi-modality 3D CNN Transformer for Assisting Clinical Decision in Intracerebral Hemorrhage | [
"Xiong, Zicheng",
"Zhao, Kai",
"Ji, Like",
"Shu, Xujun",
"Long, Dazhi",
"Chen, Shengbo",
"Yang, Fuxing"
] | Conference | [
"https://github.com/Henry-Xiong/3DCT-ICH"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 478 |
||
null | https://papers.miccai.org/miccai-2024/paper/2842_paper.pdf | @InProceedings{ Zhu_When_MICCAI2024,
author = { Zhu, Xi and Zhang, Wei and Li, Yijie and O’Donnell, Lauren J. and Zhang, Fan },
title = { { When Diffusion MRI Meets Diffusion Model: A Novel Deep Generative Model for Diffusion MRI Generation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Diffusion MRI (dMRI) is an advanced imaging technique characterizing tissue microstructure and white matter structural connectivity of the human brain. The demand for high-quality dMRI data is growing, driven by the need for better resolution and improved tissue contrast. However, acquiring high-quality dMRI data is expensive and time-consuming. In this context, deep generative modeling emerges as a promising solution to enhance image quality while minimizing acquisition costs and scanning time. In this study, we propose a novel generative approach to perform dMRI generation using deep diffusion models. It can generate high-dimensional (4D) and high-resolution data, preserving the gradient information and brain structure. We demonstrated our method through an image mapping task aimed at enhancing the quality of dMRI images from 3T to 7T. Our approach demonstrates highly enhanced performance in generating dMRI images when compared to the current state-of-the-art (SOTA) methods. This achievement underscores a substantial progression in enhancing dMRI quality, highlighting the potential of our novel generative approach to revolutionize dMRI imaging standards. | When Diffusion MRI Meets Diffusion Model: A Novel Deep Generative Model for Diffusion MRI Generation | [
"Zhu, Xi",
"Zhang, Wei",
"Li, Yijie",
"O’Donnell, Lauren J.",
"Zhang, Fan"
] | Conference | 2408.12897 | [
"https://github.com/XiZhu-UE/Diffusion-model-meet-dMRI.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 479 |
|
null | https://papers.miccai.org/miccai-2024/paper/2481_paper.pdf | @InProceedings{ Ngu_Towards_MICCAI2024,
author = { Nguyen, Anh Tien and Vuong, Trinh Thi Le and Kwak, Jin Tae },
title = { { Towards a text-based quantitative and explainable histopathology image analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Recently, vision-language pre-trained models have emerged in computational pathology. Previous works generally focused on the alignment of image-text pairs via the contrastive pre-training paradigm. Such pre-trained models have been applied to pathology image classification in zero-shot learning or transfer learning fashion. Herein, we hypothesize that the pre-trained vision-language models can be utilized for quantitative histopathology image analysis through a simple image-to-text retrieval. To this end, we propose a Text-based Quantitative and Explainable histopathology image analysis, which we call TQx. Given a set of histopathology images, we adopt a pre-trained vision-language model to retrieve a word-of-interest pool. The retrieved words are then used to quantify the histopathology images and generate understandable feature embeddings due to the direct mapping to the text description. To evaluate the proposed method, the text-based embeddings of four histopathology image datasets are utilized to perform clustering and classification tasks. The results demonstrate that TQx is able to quantify and analyze histopathology images that are comparable to the prevalent visual models in computational pathology. | Towards a text-based quantitative and explainable histopathology image analysis | [
"Nguyen, Anh Tien",
"Vuong, Trinh Thi Le",
"Kwak, Jin Tae"
] | Conference | 2407.07360 | [
"https://github.com/anhtienng/TQx"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 480 |
|
null | https://papers.miccai.org/miccai-2024/paper/1493_paper.pdf | @InProceedings{ Yu_CLIPDR_MICCAI2024,
author = { Yu, Qinkai and Xie, Jianyang and Nguyen, Anh and Zhao, He and Zhang, Jiong and Fu, Huazhu and Zhao, Yitian and Zheng, Yalin and Meng, Yanda },
title = { { CLIP-DR: Textual Knowledge-Guided Diabetic Retinopathy Grading with Ranking-aware Prompting } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Diabetic retinopathy (DR) is a complication of diabetes and usually takes decades to reach sight-threatening levels. Accurate and robust detection of DR severity is critical for the timely management and treatment of diabetes. However, most current DR grading methods suffer from insufficient robustness to data variability (e.g. colour fundus images), posing a significant difficulty for accurate and robust grading. In this work, we propose a novel DR grading framework CLIP-DR based on three observations: 1) Recent pre-trained visual language models, such as CLIP, showcase a notable capacity for generalisation across various downstream tasks, serving as effective baseline models. 2) The grading of image-text pairs for DR often adheres to a discernible natural sequence, yet most existing DR grading methods have primarily overlooked this aspect. 3) A long-tailed distribution among DR severity levels complicates the grading process. This work proposes a novel ranking-aware prompting strategy to help the CLIP model exploit the ordinal information. Specifically, we sequentially design learnable prompts between neighbouring text-image pairs in two different ranking directions. Additionally, we introduce a Similarity Matrix Smooth module into the structure of CLIP to balance the class distribution. Finally, we perform extensive comparisons with several state-of-the-art methods on the GDRBench benchmark, demonstrating our CLIP-DR’s robustness and superior performance. The implementation code is available at https://github.com/Qinkaiyu/CLIP-DR. | CLIP-DR: Textual Knowledge-Guided Diabetic Retinopathy Grading with Ranking-aware Prompting | [
"Yu, Qinkai",
"Xie, Jianyang",
"Nguyen, Anh",
"Zhao, He",
"Zhang, Jiong",
"Fu, Huazhu",
"Zhao, Yitian",
"Zheng, Yalin",
"Meng, Yanda"
] | Conference | 2407.04068 | [
"https://github.com/Qinkaiyu/CLIP-DR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 481 |
|
null | https://papers.miccai.org/miccai-2024/paper/1343_paper.pdf | @InProceedings{ Ash_DMASTER_MICCAI2024,
author = { Ashraf, Tajamul and Rangarajan, Krithika and Gambhir, Mohit and Gauba, Richa and Arora, Chetan },
title = { { D-MASTER: Mask Annealed Transformer for Unsupervised Domain Adaptation in Breast Cancer Detection from Mammograms } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | We focus on the problem of Unsupervised Domain Adaptation (UDA) for breast cancer detection from mammograms (BCDM). Recent advancements have shown that masked image modeling serves as a robust pretext task for UDA. However, when applied to cross-domain BCDM, these techniques struggle with breast abnormalities such as masses, asymmetries, and micro-calcifications, in part due to the typically much smaller size of region of interest in comparison to natural images. This often results in more false positives per image (FPI) and significant noise in pseudo-labels typically used to bootstrap such techniques. Recognizing these challenges, we introduce a transformer-based Domain-invariant Mask Annealed Student Teacher autoencoder (D-MASTER) framework. D-MASTER adaptively masks and reconstructs multi-scale feature maps, enhancing the model’s ability to capture reliable target domain features. D-MASTER also includes adaptive confidence refinement to filter pseudo-labels, ensuring only high-quality detections are considered. We also provide a bounding box annotated subset of 1000 mammograms from the RSNA Breast Screening Dataset (referred to as RSNA-BSD1K) to support further research in BCDM. We evaluate D-MASTER on multiple BCDM datasets acquired from diverse domains. Experimental results show a significant improvement of 9% and 13% in sensitivity at 0.3 FPI over state-of-the-art UDA techniques on publicly available benchmark INBreast and DDSM datasets respectively. We also report an improvement of 11% and 17% on In-house and RSNA-BSD1K datasets respectively. The source code, pre-trained D-MASTER model, along with RSNA-BSD1K dataset annotations is available at https://dmaster-iitd.github.io/webpage. | D-MASTER: Mask Annealed Transformer for Unsupervised Domain Adaptation in Breast Cancer Detection from Mammograms | [
"Ashraf, Tajamul",
"Rangarajan, Krithika",
"Gambhir, Mohit",
"Gauba, Richa",
"Arora, Chetan"
] | Conference | 2407.06585 | [
"https://github.com/Tajamul21/D-MASTER"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 482 |
|
null | https://papers.miccai.org/miccai-2024/paper/2968_paper.pdf | @InProceedings{ Du_Prompting_MICCAI2024,
author = { Du, Chenlin and Chen, Xiaoxuan and Wang, Jingyi and Wang, Junjie and Li, Zhongsen and Zhang, Zongjiu and Lao, Qicheng },
title = { { Prompting Vision-Language Models for Dental Notation Aware Abnormality Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The large pretrained vision-language models (VLMs) have demonstrated remarkable data efficiency when transferred to the medical domain. However, the successful transfer hinges on the development of effective prompting strategies. Despite progress in this area, the application of VLMs to dentistry, a field characterized by complex, multi-level dental abnormalities and subtle features associated with minor dental issues, remains uncharted territory. To address this, we propose a novel approach for detecting dental abnormalities by prompting VLMs, leveraging the symmetrical structure of the oral cavity and guided by the dental notation system. Our framework consists of two main components: dental notation-aware tooth identification and multi-level dental abnormality detection. Initially, we prompt VLMs with tooth notations for enumerating each tooth to aid subsequent detection. We then initiate a multi-level detection of dental abnormalities with quadrant and tooth codes, prompting global abnormalities across the entire image and local abnormalities on the matched teeth. Our method harmonizes subtle features with global information for local-level abnormality detection. Extensive experiments on the re-annotated DETNEX dataset demonstrate that our proposed framework significantly improves performance by at least 4.3% mAP and 10.8% AP50 compared to state-of-the-art methods. Code and annotations will be released on https://github.com/CDchenlin/DentalVLM. | Prompting Vision-Language Models for Dental Notation Aware Abnormality Detection | [
"Du, Chenlin",
"Chen, Xiaoxuan",
"Wang, Jingyi",
"Wang, Junjie",
"Li, Zhongsen",
"Zhang, Zongjiu",
"Lao, Qicheng"
] | Conference | [
"https://github.com/CDchenlin/DentalVLM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 483 |
||
null | https://papers.miccai.org/miccai-2024/paper/2317_paper.pdf | @InProceedings{ Wan_Crossmodal_MICCAI2024,
author = { Wang, Xiaofei and Huang, Xingxu and Price, Stephen and Li, Chao },
title = { { Cross-modal Diffusion Modelling for Super-resolved Spatial Transcriptomics } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | The recent advancement of spatial transcriptomics (ST) allows the characterization of spatial gene expression within tissue for discovery research. However, current ST platforms suffer from low resolution, hindering in-depth understanding of spatial gene expression. Super-resolution approaches promise to enhance ST maps by integrating histology images with gene expressions of profiled tissue spots. However, current super-resolution methods are limited by restoration uncertainty and mode collapse. Although diffusion models have shown promise in capturing complex interactions between multi-modal conditions, it remains a challenge to integrate histology images and gene expression for super-resolved ST maps. This paper proposes a cross-modal conditional diffusion model for super-resolving ST maps with the guidance of histology images. Specifically, we design a multi-modal disentangling network with cross-modal adaptive modulation to utilize complementary information from histology images and spatial gene expression. Moreover, we propose a dynamic cross-attention modelling strategy to extract hierarchical cell-to-tissue information from histology images. Lastly, we propose a co-expression-based gene-correlation graph network to model the co-expression relationship of multiple genes. Experiments show that our method outperforms other state-of-the-art methods in ST super-resolution on three public datasets. | Cross-modal Diffusion Modelling for Super-resolved Spatial Transcriptomics | [
"Wang, Xiaofei",
"Huang, Xingxu",
"Price, Stephen",
"Li, Chao"
] | Conference | 2404.12973 | [
"https://github.com/XiaofeiWang2018/Diffusion-ST"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 484 |
|
null | https://papers.miccai.org/miccai-2024/paper/0148_paper.pdf | @InProceedings{ Zha_AMultiInformation_MICCAI2024,
author = { Zhang, Jianqiao and Xiong, Hao and Jin, Qiangguo and Feng, Tian and Ma, Jiquan and Xuan, Ping and Cheng, Peng and Ning, Zhiyuan and Ning, Zhiyu and Li, Changyang and Wang, Linlin and Cui, Hui },
title = { { A Multi-Information Dual-Layer Cross-Attention Model for Esophageal Fistula Prognosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Esophageal fistula (EF) is a critical and life-threatening complication following radiotherapy treatment for esophageal cancer (EC). Although tabular clinical data contains other clinically valuable information, it is inherently different from CT images and the heterogeneity among them may impede the effective fusion of multi-modal data and thus degrade the performance of deep learning methods. However, current methodologies do not explicitly address this limitation. To tackle this gap, we present an adaptive multi-information dual-layer cross-attention (MDC) model using both CT images and tabular clinical data for early-stage EF detection before radiotherapy. Our MDC model comprises a clinical data encoder, an adaptive 3D Trans-CNN image encoder, and a dual-layer cross-attention (DualCrossAtt) module. The Image Encoder utilizes both CNN and transformer to extract multi-level local and global features, followed by global depth-wise convolution to remove the redundancy from these features for robust adaptive fusion. To mitigate the heterogeneity among multi-modal features and enhance fusion effectiveness, our DualCrossAtt applies the first layer of a cross-attention mechanism to perform alignment between the features of clinical data and images, generating commonly attended features to the second-layer cross-attention that models the global relationship among multi-modal features for prediction. Furthermore, we introduce a contrastive learning-enhanced hybrid loss function to further boost performance. Comparative evaluations against eight state-of-the-art multi-modality predictive models demonstrate the superiority of our method in EF prediction, with potential to assist personalized stratification and precision EC treatment planning. | A Multi-Information Dual-Layer Cross-Attention Model for Esophageal Fistula Prognosis | [
"Zhang, Jianqiao",
"Xiong, Hao",
"Jin, Qiangguo",
"Feng, Tian",
"Ma, Jiquan",
"Xuan, Ping",
"Cheng, Peng",
"Ning, Zhiyuan",
"Ning, Zhiyu",
"Li, Changyang",
"Wang, Linlin",
"Cui, Hui"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 485 |
||
null | https://papers.miccai.org/miccai-2024/paper/1091_paper.pdf | @InProceedings{ Guo_Stochastic_MICCAI2024,
author = { Guo, Yixiao and Pei, Yuru and Chen, Si and Zhou, Zhi-bo and Xu, Tianmin and Zha, Hongbin },
title = { { Stochastic Anomaly Simulation for Maxilla Completion from Cone-Beam Computed Tomography } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Automated alveolar cleft defect restoration from cone beam computed tomography (CBCT) remains a challenging task, considering large morphological variations due to inter-subject abnormal maxilla development processes and a small cohort of clinical data. Existing works relied on rigid or deformable registration to borrow bony tissues from an unaffected side or a template for bony tissue filling. However, they lack harmony with the surrounding irregular maxilla structures and are limited when faced with bilateral defects. In this paper, we present a stochastic anomaly simulation algorithm for defected CBCT generation, combating limited clinical data and burdensome volumetric image annotation. By respecting the facial fusion process, the proposed anomaly simulation algorithm enables plausible data generation and relieves gaps from clinical data. We propose a weakly supervised volumetric inpainting framework for cleft defect restoration and maxilla completion, taking advantage of anomaly simulation-based data generation and the recent success of deep image inpainting techniques. Extensive experimental results demonstrate that our approach effectively restores defected CBCTs with performance gains over state-of-the-art methods. | Stochastic Anomaly Simulation for Maxilla Completion from Cone-Beam Computed Tomography | [
"Guo, Yixiao",
"Pei, Yuru",
"Chen, Si",
"Zhou, Zhi-bo",
"Xu, Tianmin",
"Zha, Hongbin"
] | Conference | [
"https://github.com/Code-11342/SAS-Restorer.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 486 |
||
null | https://papers.miccai.org/miccai-2024/paper/2909_paper.pdf | @InProceedings{ Xie_VDPF_MICCAI2024,
author = { Xie, Xiaotong and Ye, Yufeng and Yang, Tingting and Huang, Bin and Huang, Bingsheng and Huang, Yi },
title = { { VDPF: Enhancing DVT Staging Performance Using a Global-Local Feature Fusion Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Deep Vein Thrombosis (DVT) presents a high incidence rate and serious health risks. Therefore, accurate staging is essential for formulating effective treatment plans and enhancing prognosis. Recent studies have shown the effectiveness of Black-blood Magnetic Resonance Thrombus Imaging (BTI) in differentiating thrombus stages without necessitating contrast agents. However, the accuracy of clinical DVT staging is still limited by the experience and subjective assessments of radiologists, underscoring the importance of implementing Computer-aided Diagnosis (CAD) systems for objective and precise thrombus staging. Given the small size of thrombi and their high similarity in signal intensity and shape to surrounding tissues, precise staging using CAD technology poses a significant challenge. To address this, we have developed an innovative classification framework that employs a Global-Local Feature Fusion Module (GLFM) for the effective integration of global imaging and lesion-focused local imaging. Within the GLFM, a cross-attention module is designed to capture relevant global features information based on local features. Additionally, the Feature Fusion Focus Network (FFFN) module within the GLFM facilitates the integration of features across various dimensions. The synergy between these modules ensures an effective fusion of local and global features within the GLFM framework. Experimental evidence confirms the superior performance of our proposed GLFM in feature fusion, demonstrating a significant advantage over existing methods in the task of DVT staging. The code is available at https://github.com/xiextong/VDPF. | VDPF: Enhancing DVT Staging Performance Using a Global-Local Feature Fusion Network | [
"Xie, Xiaotong",
"Ye, Yufeng",
"Yang, Tingting",
"Huang, Bin",
"Huang, Bingsheng",
"Huang, Yi"
] | Conference | [
"https://github.com/xiextong/VDPF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 487 |
||
null | https://papers.miccai.org/miccai-2024/paper/1321_paper.pdf | @InProceedings{ Ten_KnowledgeGuided_MICCAI2024,
author = { Teng, Lin and Zhao, Zihao and Huang, Jiawei and Cao, Zehong and Meng, Runqi and Shi, Feng and Shen, Dinggang },
title = { { Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Automatic and accurate segmentation of brain MR images throughout the human lifespan into tissue and structure is crucial for understanding brain development and diagnosing diseases. However, challenges arise from the intricate variations in brain appearance due to rapid early brain development, aging, and disorders, compounded by the limited availability of manually labeled datasets. In response, we present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI. Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels, followed by the incorporation of knowledge-driven embeddings learned from image-text alignment into the models. The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes, enabling models to learn structural feature embeddings across diverse age groups. Experimental findings demonstrate the superiority and robustness of our proposed method, particularly noticeable when employing Swin UNETR as the backbone. Our approach achieves average DSC values of 95.17% and 94.19% for brain tissue and structure segmentation, respectively. Our code is available at https://github.com/TL9792/KGPL. | Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation | [
"Teng, Lin",
"Zhao, Zihao",
"Huang, Jiawei",
"Cao, Zehong",
"Meng, Runqi",
"Shi, Feng",
"Shen, Dinggang"
] | Conference | 2407.21328 | [
"https://github.com/TL9792/KGPL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 488 |
|
null | https://papers.miccai.org/miccai-2024/paper/2162_paper.pdf | @InProceedings{ Cho_Understanding_MICCAI2024,
author = { Chow, Chiyuen and Dan, Tingting and Styner, Martin and Wu, Guorong },
title = { { Understanding Brain Dynamics Through Neural Koopman Operator with Structure-Function Coupling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The fundamental question in neuroscience is to understand the working mechanism of how anatomical structure supports brain function and how remarkable functional fluctuations emerge ubiquitous behaviors. We formulate this inverse problem in the realm of system identification, where we use a geometric scattering transform (GST) to model the structure-function coupling and a neural Koopman operator to uncover dynamic mechanism of the underlying complex system. First, GST is used to construct a collection of measurements by projecting the proxy signal of brain activity into a neural manifold constrained by the geometry of wiring patterns in the brain. Then, we seek to find a Koopman operator to elucidate the complex relationship between partial observations and behavior outcomes with a relatively simpler linear mapping, which allows us to understand functional dynamics in the cliché of control system. Furthermore, we integrate GST and Koopman operator into an end-to-end deep neural network, yielding an explainable model for brain dynamics with a mathematical guarantee. Through rigorous experiments conducted on the Human Connectome Project-Aging (HCP-A) dataset, our method demonstrates state-of-the-art performance in cognitive task classification, surpassing existing benchmarks. More importantly, our method shows great potential in uncovering novel insights of brain dynamics using machine learning approach. | Understanding Brain Dynamics Through Neural Koopman Operator with Structure-Function Coupling | [
"Chow, Chiyuen",
"Dan, Tingting",
"Styner, Martin",
"Wu, Guorong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 489 |
||
null | https://papers.miccai.org/miccai-2024/paper/1737_paper.pdf | @InProceedings{ Hua_TopologicalCycle_MICCAI2024,
author = { Huang, Jinghan and Chen, Nanguang and Qiu, Anqi },
title = { { Topological Cycle Graph Attention Network for Brain Functional Connectivity } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In this study, we introduce a novel Topological Cycle Graph Attention Network (CycGAT), designed to delineate a functional backbone within brain functional graphs—key pathways essential for signal transmission—from non-essential, redundant connections that form cycles around this core structure. We first introduce a cycle incidence matrix that establishes an independent cycle basis within a graph, mapping its relationship with edges. We propose a cycle graph convolution that leverages a cycle adjacency matrix, derived from the cycle incidence matrix, to specifically filter edge signals in a domain of cycles. Additionally, we strengthen the representation power of the cycle graph convolution by adding an attention mechanism, which is further augmented by the introduction of edge positional encodings in cycles, to enhance the topological awareness of CycGAT. We demonstrate CycGAT’s localization through simulation and its efficacy on an ABCD study’s fMRI data (n=8765), comparing it with baseline models. CycGAT outperforms these models, identifying a functional backbone with significantly fewer cycles, crucial for understanding neural circuits related to general intelligence. Our code will be released once accepted. | Topological Cycle Graph Attention Network for Brain Functional Connectivity | [
"Huang, Jinghan",
"Chen, Nanguang",
"Qiu, Anqi"
] | Conference | 2403.19149 | [
"https://github.com/JH-415/CycGAT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 490 |
|
null | https://papers.miccai.org/miccai-2024/paper/1820_paper.pdf | @InProceedings{ Als_Zoom_MICCAI2024,
author = { Alsharid, Mohammad and Yasrab, Robail and Drukker, Lior and Papageorghiou, Aris T. and Noble, J. Alison },
title = { { Zoom Pattern Signatures for Fetal Ultrasound Structures } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | During a fetal ultrasound scan, a sonographer will zoom in and zoom out as they attempt to get clearer images of the anatomical structures of interest. This paper explores how to use this zoom information, an under-utilised piece of information that is extractable from fetal ultrasound images. We explore associating zooming patterns with specific structures. The presence of such patterns would indicate that each individual anatomical structure has a unique signature associated with it, thereby allowing for classification of fetal ultrasound clips without directly feeding the actual fetal ultrasound content into a convolutional neural network. | Zoom Pattern Signatures for Fetal Ultrasound Structures | [
"Alsharid, Mohammad",
"Yasrab, Robail",
"Drukker, Lior",
"Papageorghiou, Aris T.",
"Noble, J. Alison"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 491 |
||
null | https://papers.miccai.org/miccai-2024/paper/3255_paper.pdf | @InProceedings{ Yan_MambaMIL_MICCAI2024,
author = { Yang, Shu and Wang, Yihui and Chen, Hao },
title = { { MambaMIL: Enhancing Long Sequence Modeling with Sequence Reordering in Computational Pathology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Multiple Instance Learning (MIL) has emerged as a dominant paradigm to extract discriminative feature representations within Whole Slide Images (WSIs) in computational pathology. Despite driving notable progress, existing MIL approaches suffer from limitations in facilitating comprehensive and efficient interactions among instances, as well as challenges related to time-consuming computations and overfitting. In this paper, we incorporate the Selective Scan Space State Sequential Model (Mamba) in Multiple Instance Learning (MIL) for long sequence modeling with linear complexity, termed as MambaMIL. By inheriting the capability of vanilla Mamba, MambaMIL demonstrates the ability to comprehensively understand and perceive long sequences of instances. Furthermore, we propose the Sequence Reordering Mamba (SR-Mamba) aware of the order and distribution of instances, which exploits the inherent valuable information embedded within the long sequences. With the SR-Mamba as the core component, MambaMIL can effectively capture more discriminative features and mitigate the challenges associated with overfitting and high computational overhead. Extensive experiments on two public challenging tasks across nine diverse datasets demonstrate that our proposed framework performs favorably against state-of-the-art MIL methods. The code is released at https://github.com/isyangshu/MambaMIL. | MambaMIL: Enhancing Long Sequence Modeling with Sequence Reordering in Computational Pathology | [
"Yang, Shu",
"Wang, Yihui",
"Chen, Hao"
] | Conference | 2403.06800 | [
"https://github.com/isyangshu/MambaMIL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 492 |
|
null | https://papers.miccai.org/miccai-2024/paper/0771_paper.pdf | @InProceedings{ Che_LighTDiff_MICCAI2024,
author = { Chen, Tong and Lyu, Qingcheng and Bai, Long and Guo, Erjian and Gao, Huxin and Yang, Xiaoxiao and Ren, Hongliang and Zhou, Luping },
title = { { LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Advances in endoscopy use in surgeries face challenges like inadequate lighting. Deep learning, notably the Denoising Diffusion Probabilistic Model (DDPM), holds promise for low-light image enhancement in the medical field. However, DDPMs are computationally demanding and slow, limiting their practical medical applications. To bridge this gap, we propose a lightweight DDPM, dubbed LighTDiff. It adopts a T-shape model architecture to capture global structural information using low-resolution images and gradually recover the details in subsequent denoising steps. We further prune the model to significantly reduce the model size while retaining performance. While discarding certain downsampling operations to save parameters leads to instability and low efficiency in convergence during the training, we introduce a Temporal Light Unit (TLU), a plug-and-play module, for more stable training and better performance. TLU associates time steps with denoised image features, establishing temporal dependencies of the denoising steps and improving denoising outcomes. Moreover, while recovering images using the diffusion model, potential spectral shifts were noted. We further introduce a Chroma Balancer (CB) to mitigate this issue. Our LighTDiff outperforms many competitive LLIE methods with exceptional computational efficiency. | LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion | [
"Chen, Tong",
"Lyu, Qingcheng",
"Bai, Long",
"Guo, Erjian",
"Gao, Huxin",
"Yang, Xiaoxiao",
"Ren, Hongliang",
"Zhou, Luping"
] | Conference | 2405.10550 | [
"https://github.com/DavisMeee/LighTDiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 493 |
|
null | https://papers.miccai.org/miccai-2024/paper/3528_paper.pdf | @InProceedings{ Saa_PEMMA_MICCAI2024,
author = { Saadi, Nada and Saeed, Numan and Yaqub, Mohammad and Nandakumar, Karthik },
title = { { PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Imaging modalities such as Computed Tomography (CT) and Positron Emission Tomography (PET) are key in cancer detection, inspiring Deep Neural Network (DNN) models that merge these scans for tumor segmentation. When both CT and PET scans are available, it is common to combine them as two channels of the input to the segmentation model. However, this method requires both scan types during training and inference, posing a challenge due to the limited availability of PET scans, thereby sometimes limiting the process to CT scans only. Hence, there is a need to develop a flexible DNN architecture that can be trained/updated using only CT scans but can effectively utilize PET scans when they become available. In this work, we propose a parameter-efficient multi-modal adaptation (PEMMA) framework for lightweight upgrading of a transformer-based segmentation model trained only on CT scans to also incorporate PET scans. The benefits of the proposed approach are two-fold. Firstly, we leverage the inherent modularity of the transformer architecture and perform low-rank adaptation (LoRA) of the attention weights to achieve parameter-efficient adaptation. Secondly, since the PEMMA framework attempts to minimize cross-modal entanglement, it is possible to subsequently update the combined model using only one modality, without causing catastrophic forgetting of the other modality. Our proposed method achieves results comparable to early fusion techniques with just 8% of the trainable parameters, especially with a remarkable +28% improvement on the average dice score on PET scans when trained on a single modality. | PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image Segmentation | [
"Saadi, Nada",
"Saeed, Numan",
"Yaqub, Mohammad",
"Nandakumar, Karthik"
] | Conference | 2404.13704 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 494 |
|
null | https://papers.miccai.org/miccai-2024/paper/0897_paper.pdf | @InProceedings{ Tan_HySparK_MICCAI2024,
author = { Tang, Fenghe and Xu, Ronghao and Yao, Qingsong and Fu, Xueming and Quan, Quan and Zhu, Heqin and Liu, Zaiyi and Zhou, S. Kevin },
title = { { HySparK: Hybrid Sparse Masking for Large Scale Medical Image Pre-Training } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | The generative self-supervised learning strategy exhibits remarkable learning representational capabilities. However, there is limited attention to end-to-end pre-training methods based on a hybrid architecture of CNN and Transformer, which can learn strong local and global representations simultaneously. To address this issue, we propose a generative pre-training strategy called Hybrid Sparse masKing (HySparK) based on masked image modeling and apply it to large-scale pre-training on medical images. First, we perform a bottom-up 3D hybrid masking strategy on the encoder to keep consistency masking. Then we utilize sparse convolution for the top CNNs and encode unmasked patches for the bottom vision Transformers. Second, we employ a simple hierarchical decoder with skip-connections to achieve dense multi-scale feature reconstruction. Third, we implement our pre-training method on a collection of multiple large-scale 3D medical imaging datasets. Extensive experiments indicate that our proposed pre-training strategy demonstrates robust transfer-ability in supervised downstream tasks and sheds light on HySparK’s promising prospects. The code is available at https://github.com/FengheTan9/HySparK. | HySparK: Hybrid Sparse Masking for Large Scale Medical Image Pre-Training | [
"Tang, Fenghe",
"Xu, Ronghao",
"Yao, Qingsong",
"Fu, Xueming",
"Quan, Quan",
"Zhu, Heqin",
"Liu, Zaiyi",
"Zhou, S. Kevin"
] | Conference | 2408.05815 | [
"https://github.com/FengheTan9/HySparK"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 495 |
|
null | https://papers.miccai.org/miccai-2024/paper/0728_paper.pdf | @InProceedings{ Xu_Temporal_MICCAI2024,
author = { Xu, Jingwen and Zhu, Ye and Lyu, Fei and Wong, Grace Lai-Hung and Yuen, Pong C. },
title = { { Temporal Neighboring Multi-Modal Transformer with Missingness-Aware Prompt for Hepatocellular Carcinoma Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Early prediction of hepatocellular carcinoma (HCC) is necessary to facilitate appropriate surveillance strategy and reduce cancer mortality. Incorporating CT scans and clinical time series can greatly increase the accuracy of predictive models. However, there are two challenges to effective multi-modal learning: (a) CT scans and clinical time series can be asynchronous and irregularly sampled. (b) CT scans are often missing compared with clinical time series. To tackle the above challenges, we propose a Temporal Neighboring Multi-modal Transformer with Missingness-Aware Prompt (TNformer-MP) to integrate clinical time series and available CT scans for HCC prediction. Specifically, to explore the inter-modality temporal correspondence, TNformer-MP exploits a Temporal Neighboring Multimodal Tokenizer (TN-MT) to fuse the CT embedding into its multiple-scale neighboring tokens from clinical time series. To mitigate the performance drop caused by missing CT modality, TNformer-MP exploits a Missingness-aware Prompt-driven Multimodal Tokenizer (MP-MT) that adopts missingness-aware prompts to adjust the encoding of clinical time series tokens. Experiments conducted on a large-scale multimodal dataset of 36,353 patients show that our method achieves superior performance compared with existing methods. | Temporal Neighboring Multi-Modal Transformer with Missingness-Aware Prompt for Hepatocellular Carcinoma Prediction | [
"Xu, Jingwen",
"Zhu, Ye",
"Lyu, Fei",
"Wong, Grace Lai-Hung",
"Yuen, Pong C."
] | Conference | [
"https://github.com/LyapunovStability/TNformer-MP.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 496 |
||
null | https://papers.miccai.org/miccai-2024/paper/2995_paper.pdf | @InProceedings{ Zhe_Rethinking_MICCAI2024,
author = { Zheng, Zixuan and Shi, Yilei and Li, Chunlei and Hu, Jingliang and Zhu, Xiao Xiang and Mou, Lichao },
title = { { Rethinking Cell Counting Methods: Decoupling Counting and Localization } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Cell counting in microscopy images is vital in medicine and biology but extremely tedious and time-consuming to perform manually. While automated methods have advanced in recent years, state-of-the-art approaches tend toward increasingly complex model designs. In this paper, we propose a conceptually simple yet effective decoupled learning scheme for automated cell counting, consisting of separate counter and localizer networks. In contrast to jointly learning counting and density map estimation, we show that decoupling these objectives surprisingly improves results. The counter operates on intermediate feature maps rather than pixel space to leverage global context and produce count estimates, while also generating coarse density maps. The localizer then reconstructs high-resolution density maps that precisely localize individual cells, conditional on the original images and coarse density maps from the counter. Besides, to boost counting accuracy, we further introduce a global message passing module to integrate cross-region patterns. Extensive experiments on four datasets demonstrate that our approach, despite its simplicity, challenges common practice and achieves state-of-the-art performance by significant margins. Our key insight is that decoupled learning alleviates the need to learn counting on high-resolution density maps directly, allowing the model to focus on global features critical for accurate estimates. Code is available at https://github.com/MedAITech/DCL. | Rethinking Cell Counting Methods: Decoupling Counting and Localization | [
"Zheng, Zixuan",
"Shi, Yilei",
"Li, Chunlei",
"Hu, Jingliang",
"Zhu, Xiao Xiang",
"Mou, Lichao"
] | Conference | [
"https://github.com/MedAITech/DCL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 497 |
||
null | https://papers.miccai.org/miccai-2024/paper/0165_paper.pdf | @InProceedings{ Han_Advancing_MICCAI2024,
author = { Han, Woojung and Kim, Chanyoung and Ju, Dayun and Shim, Yumin and Hwang, Seong Jae },
title = { { Advancing Text-Driven Chest X-Ray Generation with Policy-Based Reinforcement Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Recent advances in text-conditioned image generation diffusion models have begun paving the way for new opportunities in modern medical domain, in particular, generating Chest X-rays (CXRs) from diagnostic reports. Nonetheless, to further drive the diffusion models to generate CXRs that faithfully reflect the complexity and diversity of real data, it has become evident that a nontrivial learning approach is needed. In light of this, we propose CXRL, a framework motivated by the potential of reinforcement learning (RL). Specifically, we integrate a policy gradient RL approach with well-designed multiple distinctive CXR-domain specific reward models. This approach guides the diffusion denoising trajectory, achieving precise CXR posture and pathological details. Here, considering the complex medical image environment, we present “RL with Comparative Feedback” (RLCF) for the reward mechanism, a human-like comparative evaluation that is known to be more effective and reliable in complex scenarios compared to direct evaluation. Our CXRL framework includes jointly optimizing learnable adaptive condition embeddings (ACE) and the image generator, enabling the model to produce more accurate and higher perceptual CXR quality. Our extensive evaluation of the MIMIC-CXR-JPG dataset demonstrates the effectiveness of our RL-based tuning approach. Consequently, our CXRL generates pathologically realistic CXRs, establishing a new standard for generating CXRs with high fidelity to real-world clinical scenarios. | Advancing Text-Driven Chest X-Ray Generation with Policy-Based Reinforcement Learning | [
"Han, Woojung",
"Kim, Chanyoung",
"Ju, Dayun",
"Shim, Yumin",
"Hwang, Seong Jae"
] | Conference | 2403.06516 | [
"https://github.com/MICV-yonsei/CXRL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 498 |
|
null | https://papers.miccai.org/miccai-2024/paper/2799_paper.pdf | @InProceedings{ Bay_BiasPruner_MICCAI2024,
author = { Bayasi, Nourhan and Fayyad, Jamil and Bissoto, Alceu and Hamarneh, Ghassan and Garbi, Rafeef },
title = { { BiasPruner: Debiased Continual Learning for Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Continual Learning (CL) is crucial for enabling networks to dynamically adapt as they learn new tasks sequentially, accommodating new data and classes without catastrophic forgetting. Diverging from conventional perspectives on CL, our paper introduces a new perspective wherein forgetting could actually benefit the sequential learning paradigm. Specifically, we present BiasPruner, a CL framework that intentionally forgets spurious correlations in the training data that could lead to shortcut learning. Utilizing a new bias score that measures the contribution of each unit in the network to learning spurious features, BiasPruner prunes those units with the highest bias scores to form a debiased subnetwork preserved for a given task. As BiasPruner learns a new task, it constructs a new debiased subnetwork, potentially incorporating units from previous subnetworks, which improves adaptation and performance on the new task. During inference, BiasPruner employs a simple task-agnostic approach to select the best debiased subnetwork for predictions. We conduct experiments on three medical datasets for skin lesion classification and chest X-ray classification and demonstrate that BiasPruner consistently outperforms SOTA CL methods in terms of classification performance and fairness. Our code is available at: Link. | BiasPruner: Debiased Continual Learning for Medical Image Classification | [
"Bayasi, Nourhan",
"Fayyad, Jamil",
"Bissoto, Alceu",
"Hamarneh, Ghassan",
"Garbi, Rafeef"
] | Conference | 2407.08609 | [
"https://github.com/nourhanb/BiasPruner"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 499 |