Datasets:

Column schema (name, dtype, observed range or distinct values):

bibtex_url                   null
proceedings                  stringlengths    [58, 58]
bibtext                      stringlengths    [511, 974]
abstract                     stringlengths    [92, 2k]
title                        stringlengths    [30, 207]
authors                      sequencelengths  [1, 22]
id                           stringclasses    1 value
arxiv_id                     stringlengths    [0, 10]
GitHub                       sequencelengths  [1, 1]
paper_page                   stringclasses    14 values
n_linked_authors             int64            [-1, 1]
upvotes                      int64            [-1, 1]
num_comments                 int64            [-1, 0]
n_authors                    int64            [-1, 10]
Models                       sequencelengths  [0, 4]
Datasets                     sequencelengths  [0, 1]
Spaces                       sequencelengths  [0, 0]
old_Models                   sequencelengths  [0, 4]
old_Datasets                 sequencelengths  [0, 1]
old_Spaces                   sequencelengths  [0, 0]
paper_page_exists_pre_conf   int64            [0, 1]
type                         stringclasses    2 values
unique_id                    int64            [0, 855]
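Below is a minimal sketch (not part of the original card) showing how a table with these columns could be loaded and filtered with the Hugging Face `datasets` library; the repository id is a placeholder, not the dataset's actual location.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the dataset's real repo.
ds = load_dataset("your-namespace/miccai-2024-papers", split="train")

# Keep only entries that link a non-empty GitHub URL and carry an arXiv id.
with_code = ds.filter(
    lambda row: any(url.strip() for url in row["GitHub"]) and bool(row["arxiv_id"])
)

print(len(with_code), "papers with linked code and an arXiv id")
print(with_code[0]["title"], with_code[0]["arxiv_id"])
```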
null
https://papers.miccai.org/miccai-2024/paper/1368_paper.pdf
@InProceedings{ El_Joint_MICCAI2024, author = { El Nahhas, Omar S. M. and Wölflein, Georg and Ligero, Marta and Lenz, Tim and van Treeck, Marko and Khader, Firas and Truhn, Daniel and Kather, Jakob Nikolas }, title = { { Joint multi-task learning improves weakly-supervised biomarker prediction in computational pathology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Deep Learning (DL) can predict biomarkers directly from digitized cancer histology in a weakly-supervised setting. Recently, the prediction of continuous biomarkers through regression-based DL has seen an increasing interest. Nonetheless, clinical decision making often requires a categorical outcome. Consequently, we developed a weakly-supervised joint multi-task Transformer architecture which has been trained and evaluated on four public patient cohorts for the prediction of two key predictive biomarkers, microsatellite instability (MSI) and homologous recombination deficiency (HRD), trained with auxiliary regression tasks related to the tumor microenvironment. Moreover, we perform a comprehensive benchmark of 16 task balancing approaches for weakly-supervised joint multi-task learning in computational pathology. Using our novel approach, we outperform the state of the art by +7.7% and +4.1% as measured by the area under the receiver operating characteristic, and enhance clustering of latent embeddings by +8% and +5%, for the prediction of MSI and HRD in external cohorts, respectively.
Joint multi-task learning improves weakly-supervised biomarker prediction in computational pathology
[ "El Nahhas, Omar S. M.", "Wölflein, Georg", "Ligero, Marta", "Lenz, Tim", "van Treeck, Marko", "Khader, Firas", "Truhn, Daniel", "Kather, Jakob Nikolas" ]
Conference
2403.03891
[ "https://github.com/KatherLab/joint-mtl-cpath" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
800
null
https://papers.miccai.org/miccai-2024/paper/2178_paper.pdf
@InProceedings{ Ans_Algorithmic_MICCAI2024, author = { Ansari, Faizanuddin and Chakraborti, Tapabrata and Das, Swagatam }, title = { { Algorithmic Fairness in Lesion Classification by Mitigating Class Imbalance and Skin Tone Bias } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Deep learning models have shown considerable promise in the classification of skin lesions. However, a notable challenge arises from their inherent bias towards dominant skin tones and the issue of imbalanced class representation. This study introduces a novel data augmentation technique designed to address these limitations. Our approach harnesses contextual information from the prevalent class to synthesize various samples representing minority classes. Using a mixup-based algorithm guided by an adaptive sampler, our method effectively tackles bias and class imbalance issues. The adaptive sampler dynamically adjusts sampling probabilities based on the network’s meta-set performance, enhancing overall accuracy. Our research demonstrates the efficacy of this approach in mitigating skin tone bias and achieving robust lesion classification across a spectrum of diverse skin colors from two distinct benchmark datasets, offering promising implications for improving dermatological diagnostic systems.
Algorithmic Fairness in Lesion Classification by Mitigating Class Imbalance and Skin Tone Bias
[ "Ansari, Faizanuddin", "Chakraborti, Tapabrata", "Das, Swagatam" ]
Conference
[ "https://github.com/fa-submit/Submission_M" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
801
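The record above (unique_id 801) describes synthesizing minority-class lesion samples with a mixup-based algorithm guided by an adaptive sampler. The snippet below is only a generic mixup sketch for illustration; the adaptive sampler, meta-set logic, and the authors' actual implementation are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def synthesize_minority_sample(x_minority, x_majority, alpha=0.4, rng=None):
    """Generic mixup: blend a minority-class image with a majority-class one."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # keep the minority image dominant so its label can be reused
    return lam * x_minority + (1.0 - lam) * x_majority

# Usage: create one extra minority-class sample from two 64x64 RGB lesion crops.
rng = np.random.default_rng(1)
mixed = synthesize_minority_sample(rng.random((64, 64, 3)), rng.random((64, 64, 3)))
```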
null
https://papers.miccai.org/miccai-2024/paper/0928_paper.pdf
@InProceedings{ Tia_PANS_MICCAI2024, author = { Tian, Qingyao and Chen, Zhen and Liao, Huai and Huang, Xinyan and Yang, Bingyu and Li, Lujie and Liu, Hongbin }, title = { { PANS: Probabilistic Airway Navigation System for Real-time Robust Bronchoscope Localization } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Accurate bronchoscope localization is essential for pulmonary interventions, by providing six degrees of freedom (DOF) in airway navigation. However, the robustness of current vision-based methods is often compromised in clinical practice, and they struggle to perform in real-time and to generalize across cases unseen during training. To overcome these challenges, we propose a novel Probabilistic Airway Navigation System (PANS), leveraging Monte-Carlo method with pose hypotheses and likelihoods to achieve robust and real-time bronchoscope localization. Specifically, our PANS incorporates diverse visual representations (e.g., odometry and landmarks) by leveraging two key modules, including the Depth-based Motion Inference (DMI) and the Bronchial Semantic Analysis (BSA). To generate the pose hypotheses of bronchoscope for PANS, we devise the DMI to accurately propagate the estimation of pose hypotheses over time. Moreover, to estimate the accurate pose likelihood, we devise the BSA module by effectively distinguishing between similar bronchial regions in endoscopic images, along with a novel metric to assess the congruence between estimated depth maps and the segmented airway structure. Under this probabilistic formulation, our PANS is capable of achieving the 6-DOF bronchoscope localization with superior accuracy and robustness. Extensive experiments on the collected pulmonary intervention dataset comprising 10 clinical cases confirm the advantage of our PANS over state-of-the-arts, in terms of both robustness and generalization in localizing deeper airway branches and the efficiency of real-time inference. The proposed PANS reveals its potential to be a reliable tool in the operating room, promising to enhance the quality and safety of pulmonary interventions.
PANS: Probabilistic Airway Navigation System for Real-time Robust Bronchoscope Localization
[ "Tian, Qingyao", "Chen, Zhen", "Liao, Huai", "Huang, Xinyan", "Yang, Bingyu", "Li, Lujie", "Liu, Hongbin" ]
Conference
2407.05554
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
802
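Record 802 (PANS) frames bronchoscope localization as Monte-Carlo filtering over 6-DOF pose hypotheses weighted by likelihoods. The following is a generic importance-resampling step shown only to illustrate that formulation; PANS's DMI motion model and BSA likelihood are not reproduced, and the function is hypothetical.

```python
import numpy as np

def resample_pose_hypotheses(poses, log_likelihoods, rng=None):
    """Resample 6-DOF pose hypotheses in proportion to their observation likelihoods."""
    rng = rng or np.random.default_rng(0)
    w = np.exp(log_likelihoods - np.max(log_likelihoods))  # stabilised, unnormalised weights
    w /= w.sum()
    idx = rng.choice(len(poses), size=len(poses), p=w)
    return poses[idx]

# Usage: 100 hypotheses, each [tx, ty, tz, roll, pitch, yaw].
poses = np.zeros((100, 6))
resampled = resample_pose_hypotheses(poses, np.random.default_rng(1).normal(size=100))
```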
null
https://papers.miccai.org/miccai-2024/paper/3894_paper.pdf
@InProceedings{ Ngu_Volumeoptimal_MICCAI2024, author = { Nguyen, Nghi and Hou, Tao and Amico, Enrico and Zheng, Jingyi and Huang, Huajun and Kaplan, Alan D. and Petri, Giovanni and Goñi, Joaquín and Kaufmann, Ralph and Zhao, Yize and Duong-Tran, Duy and Shen, Li }, title = { { Volume-optimal persistence homological scaffolds of hemodynamic networks covary with MEG theta-alpha aperiodic dynamics } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Higher-order properties of functional magnetic resonance imaging (fMRI) induced connectivity have been shown to unravel many exclusive topological and dynamical insights beyond pairwise interactions. Nonetheless, whether these fMRI-induced higher-order properties play a role in disentangling other neuroimaging modalities’ insights remains largely unexplored and poorly understood. In this work, by analyzing fMRI data from the Human Connectome Project Young Adult dataset using persistent homology, we discovered that the volume-optimal persistence homological scaffolds of fMRI-based functional connectomes exhibited conservative topological reconfigurations from the resting state to attentional task-positive state. Specifically, while reflecting the extent to which each cortical region contributed to functional cycles following different cognitive demands, these reconfigurations were constrained such that the spatial distribution of cavities in the connectome is relatively conserved. Most importantly, such level of contributions covaried with powers of aperiodic activities mostly within the theta-alpha (4-12 Hz) band measured by magnetoencephalography (MEG). This comprehensive result suggests that fMRI-induced hemodynamics and MEG theta-alpha aperiodic activities are governed by the same functional constraints specific to each cortical morpho-structure. Methodologically, our work paves the way toward an innovative computing paradigm in multimodal neuroimaging topological learning. The code for our analyses is provided in https://github.com/ngcaonghi/scaffold_noise.
Volume-optimal persistence homological scaffolds of hemodynamic networks covary with MEG theta-alpha aperiodic dynamics
[ "Nguyen, Nghi", "Hou, Tao", "Amico, Enrico", "Zheng, Jingyi", "Huang, Huajun", "Kaplan, Alan D.", "Petri, Giovanni", "Goñi, Joaqúın", "Kaufmann, Ralph", "Zhao, Yize", "Duong-Tran, Duy", "Shen, Li" ]
Conference
2407.05060
[ "https://github.com/ngcaonghi/scaffold_noise" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
803
null
https://papers.miccai.org/miccai-2024/paper/0193_paper.pdf
@InProceedings{ Xu_Poisson_MICCAI2024, author = { Xu, Yinsong and Wang, Yipei and Shen, Ziyi and Gayo, Iani J. M. B. and Thorley, Natasha and Punwani, Shonit and Men, Aidong and Barratt, Dean C. and Chen, Qingchao and Hu, Yipeng }, title = { { Poisson Ordinal Network for Gleason Group Estimation Using Bi-Parametric MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
The Gleason groups serve as the primary histological grading system for prostate cancer, providing crucial insights into the cancer’s potential for growth and metastasis. In clinical practice, pathologists determine the Gleason groups based on specimens obtained from ultrasound-guided biopsies. In this study, we investigate the feasibility of directly estimating the Gleason groups from MRI scans to reduce otherwise required biopsies. We identify two characteristics of this task: ordinality and the resulting dependent yet unknown variances between Gleason groups. In addition to the inter-/intra-observer variability in a multi-step Gleason scoring process based on the interpretation of Gleason patterns, our MR-based prediction is also subject to specimen sampling variance and, to a lesser degree, varying MR imaging protocols. To address this challenge, we propose a novel Poisson ordinal network (PON). PONs model the prediction using a Poisson distribution and leverage Poisson encoding and Poisson focal loss to capture a learnable dependency between ordinal classes (here, Gleason groups), rather than relying solely on the numerical ground-truth (e.g. Gleason Groups 1-5 or Gleason Scores 6-10). To improve this modelling efficacy, PONs also employ contrastive learning with a memory bank to regularise intra-class variance, decoupling the memory requirement of contrastive learning from the batch size. Experimental results, based on images labelled by saturation biopsies from 265 prior-biopsy-blind patients and spanning two tasks, demonstrate the superiority and effectiveness of our proposed method.
Poisson Ordinal Network for Gleason Group Estimation Using Bi-Parametric MRI
[ "Xu, Yinsong", "Wang, Yipei", "Shen, Ziyi", "Gayo, Iani J. M. B.", "Thorley, Natasha", "Punwani, Shonit", "Men, Aidong", "Barratt, Dean C.", "Chen, Qingchao", "Hu, Yipeng" ]
Conference
2407.05796
[ "https://github.com/Yinsongxu/PON.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
804
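Record 804 (PON) models ordinal Gleason groups with a Poisson distribution. A small sketch of that general idea follows: a network head predicts a single rate, and class probabilities come from a truncated, renormalised Poisson pmf. This is an assumption-laden illustration, not the authors' exact PON head, Poisson encoding, or focal loss.

```python
import torch

def poisson_ordinal_probs(log_rate: torch.Tensor, num_classes: int = 5) -> torch.Tensor:
    """Probabilities over ordinal classes 0..K-1 from a predicted Poisson rate.

    p(k) is proportional to lambda**k * exp(-lambda) / k!, truncated to K classes;
    softmax over the log-pmf performs the renormalisation.
    """
    lam = log_rate.exp().unsqueeze(-1)                      # (B, 1) rate per sample
    k = torch.arange(num_classes, dtype=lam.dtype)          # (K,)
    log_pmf = k * lam.log() - lam - torch.lgamma(k + 1.0)   # (B, K) unnormalised log-probs
    return torch.softmax(log_pmf, dim=-1)

# Usage: two images, predicted log-rates 0.3 and 1.2; class k maps to Gleason group k+1.
probs = poisson_ordinal_probs(torch.tensor([0.3, 1.2]))
```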
null
https://papers.miccai.org/miccai-2024/paper/3427_paper.pdf
@InProceedings{ Cob_Improved_MICCAI2024, author = { Cobb, Robert and Cook, Gary J. R. and Reader, Andrew J. }, title = { { Improved Classification Learning from Highly Imbalanced Multi-Label Datasets of Inflamed Joints in [99mTc]Maraciclatide Imaging of Arthritic Patients by Natural Image and Diffusion Model Augmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Gamma camera imaging of the novel radiopharmaceutical [99mTc]maraciclatide can be used to detect inflammation in patients with rheumatoid arthritis. Due to the novelty of this clinical imaging application, data are especially scarce, with only one dataset composed of 48 patients available for development of classification models. In this work we classify inflammation in individual joints in the hands of patients using only this small dataset. Our methodology combines diffusion models to augment the available training data for this classification task from an otherwise small and imbalanced dataset. We also explore the use of augmenting with a publicly available natural image dataset in combination with a diffusion model. We use a DenseNet model to classify the inflammation of individual joints in the hand. Our results show that, compared to non-augmented baseline classification accuracy, sensitivity, and specificity of 0.79 ± 0.05, 0.50 ± 0.04, and 0.85 ± 0.05, respectively, our method improves these metrics to 0.91 ± 0.02, 0.79 ± 0.11, and 0.93 ± 0.02. When we use an ensemble model and combine natural image augmentation with [99mTc]maraciclatide augmentation, we see performance increase to 0.92 ± 0.02, 0.80 ± 0.09, and 0.95 ± 0.02 for accuracy, sensitivity, and specificity, respectively.
Improved Classification Learning from Highly Imbalanced Multi-Label Datasets of Inflamed Joints in [99mTc]Maraciclatide Imaging of Arthritic Patients by Natural Image and Diffusion Model Augmentation
[ "Cobb, Robert", "Cook, Gary J. R.", "Reader, Andrew J." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
805
null
https://papers.miccai.org/miccai-2024/paper/1760_paper.pdf
@InProceedings{ Fen_Unified_MICCAI2024, author = { Feng, Yidan and Gao, Bingchen and Deng, Sen and Qiu, Anqi and Qin, Jing }, title = { { Unified Multi-Modal Learning for Any Modality Combinations in Alzheimer’s Disease Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Our method addresses unified multi-modal learning in a diverse and imbalanced setting, the key characteristics of medical modalities compared with the extensively studied ones. Different from existing works that assume a fixed or maximum number of modalities for multi-modal learning, our model not only manages any missing scenarios but is also capable of handling new modalities and unseen combinations. We argue that the key to this any-combination model is the proper design of alignment, which should guarantee both modality invariance across diverse inputs and effective modeling of complementarities within the unified metric space. Instead of exact cross-modal alignment, we propose to decouple these two functions into representation-level and task-level alignment, which we empirically show are both indispensable in this task. Moreover, we introduce a tunable modality-agnostic Transformer to unify the representation learning process, which significantly reduces modality-specific parameters and enhances the scalability of our model. The experiments show that the proposed method enables a single model to handle all possible combinations of the six seen modalities and two new modalities in Alzheimer’s Disease diagnosis, with superior performance on longer combinations.
Unified Multi-Modal Learning for Any Modality Combinations in Alzheimer’s Disease Diagnosis
[ "Feng, Yidan", "Gao, Bingchen", "Deng, Sen", "Qiu, Anqi", "Qin, Jing" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
806
null
https://papers.miccai.org/miccai-2024/paper/2321_paper.pdf
@InProceedings{ Wu_Cortical_MICCAI2024, author = { Wu, Wenxuan and Qu, Ruowen and Shi, Dongzi and Xiong, Tong and Xu, Xiangmin and Xing, Xiaofen and Zhang, Xin }, title = { { Cortical Surface Reconstruction from 2D MRI with Segmentation-Constrained Super-Resolution and Representation Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Cortical surface reconstruction typically relies on high-quality 3D brain MRI to establish the structure of the cortex, playing a pivotal role in unveiling neurodevelopmental patterns. However, clinical challenges emerge due to elevated costs and prolonged acquisition times, often resulting in low-quality 2D brain MRI. To optimize the utilization of clinical data for cerebral cortex analysis, we propose a two-stage method for cortical surface reconstruction from 2D brain MRI images. The first stage employs segmentation-constrained MRI super-resolution, concatenating the super-resolution (SR) model and cortical ribbon segmentation model to emphasize cortical regions in the 3D images generated from 2D inputs. In the second stage, two encoders extract features from the original and super-resolution images. Through a shared decoder and a mask-swap module with a multi-process training strategy, cortical surface reconstruction is achieved by mapping features from both the original and super-resolution images to a unified latent space. Experiments on the developing Human Connectome Project (dHCP) dataset demonstrate a significant improvement in geometric accuracy over leading SR-based cortical surface reconstruction methods, facilitating precise cortical surface reconstruction from 2D images.
Cortical Surface Reconstruction from 2D MRI with Segmentation-Constrained Super-Resolution and Representation Learning
[ "Wu, Wenxuan", "Qu, Ruowen", "Shi, Dongzi", "Xiong, Tong", "Xu, Xiangmin", "Xing, Xiaofen", "Zhang, Xin" ]
Conference
[ "https://github.com/SCUT-Xinlab/CSR-from-2D-MRI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
807
null
https://papers.miccai.org/miccai-2024/paper/3457_paper.pdf
@InProceedings{ Ibr_SemiSupervised_MICCAI2024, author = { Ibrahim, Yasin and Warr, Hermione and Kamnitsas, Konstantinos }, title = { { Semi-Supervised Learning for Deep Causal Generative Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Developing models that are capable of answering questions of the form “How would x change if y had been z?” is fundamental to advancing medical image analysis. Training causal generative models that address such counterfactual questions, though, currently requires that all relevant variables have been observed and that the corresponding labels are available in the training data. However, clinical data may not have complete records for all patients and state of the art causal generative models are unable to take full advantage of this. We thus develop, for the first time, a semi-supervised deep causal generative model that exploits the causal relationships between variables to maximise the use of all available data. We explore this in the setting where each sample is either fully labelled or fully unlabelled, as well as the more clinically realistic case of having different labels missing for each sample. We leverage techniques from causal inference to infer missing values and subsequently generate realistic counterfactuals, even for samples with incomplete labels. Code is available at: https://github.com/yi249/ssl-causal
Semi-Supervised Learning for Deep Causal Generative Models
[ "Ibrahim, Yasin", "Warr, Hermione", "Kamnitsas, Konstantinos" ]
Conference
2403.18717
[ "https://github.com/yi249/ssl-causal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
808
null
https://papers.miccai.org/miccai-2024/paper/0983_paper.pdf
@InProceedings{ Yan_Inject_MICCAI2024, author = { Yang, Ziyuan and Chen, Yingyu and Sun, Mengyu and Zhang, Yi }, title = { { Inject Backdoor in Measured Data to Jeopardize Full-Stack Medical Image Analysis System } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Deep learning has achieved remarkable success in the medical domain, which makes it crucial to assess its vulnerabilities in medical systems. This study examines backdoor attack (BA) methods to evaluate the reliability and security of medical image analysis systems. However, most BA methods focus on isolated downstream tasks and are considered post-imaging attacks, missing a comprehensive security assessment of the full-stack medical image analysis systems from data acquisition to analysis. Reconstructing images from measured data for downstream tasks requires complex transformations, which challenge the design of triggers in the measurement domain. Typically, hackers only access measured data in scanners. To tackle this challenge, this paper introduces a novel Learnable Trigger Generation Method (LTGM) for measured data. This pre-imaging attack method aims to attack the downstream task without compromising the reconstruction process or imaging quality. LTGM employs a trigger function in the measurement domain to inject a learned trigger into the measured data. To avoid the bias from handcrafted knowledge, this trigger is formulated by learning from the gradients of two key tasks: reconstruction and analysis. Crucially, LTGM’s trigger strives to balance its impact on analysis with minimal additional noise and artifacts in the reconstructed images by carefully analyzing gradients from both tasks. Comprehensive experiments have been conducted to demonstrate the vulnerabilities in full-stack medical systems and to validate the effectiveness of the proposed method using the public dataset. Our code is available at https://github.com/Deep-Imaging-Group/LTGM.
Inject Backdoor in Measured Data to Jeopardize Full-Stack Medical Image Analysis System
[ "Yang, Ziyuan", "Chen, Yingyu", "Sun, Mengyu", "Zhang, Yi" ]
Conference
[ "https://github.com/Deep-Imaging-Group/LTGM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
809
null
https://papers.miccai.org/miccai-2024/paper/1949_paper.pdf
@InProceedings{ Li_VCLIPSeg_MICCAI2024, author = { Li, Lei and Lian, Sheng and Luo, Zhiming and Wang, Beizhan and Li, Shaozi }, title = { { VCLIPSeg: Voxel-wise CLIP-Enhanced model for Semi-Supervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Semi-supervised learning has emerged as a critical approach for addressing medical image segmentation with limited annotation, and pseudo labeling-based methods made significant progress for this task. However, the varying quality of pseudo labels poses a challenge to model generalization. In this paper, we propose a Voxel-wise CLIP-enhanced model for semi-supervised medical image Segmentation (VCLIPSeg). Our model incorporates three modules: Voxel-Wise Prompts Module (VWPM), Vision-Text Consistency Module (VTCM), and Dynamic Labeling Branch (DLB). The VWPM integrates CLIP embeddings in a voxel-wise manner, learning the semantic relationships among pixels. The VTCM constrains the image prototype features, reducing the impact of noisy data. The DLB adaptively generates pseudo-labels, effectively leveraging the unlabeled data. Experimental results on the Left Atrial (LA) dataset and Pancreas-CT dataset demonstrate the superiority of our method over state-of-the-art approaches in terms of the Dice score. For instance, it achieves a Dice score of 88.51% using only 5% labeled data from the LA dataset.
VCLIPSeg: Voxel-wise CLIP-Enhanced model for Semi-Supervised Medical Image Segmentation
[ "Li, Lei", "Lian, Sheng", "Luo, Zhiming", "Wang, Beizhan", "Li, Shaozi" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
810
null
https://papers.miccai.org/miccai-2024/paper/2098_paper.pdf
@InProceedings{ Li_SDFPlane_MICCAI2024, author = { Li, Hao and Shan, Jiwei and Wang, Hesheng }, title = { { SDFPlane: Explicit Neural Surface Reconstruction of Deformable Tissues } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Three-dimensional reconstruction of soft tissues from stereoscopic surgical videos is crucial for enhancing various medical applications. Existing methods often struggle to generate accurate soft tissue geometries or suffer from slow network convergence. To address these challenges, we introduce SDFPlane, an innovative method for fast and precise geometric reconstruction of surgical scenes. This approach efficiently captures scene deformation using a spatial-temporal structure encoder and combines an SDF decoder with a color decoder to accurately model the scene’s geometry and color. Subsequently, we synthesize color images and depth maps with SDF-based volume rendering. Additionally, we implement an error-guided importance sampling strategy, which directs the network’s focus towards areas that are not fully optimized during training. Comparative analysis on multiple public datasets demonstrates that SDFPlane accelerates optimization by over 10× compared to existing SDF-based methods while maintaining state-of-the-art rendering quality. Code is available at: https://github.com/IRMVLab/SDFPlane.git
SDFPlane: Explicit Neural Surface Reconstruction of Deformable Tissues
[ "Li, Hao", "Shan, Jiwei", "Wang, Hesheng" ]
Conference
[ "https://github.com/IRMVLab/SDFPlane" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
811
null
https://papers.miccai.org/miccai-2024/paper/0782_paper.pdf
@InProceedings{ Zha_MoreStyle_MICCAI2024, author = { Zhao, Haoyu and Dong, Wenhui and Yu, Rui and Zhao, Zhou and Du, Bo and Xu, Yongchao }, title = { { MoreStyle: Relax Low-frequency Constraint of Fourier-based Image Reconstruction in Generalizable Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
The task of single-source domain generalization (SDG) in medical image segmentation is crucial due to frequent domain shifts in clinical image datasets. To address the challenge of poor generalization across different domains, we introduce a Plug-and-Play module for data augmentation called MoreStyle. MoreStyle diversifies image styles by relaxing low-frequency constraints in Fourier space, guiding the image reconstruction network. With the help of adversarial learning, MoreStyle further expands the style range and pinpoints the most intricate style combinations within latent features. To handle significant style variations, we introduce an uncertainty-weighted loss. This loss emphasizes hard-to-classify pixels resulting only from style shifts while mitigating true hard-to-classify pixels in both MoreStyle-generated and original images. Extensive experiments on two widely used benchmarks demonstrate that the proposed MoreStyle effectively helps to achieve good domain generalization ability, and has the potential to further boost the performance of some state-of-the-art SDG methods.
MoreStyle: Relax Low-frequency Constraint of Fourier-based Image Reconstruction in Generalizable Medical Image Segmentation
[ "Zhao, Haoyu", "Dong, Wenhui", "Yu, Rui", "Zhao, Zhou", "Du, Bo", "Xu, Yongchao" ]
Conference
2403.11689
[ "https://github.com/zhaohaoyu376/morestyle" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
812
null
https://papers.miccai.org/miccai-2024/paper/1001_paper.pdf
@InProceedings{ Gu_3DDX_MICCAI2024, author = { Gu, Yi and Otake, Yoshito and Uemura, Keisuke and Takao, Masaki and Soufi, Mazen and Okada, Seiji and Sugano, Nobuhiko and Talbot, Hugues and Sato, Yoshinobu }, title = { { 3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Radiography is widely used in orthopedics for its affordability and low radiation exposure. 3D reconstruction from a single radiograph, so-called 2D-3D reconstruction, offers the possibility of various clinical applications, but achieving clinically viable accuracy and computational efficiency is still an unsolved challenge. Unlike other areas in computer vision, X-ray imaging’s unique properties, such as ray penetration and standard geometry, have not been fully exploited. We propose a novel approach that simultaneously learns multiple depth maps (front and back surfaces of multiple bones) derived from the X-ray image to computed tomography (CT) registration. The proposed method not only leverages the standard geometry characteristic of X-ray imaging but also enhances the precision of the reconstruction of the whole surface. Our study involved 600 CT and 2651 X-ray images (4 to 5 posed X-ray images per patient), demonstrating our method’s superiority over traditional approaches with a surface reconstruction error reduction from 4.78 mm to 1.96 mm and further to 1.76 mm using higher resolution and pretraining. This significant accuracy improvement and enhanced computational efficiency suggest our approach’s potential for clinical application.
3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation
[ "Gu, Yi", "Otake, Yoshito", "Uemura, Keisuke", "Takao, Masaki", "Soufi, Mazen", "Okada, Seiji", "Sugano, Nobuhiko", "Talbot, Hugues", "Sato, Yoshinobu" ]
Conference
2409.16702
[ "https://github.com/Kayaba-Akihiko/3DDX" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
813
null
https://papers.miccai.org/miccai-2024/paper/2320_paper.pdf
@InProceedings{ Sha_Fewshot_MICCAI2024, author = { Shakeri, Fereshteh and Huang, Yunshi and Silva-Rodriguez, Julio and Bahig, Houda and Tang, An and Dolz, Jose and Ben Ayed, Ismail }, title = { { Few-shot Adaptation of Medical Vision-Language Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Integrating image and text data through multi-modal learning has emerged as a new approach in medical imaging research, following its successful deployment in computer vision. While considerable efforts have been dedicated to establishing medical foundation models and their zero-shot transfer to downstream tasks, the popular few-shot setting remains relatively unexplored. Following on from the currently strong emergence of this setting in computer vision, we introduce the first structured benchmark for adapting medical vision-language models (VLMs) in a strict few-shot regime and investigate various adaptation strategies commonly used in the context of natural images. Furthermore, we evaluate a simple generalization of the linear-probe adaptation baseline, which seeks an optimal blending of the visual prototypes and text embeddings via learnable class-wise multipliers. Surprisingly, such a text-informed linear probe yields competitive performances in comparison to convoluted prompt-learning and adapter-based strategies, while running considerably faster and accommodating the black-box setting. Our extensive experiments span three different medical modalities and specialized foundation models, nine downstream tasks, and several state-of-the-art few-shot adaptation methods. We made our benchmark and code publicly available to trigger further developments in this emergent subject.
Few-shot Adaptation of Medical Vision-Language Models
[ "Shakeri, Fereshteh", "Huang, Yunshi", "Silva-Rodriguez, Julio", "Bahig, Houda", "Tang, An", "Dolz, Jose", "Ben Ayed, Ismail" ]
Conference
2409.03868
[ "https://github.com/FereshteShakeri/few-shot-MedVLMs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
814
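Record 814 evaluates a text-informed linear probe whose class weights blend few-shot visual prototypes with text embeddings through learnable class-wise multipliers. The PyTorch sketch below follows that description loosely; the exact blending rule, initialisation, and training recipe in the paper may differ.

```python
import torch
import torch.nn as nn

class TextInformedLinearProbe(nn.Module):
    """Linear probe whose class weights mix visual prototypes and text embeddings."""

    def __init__(self, visual_prototypes: torch.Tensor, text_embeddings: torch.Tensor):
        super().__init__()
        self.register_buffer("protos", visual_prototypes)  # (C, D) few-shot class prototypes
        self.register_buffer("texts", text_embeddings)     # (C, D) class-prompt text embeddings
        # One learnable multiplier per class controls how much text is blended in.
        self.alpha = nn.Parameter(torch.ones(visual_prototypes.shape[0], 1))

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        class_weights = self.protos + self.alpha * self.texts  # (C, D)
        return image_features @ class_weights.t()              # (B, C) logits

# Usage with random stand-ins for CLIP-style features (9 classes, 512-d embeddings).
probe = TextInformedLinearProbe(torch.randn(9, 512), torch.randn(9, 512))
logits = probe(torch.randn(4, 512))
```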
null
https://papers.miccai.org/miccai-2024/paper/4190_paper.pdf
@InProceedings{ Dha_VLSMAdapter_MICCAI2024, author = { Dhakal, Manish and Adhikari, Rabin and Thapaliya, Safal and Khanal, Bishesh }, title = { { VLSM-Adapter: Finetuning Vision-Language Segmentation Efficiently with Lightweight Blocks } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Foundation Vision-Language Models (VLMs) trained using large-scale open-domain images and text pairs have recently been adapted to develop Vision-Language Segmentation Models (VLSMs) that allow providing text prompts during inference to guide image segmentation. If robust and powerful VLSMs can be built for medical images, it could aid medical professionals in many clinical tasks where they must spend substantial time delineating the target structure of interest. VLSMs for medical images resort to fine-tuning base VLM or VLSM pretrained on open-domain natural image datasets due to fewer annotated medical image datasets; this fine-tuning is resource-consuming and expensive as it usually requires updating all or a significant fraction of the pretrained parameters. Recently, lightweight blocks called adapters have been proposed in VLMs that keep the pretrained model frozen and only train adapters during fine-tuning, substantially reducing the computing resources required. We introduce a novel adapter, VLSM-Adapter, that can fine-tune pretrained vision-language segmentation models using transformer encoders. Our experiments in widely used CLIP-based segmentation models show that with only 3 million trainable parameters, the VLSM-Adapter outperforms state-of-the-art and is comparable to the upper bound end-to-end fine-tuning. The source code is available at: https://github.com/naamiinepal/vlsm-adapter.
VLSM-Adapter: Finetuning Vision-Language Segmentation Efficiently with Lightweight Blocks
[ "Dhakal, Manish", "Adhikari, Rabin", "Thapaliya, Safal", "Khanal, Bishesh" ]
Conference
2405.06196
[ "https://github.com/naamiinepal/vlsm-adapter" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
815
null
https://papers.miccai.org/miccai-2024/paper/1475_paper.pdf
@InProceedings{ Zhu_Multivariate_MICCAI2024, author = { Zhu, Zhihong and Cheng, Xuxin and Zhang, Yunyan and Chen, Zhaorun and Long, Qingqing and Li, Hongxiang and Huang, Zhiqi and Wu, Xian and Zheng, Yefeng }, title = { { Multivariate Cooperative Game for Image-Report Pairs: Hierarchical Semantic Alignment for Medical Report Generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Medical report generation (MRG) has great clinical potential, which could relieve radiologists from the heavy workloads of report writing. One of the core challenges in MRG is establishing accurate cross-modal semantic alignment between radiology images and their corresponding reports. Toward this goal, previous methods made great attempts to model from case-level alignment to more fine-grained region-level alignment. Although achieving promising results, they (1) either perform implicit alignment through end-to-end training or heavily rely on extra manual annotations and pre-training tools; (2) neglect to leverage the high-level inter-subject relationship semantic (e.g., disease) alignment. In this paper, we present Hierarchical Semantic Alignment (HSA) for MRG in a unified game theory based framework, which achieves semantic alignment at multiple levels. To solve the first issue, we treat image regions and report words as binary game players and value possible alignment between them, thus achieving explicit and adaptive alignment in a self-supervised manner at region-level. To solve the second issue, we treat images, reports, and diseases as ternary game players, which enforces the cross-modal cluster assignment consistency at disease-level. Extensive experiments and analyses on IU-Xray and MIMIC-CXR benchmark datasets demonstrate the superiority of our proposed HSA against various state-of-the-art methods.
Multivariate Cooperative Game for Image-Report Pairs: Hierarchical Semantic Alignment for Medical Report Generation
[ "Zhu, Zhihong", "Cheng, Xuxin", "Zhang, Yunyan", "Chen, Zhaorun", "Long, Qingqing", "Li, Hongxiang", "Huang, Zhiqi", "Wu, Xian", "Zheng, Yefeng" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
816
null
https://papers.miccai.org/miccai-2024/paper/0195_paper.pdf
@InProceedings{ Ji_Diffusionbased_MICCAI2024, author = { Ji, Wen and Chung, Albert C. S. }, title = { { Diffusion-based Domain Adaptation for Medical Image Segmentation using Stochastic Step Alignment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
The purpose of this study is to improve Unsupervised Domain Adaptation (UDA) by utilizing intermediate image distributions from the source domain to the target-like domain during the image generation process. However, image generators like Generative Adversarial Networks (GANs) can be regarded as black boxes due to their complex internal workings, and we can only access the final generated image. This limitation makes it impossible for UDA to use the available knowledge of the intermediate distributions produced during the generation process when performing domain alignment. To address this problem, we propose a novel UDA framework that utilizes diffusion models to capture and transfer an amount of inter-domain knowledge, thereby mitigating the domain shift problem. A coupled structure-preserved diffusion model is designed to synthesize intermediate images in multiple steps, making the intermediate image distributions accessible. A stochastic step alignment strategy is further developed to align feature distributions, resulting in improved adaptation ability. The effectiveness of the proposed method is demonstrated through experiments on abdominal multi-organ segmentation.
Diffusion-based Domain Adaptation for Medical Image Segmentation using Stochastic Step Alignment
[ "Ji, Wen", "Chung, Albert C. S." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
817
null
https://papers.miccai.org/miccai-2024/paper/3200_paper.pdf
@InProceedings{ Raj_Death_MICCAI2024, author = { Rajput, Junaid R. and Weinmueller, Simon and Endres, Jonathan and Dawood, Peter and Knoll, Florian and Maier, Andreas and Zaiss, Moritz }, title = { { Death by Retrospective Undersampling - Caveats and Solutions for Learning-Based MRI Reconstructions } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
This study challenges the validity of retrospective undersampling in MRI data science by analysis via an MRI physics simulation. We demonstrate that retrospective undersampling, a method often used to create training data for reconstruction models, can inherently alter MRI signals from their prospective counterparts. This arises from the sequential nature of MRI acquisition, where undersampling post-acquisition effectively alters the MR sequence and the magnetization dynamics in a non-linear fashion. We show that even in common sequences, this effect can make learning-based reconstructions unreliable. Our simulation provides both (i) a tool for generating accurate prospectively undersampled datasets, for analysis of such effects or for MRI training data augmentation, and (ii) a differentiable reconstruction operator that models undersampling correctly. The provided insights are crucial for the development and evaluation of AI-driven acceleration of diagnostic MRI tools.
Death by Retrospective Undersampling - Caveats and Solutions for Learning-Based MRI Reconstructions
[ "Rajput, Junaid R.", "Weinmueller, Simon", "Endres, Jonathan", "Dawood, Peter", "Knoll, Florian", "Maier, Andreas", "Zaiss, Moritz" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
818
null
https://papers.miccai.org/miccai-2024/paper/1159_paper.pdf
@InProceedings{ Che_MMQL_MICCAI2024, author = { Chen, Qishen and Bian, Minjie and Xu, Huahu }, title = { { MMQL: Multi-Question Learning for Medical Visual Question Answering } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Medical visual question answering (Med-VQA) aims to answer medical questions with given medical images. Current methods are all designed to answer a single question with its image. Still, medical diagnoses are based on multiple factors, so questions related to the same image should be answered together. This paper proposes a novel multi-question learning method to capture the correlation among questions. Notably, for one image, all related questions are given predictions simultaneously. For those images that already have some questions answered, the answered questions can be used as prompts for better diagnosis. Further, to deal with erroneous prompts, an entropy-based prompt pruning algorithm is designed. A shuffle-based algorithm is designed to make the model less sensitive to the order of input questions. In the experiment, patient-level accuracy is introduced to compare the reliability of the models and reflect the effectiveness of our multi-question learning for Med-VQA. The results show that our method, applied on top of recent state-of-the-art Med-VQA models, improves overall accuracy by 3.77% and 4.24% on VQA-RAD and SLAKE, respectively, and patient-level accuracy by 6.90% and 15.63%. The codes are available at: https://github.com/shanziSZ/MMQL.
MMQL: Multi-Question Learning for Medical Visual Question Answering
[ "Chen, Qishen", "Bian, Minjie", "Xu, Huahu" ]
Conference
[ "https://github.com/shanziSZ/MMQL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
819
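Record 819 (MMQL) mentions an entropy-based prompt pruning step that drops unreliable previously answered questions before they are reused as prompts. The snippet below is one plausible reading of that step; the actual criterion and threshold in the paper are unknown, and the values here are assumptions.

```python
import torch

def prune_answered_prompts(answer_logits: torch.Tensor, max_entropy: float = 0.5) -> torch.Tensor:
    """Keep answered questions whose answer distribution has low entropy.

    Returns a boolean mask over the candidate prompts; the 0.5-nat threshold is arbitrary.
    """
    probs = answer_logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy <= max_entropy

# Usage: logits for 3 previously answered questions over a 10-answer vocabulary.
keep = prune_answered_prompts(torch.randn(3, 10))
```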
null
https://papers.miccai.org/miccai-2024/paper/0179_paper.pdf
@InProceedings{ Zha_DepthAware_MICCAI2024, author = { Zhang, Francis Xiatian and Chen, Shuang and Xie, Xianghua and Shum, Hubert P. H. }, title = { { Depth-Aware Endoscopic Video Inpainting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Video inpainting fills in corrupted video content with plausible replacements. While recent advances in endoscopic video inpainting have shown potential for enhancing the quality of endoscopic videos, they mainly repair 2D visual information without effectively preserving crucial 3D spatial details for clinical reference. Depth-aware inpainting methods attempt to preserve these details by incorporating depth information. Still, in endoscopic contexts, they face challenges including reliance on pre-acquired depth maps, less effective fusion designs, and ignorance of the fidelity of 3D spatial details. To address them, we introduce a novel Depth-aware Endoscopic Video Inpainting (DAEVI) framework. It features a Spatial-Temporal Guided Depth Estimation module for direct depth estimation from visual features, a Bi-Modal Paired Channel Fusion module for effective channel-by-channel fusion of visual and depth information, and a Depth Enhanced Discriminator to assess the fidelity of the RGB-D sequence comprised of the inpainted frames and estimated depth images. Experimental evaluations on established benchmarks demonstrate our framework’s superiority, achieving a 2% improvement in PSNR and a 6% reduction in MSE compared to state-of-the-art methods. Qualitative analyses further validate its enhanced ability to inpaint fine details, highlighting the benefits of integrating depth information into endoscopic inpainting.
Depth-Aware Endoscopic Video Inpainting
[ "Zhang, Francis Xiatian", "Chen, Shuang", "Xie, Xianghua", "Shum, Hubert P. H." ]
Conference
2407.02675
[ "https://github.com/FrancisXZhang/DAEVI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
820
null
https://papers.miccai.org/miccai-2024/paper/1957_paper.pdf
@InProceedings{ Lia_Leveraging_MICCAI2024, author = { Liang, Xiao and Wang, Yin and Wang, Di and Jiao, Zhicheng and Zhong, Haodi and Yang, Mengyu and Wang, Quan }, title = { { Leveraging Coarse-to-Fine Grained Representations in Contrastive Learning for Differential Medical Visual Question Answering } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Chest X-ray Differential Medical Visual Question Answering (Diff-MedVQA) is a novel multi-modal task designed to answer questions about diseases, especially their differences, based on a main image and a reference image. Compared to the widely explored visual question answering in the general domain, Diff-MedVQA presents two unique issues: (1) variations in medical images are often subtle, and (2) it is impossible for two chest X-rays taken at different times to be at exactly the same view. These issues significantly hinder the ability to answer questions about medical image differences accurately. To address this, we introduce a two-stage framework featuring Coarse-to-Fine Granularity Contrastive Learning. Specifically, our method initially employs an anatomical encoder and a disease classifier to obtain fine-grained visual features of main and reference images. It then integrates the anatomical knowledge graph to strengthen the relationship between anatomical and disease regions, while Multi-Change Captioning transformers identify the subtle differences between main and reference features. During pre-training, Coarse-to-Fine Granularity Contrastive Learning is used to align knowledge enhanced visual differences with keyword features like anatomical parts, symptoms, and diseases. During the Diff-MedVQA Fine-tuning, the model treats the differential features as context-grounded queries, with Language Modeling guiding answer generation. Extensive experiments on the MIMIC-CXR-Diff dataset validate the effectiveness of our proposed method.
Leveraging Coarse-to-Fine Grained Representations in Contrastive Learning for Differential Medical Visual Question Answering
[ "Liang, Xiao", "Wang, Yin", "Wang, Di", "Jiao, Zhicheng", "Zhong, Haodi", "Yang, Mengyu", "Wang, Quan" ]
Conference
[ "https://github.com/big-white-rabbit/Coarse-to-Fine-Grained-Contrastive-Learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
821
null
https://papers.miccai.org/miccai-2024/paper/0608_paper.pdf
@InProceedings{ Abb_Sparse_MICCAI2024, author = { Abboud, Zeinab and Lombaert, Herve and Kadoury, Samuel }, title = { { Sparse Bayesian Networks: Efficient Uncertainty Quantification in Medical Image Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Efficiently quantifying predictive uncertainty in medical images remains a challenge. While Bayesian neural networks (BNN) offer reliable predictive uncertainty, they require substantial computational resources to train. Although Bayesian approximations such as ensembles have shown promise, they still suffer from high training costs. Existing approaches to reducing computational burden primarily focus on lowering the costs of BNN inference, with limited efforts to improve training efficiency and minimize parameter complexity. This study introduces a training procedure for a sparse (partial) Bayesian network. Our method selectively assigns a subset of parameters as Bayesian by assessing their deterministic saliency through gradient sensitivity analysis. The resulting network combines deterministic and Bayesian parameters, exploiting the advantages of both representations to achieve high task-specific performance and minimize predictive uncertainty. Demonstrated on multi-label ChestMNIST for classification and ISIC, LIDC-IDRI for segmentation, our approach achieves competitive performance and predictive uncertainty estimation by reducing Bayesian parameters by over 95%, significantly reducing computational expenses compared to fully Bayesian and ensemble methods.
Sparse Bayesian Networks: Efficient Uncertainty Quantification in Medical Image Analysis
[ "Abboud, Zeinab", "Lombaert, Herve", "Kadoury, Samuel" ]
Conference
2406.06946
[ "https://github.com/zabboud/SparseBayesianNetwork" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
822
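Record 822 assigns only a small subset of parameters a Bayesian treatment, chosen by assessing deterministic saliency through gradient sensitivity. A rough sketch of one such selection rule (|w * dL/dw| saliency, top 5% kept) is shown below; the paper's exact saliency measure and training procedure may differ.

```python
import torch

def bayesian_parameter_masks(model: torch.nn.Module, loss: torch.Tensor, keep_frac: float = 0.05):
    """Mark the top `keep_frac` of parameters by |w * dL/dw| saliency as 'Bayesian'."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    saliency = torch.cat([(p.detach() * g).abs().flatten() for p, g in zip(params, grads)])
    k = max(1, int(keep_frac * saliency.numel()))
    threshold = torch.topk(saliency, k).values.min()
    return [((p.detach() * g).abs() >= threshold) for p, g in zip(params, grads)]
```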
null
https://papers.miccai.org/miccai-2024/paper/2563_paper.pdf
@InProceedings{ Li_Blind_MICCAI2024, author = { Li, Xing and Yang, Yan and Zheng, Hairong and Xu, Zongben }, title = { { Blind Proximal Diffusion Model for Joint Image and Sensitivity Estimation in Parallel MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Parallel imaging (PI) has demonstrated notable efficiency in accelerating magnetic resonance imaging (MRI) using deep learning techniques. However, these models often face challenges regarding their adaptability and robustness across varying data acquisition. In this work, we introduce a novel joint estimation framework for MR image reconstruction and multi-channel sensitivity maps utilizing denoising diffusion models under blind settings, termed Blind Proximal Diffusion Model in Parallel MRI (BPDM-PMRI). BPDM-PMRI formulates the reconstruction problem as a non-convex optimization task for simultaneous estimation of MR images and sensitivity maps across multiple channels. We employ the proximal alternating linearized minimization (PALM) to iteratively update the reconstructed MR images and sensitivity maps. Distinguished from the traditional proximal operators, our diffusion-based proximal operators provide a more generalizable and stable prior characterization. Once the diffusion model is trained, it can be applied to various sampling trajectories. Comprehensive experiments conducted on publicly available MR datasets demonstrate that BPDM-PMRI outperforms existing methods in terms of denoising effectiveness and generalization capability, while keeping clinically acceptable inference times.
Blind Proximal Diffusion Model for Joint Image and Sensitivity Estimation in Parallel MRI
[ "Li, Xing", "Yang, Yan", "Zheng, Hairong", "Xu, Zongben" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
823
null
https://papers.miccai.org/miccai-2024/paper/1863_paper.pdf
@InProceedings{ Zha_Hierarchical_MICCAI2024, author = { Zhang, Hao and Zhao, Mingyue and Liu, Mingzhu and Luo, Jiejun and Guan, Yu and Zhang, Jin and Xia, Yi and Zhang, Di and Zhou, Xiuxiu and Fan, Li and Liu, Shiyuan and Zhou, S. Kevin }, title = { { Hierarchical multiple instance learning for COPD grading with relatively specific similarity } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Chronic obstructive pulmonary disease (COPD) is a type of obstructive lung disease characterized by persistent airflow limitation and ranks as the third leading cause of death globally. As a heterogeneous lung disorder, the diversity of COPD phenotypes and the complexity of its pathology pose significant challenges for recognizing its grade. Many existing deep learning models based on 3D CT scans overlook the spatial position information of lesion regions and the correlation within different lesion grades. To this end, we define the COPD grading task as a multiple instance learning (MIL) task and propose a hierarchical multiple instance learning (H-MIL) model. Unlike previous MIL models, our H-MIL model pays more attention to the spatial position information of patches and achieves a fine-grained classification of COPD by extracting patch features in a multi-level and granularity-oriented manner. Furthermore, we recognize the significant correlations within lesions of different grades and propose a Relatively Specific Similarity (RSS) function to capture such relative correlations. We demonstrate that H-MIL achieves better performance than competing methods on an internal dataset comprising 2,142 CT scans. Additionally, we validate the effectiveness of the model architecture and loss design through an ablation study, and the robustness of our model on datasets from different centers.
Hierarchical multiple instance learning for COPD grading with relatively specific similarity
[ "Zhang, Hao", "Zhao, Mingyue", "Liu, Mingzhu", "Luo, Jiejun", "Guan, Yu", "Zhang, Jin", "Xia, Yi", "Zhang, Di", "Zhou, Xiuxiu", "Fan, Li", "Liu, Shiyuan", "Zhou, S. Kevin" ]
Conference
[ "https://github.com/Mars-Zhang123/H-MIL.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
824
null
https://papers.miccai.org/miccai-2024/paper/3565_paper.pdf
@InProceedings{ He_SANGRE_MICCAI2024, author = { He, Ying and Miquel, Marc E. and Zhang, Qianni }, title = { { SANGRE: a Shallow Attention Network Guided by Resolution Expansion for MR Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Magnetic Resonance (MR) imaging plays a vital role in clinical diagnostics and treatment planning, with the accurate segmentation of MR images being of paramount importance. Vision transformers have demonstrated remarkable success in medical image segmentation; however, they fall short in capturing the local context. While images of larger sizes provide broad contextual information, such as shape and texture, training deep learning models on such large images demands additional computational resources. To overcome these challenges, we introduce a shallow attention feature aggregation (SAFA) module to progressively enhance features’ local context and filter out redundant features. Moreover, we use feature interactions in a resolution expansion guidance (REG) module to leverage the wide contextual information from the images at higher resolution, ensuring adequate exploitation of small class features, leading to a more accurate segmentation without a significant increase in FLOPs. The model is evaluated on two dynamic MR datasets for speech and cardiac cases. The proposed model outperforms other state-of-the-art methods. The codes are available at https://github.com/Yhe9718/SANGRE.
SANGRE: a Shallow Attention Network Guided by Resolution Expansion for MR Image Segmentation
[ "He, Ying", "Miquel, Marc E.", "Zhang, Qianni" ]
Conference
[ "https://github.com/Yhe9718/SANGRE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
825
null
https://papers.miccai.org/miccai-2024/paper/2347_paper.pdf
@InProceedings{ Yoo_Volumetric_MICCAI2024, author = { Yoon, Siyeop and Tivnan, Matthew and Hu, Rui and Wang, Yuang and Son, Young-don and Wu, Dufan and Li, Xiang and Kim, Kyungsang and Li, Quanzheng }, title = { { Volumetric Conditional Score-based Residual Diffusion Model for PET/MR Denoising } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
PET imaging is a powerful modality offering quantitative assessments of molecular and physiological processes. The necessity for PET denoising arises from the intrinsic high noise levels in PET imaging, which can significantly hinder the accurate interpretation and quantitative analysis of the scans. With advances in deep learning techniques, diffusion model-based PET denoising techniques have shown remarkable performance improvement. However, these models often face limitations when applied to volumetric data. Additionally, many existing diffusion models do not adequately consider the unique characteristics of PET imaging, such as its 3D volumetric nature, leading to the potential loss of anatomic consistency. Our Conditional Score-based Residual Diffusion (CSRD) model addresses these issues by incorporating a refined score function and 3D patch-wise training strategy, optimizing the model for efficient volumetric PET denoising. The CSRD model significantly lowers computational demands and expedites the denoising process. By effectively integrating volumetric data from PET and MRI scans, the CSRD model maintains spatial coherence and anatomical detail. Lastly, we demonstrate that the CSRD model achieves superior denoising performance in both qualitative and quantitative evaluations while maintaining image details and outperforms existing state-of-the-art methods. Our code is available at: \url{https://github.com/siyeopyoon/Residual-Diffusion-Model-for-PET-MR-Denoising}
Volumetric Conditional Score-based Residual Diffusion Model for PET/MR Denoising
[ "Yoon, Siyeop", "Tivnan, Matthew", "Hu, Rui", "Wang, Yuang", "Son, Young-don", "Wu, Dufan", "Li, Xiang", "Kim, Kyungsang", "Li, Quanzheng" ]
Conference
2410.00184
[ "https://github.com/siyeopyoon/Residual-Diffusion-Model-for-PET-MR-Denoising" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
826
null
https://papers.miccai.org/miccai-2024/paper/2121_paper.pdf
@InProceedings{ Xio_TAKT_MICCAI2024, author = { Xiong, Conghao and Lin, Yi and Chen, Hao and Zheng, Hao and Wei, Dong and Zheng, Yefeng and Sung, Joseph J. Y. and King, Irwin }, title = { { TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Knowledge transfer from a source to a target domain is vital for whole slide image classification, given the limited dataset size due to high annotation costs. However, domain shift and task discrepancy between datasets can impede this process. To address these issues, we propose a Target-Aware Knowledge Transfer framework using a teacher-student paradigm, enabling a teacher model to learn common knowledge from both domains by actively incorporating unlabelled target images into the teacher model training. The teacher bag features are subsequently adapted to supervise the student model training on the target domain. Despite incorporating the target features during training, the teacher model tends to neglect them under inherent domain shift and task discrepancy. To alleviate this, we introduce a target-aware feature alignment module to establish a transferable latent relationship between the source and target features by solving an optimal transport problem. Experimental results show that models employing knowledge transfer outperform those trained from scratch, and our method achieves state-of-the-art performance among other knowledge transfer methods on various datasets, including TCGA-RCC, TCGA-NSCLC, and Camelyon16. Codes are released at https://github.com/BearCleverProud/TAKT.
TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification
[ "Xiong, Conghao", "Lin, Yi", "Chen, Hao", "Zheng, Hao", "Wei, Dong", "Zheng, Yefeng", "Sung, Joseph J. Y.", "King, Irwin" ]
Conference
2303.05780
[ "https://github.com/BearCleverProud/TAKT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
827
null
https://papers.miccai.org/miccai-2024/paper/0394_paper.pdf
@InProceedings{ Cha_Decoding_MICCAI2024, author = { Chakraborty, Souradeep and Gupta, Rajarsi and Yaskiv, Oksana and Friedman, Constantin and Sheuka, Natallia and Perez, Dana and Friedman, Paul and Zelinsky, Gregory and Saltz, Joel and Samaras, Dimitris }, title = { { Decoding the visual attention of pathologists to reveal their level of expertise } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
We present a method for classifying the expertise of a pathologist based on how they allocated their attention during a cancer reading. We engage this decoding task by developing a novel method for predicting the attention of pathologists as they read Whole-Slide Images (WSIs) of prostate tissue and make cancer grade classifications. Our ground truth measure of a pathologist's attention is the x, y and z (magnification) movement of their viewport as they navigated through WSIs during readings, and to date we have the attention behavior of 43 pathologists reading 123 WSIs. These data revealed that specialists have higher agreement in both their attention and cancer grades compared to general pathologists and residents, suggesting that sufficient information may exist in their attention behavior to classify their expertise level. To attempt this, we trained a transformer-based model to predict the visual attention heatmaps of resident, general, and specialist (Genitourinary) pathologists during Gleason grading. Based solely on a pathologist's attention during a reading, our model was able to predict their level of expertise with 75.3%, 56.1%, and 77.2% accuracy, respectively, better than chance and baseline models. Our model therefore enables a pathologist's expertise level to be easily and objectively evaluated, important for pathology training and competency assessment. Tools developed from our model could be used to help pathology trainees learn how to read WSIs like an expert.
Decoding the visual attention of pathologists to reveal their level of expertise
[ "Chakraborty, Souradeep", "Gupta, Rajarsi", "Yaskiv, Oksana", "Friedman, Constantin", "Sheuka, Natallia", "Perez, Dana", "Friedman, Paul", "Zelinsky, Gregory", "Saltz, Joel", "Samaras, Dimitris" ]
Conference
2403.17255
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
828
null
https://papers.miccai.org/miccai-2024/paper/2277_paper.pdf
@InProceedings{ Das_SimBrainNet_MICCAI2024, author = { Das Chakladar, Debashis and Simistira Liwicki, Foteini and Saini, Rajkumar }, title = { { SimBrainNet: Evaluating Brain Network Similarity for Attention Disorders } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Electroencephalography (EEG)-based attention disorder research seeks to understand brain activity patterns associated with attention. Previous studies have mainly focused on identifying brain regions involved in cognitive processes or classifying Attention-Deficit Hyperactivity Disorder (ADHD) and control subjects. However, analyzing effective brain connectivity networks for specific attentional processes and comparing them has not been explored. Therefore, in this study, we propose multivariate transfer entropy-based connectivity networks for cognitive events and introduce a new similarity measure, “SimBrainNet”, to assess these networks. A high similarity score suggests similar brain dynamics during cognitive events, indicating less attention variability. Our experiment involves 12 individuals with attention disorders (7 children and 5 adolescents). Notably, child participants exhibit lower similarity scores than adolescents, indicating greater changes in attention. We found stronger connectivity patterns in the left pre-frontal cortex for adolescents than for children. Our study highlights the changes in attention levels across various cognitive events, offering insights into the underlying cognitive mechanisms, brain dynamics, and potential deficits in individuals with this disorder.
SimBrainNet: Evaluating Brain Network Similarity for Attention Disorders
[ "Das Chakladar, Debashis", "Simistira Liwicki, Foteini", "Saini, Rajkumar" ]
Conference
2410.09422
[ "https://github.com/DDasChakladar/SimBrainNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
829
null
https://papers.miccai.org/miccai-2024/paper/1903_paper.pdf
@InProceedings{ Mai_Dynamic_MICCAI2024, author = { Maitre, Thomas and Bretin, Elie and Phan, Romain and Ducros, Nicolas and Sdika, Michaël }, title = { { Dynamic Single-Pixel Imaging on an Extended Field of View without Warping the Patterns } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
A single-pixel camera is a spatial-multiplexing device that reconstructs an image from a sequence of projections of the scene onto some patterns. This architecture is used, for example, to assist neurosurgery with hyperspectral imaging. However, capturing dynamic scenes is very challenging: as the different projections measure different frames of the scene, standard reconstruction approaches suffer from strong motion artifacts. This paper presents a general framework to reconstruct a moving scene with two main contributions. First, we extend the field of view of the camera beyond that defined by the spatial light modulator, which dramatically reduces the model mismatch. Second, we propose to build the dynamic system matrix without warping the patterns, effectively dismissing discretization errors. Numerical experiments show that both our contributions are necessary for an artifact-free reconstruction. The influence of a reduced measurement set, as well as robustness to noise and to motion errors, was also evaluated.
Dynamic Single-Pixel Imaging on an Extended Field of View without Warping the Patterns
[ "Maitre, Thomas", "Bretin, Elie", "Phan, Romain", "Ducros, Nicolas", "Sdika, Michaël" ]
Conference
[ "https://github.com/openspyrit/spyrit" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
830
null
https://papers.miccai.org/miccai-2024/paper/1602_paper.pdf
@InProceedings{ Wan_Learnable_MICCAI2024, author = { Wang, Yao and Chen, Jiahao and Huang, Wenjian and Dong, Pei and Qian, Zhen }, title = { { Learnable Skeleton-Based Medical Landmark Estimation with Graph Sparsity and Fiedler Regularizations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Recent developments in heatmap regression-based models have been central to anatomical landmark detection, yet their efficiency is often limited due to the lack of skeletal structure constraints. Despite the notable use of graph convolution networks (GCNs) in human pose estimation and facial landmark detection, manual construction of skeletal structures remains prevalent, presenting challenges in medical contexts with numerous non-intuitive structures. This paper introduces an innovative skeleton construction model for GCNs, integrating graph sparsity and Fiedler regularization, diverging from traditional manual methods. We provide both theoretical validation and a practical implementation of our model, demonstrating its real-world efficacy. Additionally, we have developed two new medical datasets tailored for this research, along with testing on an open dataset. Our results consistently show our method's superior performance and versatility in anatomical landmark detection, establishing a new benchmark in the field, as evidenced by extensive testing across diverse datasets.
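One plausible reading of the two regularizers named in the abstract above is sketched below: an L1 sparsity penalty on a learnable landmark adjacency combined with a term tied to the Fiedler value (the second-smallest eigenvalue of the graph Laplacian), which reflects graph connectivity. The parameterization, weights, sign conventions, and names are assumptions for illustration and may differ from the paper's exact objective.

```python
import torch

def skeleton_regularizer(adj_logits, sparsity_weight=1e-3, fiedler_weight=1e-2):
    # adj_logits: (K, K) learnable logits over K landmarks (hypothetical parameterization)
    adj = torch.sigmoid(adj_logits)
    adj = 0.5 * (adj + adj.t())                  # enforce a symmetric adjacency
    degree = torch.diag(adj.sum(dim=1))
    laplacian = degree - adj                     # unnormalized graph Laplacian
    eigvals = torch.linalg.eigvalsh(laplacian)   # eigenvalues in ascending order
    fiedler_value = eigvals[1]                   # second-smallest eigenvalue (Fiedler value)
    # L1 sparsity on edges plus a Fiedler-value term; weights and sign are assumptions
    return sparsity_weight * adj.sum() + fiedler_weight * fiedler_value
```

In use, such a term would simply be added to the landmark detection loss, e.g. total_loss = heatmap_loss + skeleton_regularizer(model.adj_logits), so that the skeleton is learned jointly with the detector.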
Learnable Skeleton-Based Medical Landmark Estimation with Graph Sparsity and Fiedler Regularizations
[ "Wang, Yao", "Chen, Jiahao", "Huang, Wenjian", "Dong, Pei", "Qian, Zhen" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
831
null
https://papers.miccai.org/miccai-2024/paper/4115_paper.pdf
@InProceedings{ Nat_Pixel2Mechanics_MICCAI2024, author = { Natarajan, Sai and Muñoz-Moya, Estefano and Ruiz Wills, Carlos and Piella, Gemma and Noailly, Jérôme and Humbert, Ludovic and González Ballester, Miguel A. }, title = { { Pixel2Mechanics: Automated biomechanical simulations of high-resolution intervertebral discs from anisotropic MRIs } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Intervertebral disc (IVD) degeneration poses demanding challenges for improved diagnosis and treatment personalization. Biomechanical simulations bridge the gap between phenotypes and functional mechanobiology. However, personalized IVD modelling is hindered by complex manual workflows to obtain meshes suitable for biomechanical analysis using clinical MR data. This study proposes Pixel2Mechanics: a novel pipeline for biomechanical finite element (FE) simulation of high-resolution IVD meshes out of low resolution clinical MRI. We use our geometrical deep learning framework incorporating cross-level feature fusion to generate meshes of the lumbar Annuli Fibrosis (AF) and Nuclei Pulposi (NP), from the L1-L2 to L4-L5 IVD. Further, we improve our framework by proposing a novel optimization method based on differentiable rendering. Next, a custom morphing algorithm based on the Bayesian Coherent Point Drift++ approach generates volumetric FE meshes from the surface meshes, preserving tissue topology through the whole cohort while capturing shape specificities. Mechanical responses from daily load simulations on these FE models were evaluated in three volumes within the IVD: the center of the NP and the two transition zones (posterior and anterior). These were compared with the results obtained with a manual segmentation procedure. This study delivers a fully automated pipeline performing patient-personalized simulations of L1-L2 to L4-L5 IVD spine levels from clinical MRIs. It facilitates functional modeling and further exploration of normal and pathological discs while minimizing manual intervention. These features position the pipeline as a promising candidate for future clinical integration. Our data & code will be made available at: http://www.pixel2mechanics.github.io
Pixel2Mechanics: Automated biomechanical simulations of high-resolution intervertebral discs from anisotropic MRIs
[ "Natarajan, Sai", "Muñoz-Moya, Estefano", "Ruiz Wills, Carlos", "Piella, Gemma", "Noailly, Jérôme", "Humbert, Ludovic", "González Ballester, Miguel A." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
832
null
https://papers.miccai.org/miccai-2024/paper/3036_paper.pdf
@InProceedings{ Sha_Hierarchical_MICCAI2024, author = { Sha, Qingrui and Sun, Kaicong and Xu, Mingze and Li, Yonghao and Xue, Zhong and Cao, Xiaohuan and Shen, Dinggang }, title = { { Hierarchical Symmetric Normalization Registration using Deformation-Inverse Network } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Most existing deep learning-based medical image registration methods estimate a single-directional displacement field between the moving and fixed image pair, resulting in registration errors when there are substantial differences between the to-be-registered image pairs. To solve this issue, we propose a symmetric normalization network to estimate the deformations in a bi-directional way. Specifically, our method learns two bi-directional half-way displacement fields, which warp the moving and fixed images to their mean space. Besides, a symmetric magnitude constraint is designed in the mean space to ensure precise registration. Additionally, a deformation-inverse network is employed to obtain the inverse of the displacement field, which is applied to the inference pipeline to compose the final end-to-end displacement field between the moving and fixed images. During inference, our method first estimates the two half-way displacement fields and then composes one half-way displacement field with the inverse of another half. Moreover, we adopt a multi-level strategy to hierarchically perform registration, for gradually aligning images to their mean space, thereby improving accuracy and smoothness. Experimental results on two datasets demonstrate that the proposed method improves registration performance compared with state-of-the-art algorithms. Our code is available at https://github.com/QingRui-Sha/HSyN.
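A minimal sketch of the displacement-field composition described in the abstract above, assuming 2D fields with channel order (dx, dy) expressed in normalized [-1, 1] coordinates; the paper's 3D, multi-level implementation and its deformation-inverse network are not reproduced here, and the composition ordering depends on the warping convention.

```python
import torch
import torch.nn.functional as F

def identity_grid(height, width, device):
    # Normalized identity sampling grid in [-1, 1], shape (1, H, W, 2), ordered (x, y).
    ys = torch.linspace(-1, 1, height, device=device)
    xs = torch.linspace(-1, 1, width, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).unsqueeze(0)

def compose(u, v):
    # (u o v)(x) = u(x + v(x)) + v(x); u, v: (B, 2, H, W) displacements in
    # normalized units with channel order (dx, dy) -- an assumed convention.
    _, _, height, width = v.shape
    grid = identity_grid(height, width, v.device)
    sample_at = grid + v.permute(0, 2, 3, 1)                    # locations x + v(x)
    u_warped = F.grid_sample(u, sample_at, align_corners=True)  # u evaluated at x + v(x)
    return u_warped + v

# End-to-end field: one half-way field composed with the (network-predicted) inverse
# of the other half-way field, e.g. phi_full = compose(phi_bwd_inverse, phi_fwd).
```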
Hierarchical Symmetric Normalization Registration using Deformation-Inverse Network
[ "Sha, Qingrui", "Sun, Kaicong", "Xu, Mingze", "Li, Yonghao", "Xue, Zhong", "Cao, Xiaohuan", "Shen, Dinggang" ]
Conference
[ "https://github.com/QingRui-Sha/HSyN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
833
null
https://papers.miccai.org/miccai-2024/paper/4218_paper.pdf
@InProceedings{ Bea_Towards_MICCAI2024, author = { Beaudet, Karl-Philippe and Karargyris, Alexandros and El Hadramy, Sidaty and Cotin, Stéphane and Mazellier, Jean-Paul and Padoy, Nicolas and Verde, Juan }, title = { { Towards Real-time Intrahepatic Vessel Identification in Intraoperative Ultrasound-Guided Liver Surgery } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
While laparoscopic liver resection is less prone to complications and maintains patient outcomes compared to traditional open surgery, its complexity hinders widespread adoption due to challenges in representing the liver’s internal structure. Laparoscopic intraoperative ultrasound offers efficient, cost-effective, and radiation-free guidance. Our objective is to aid physicians in identifying internal liver structures using laparoscopic intraoperative ultrasound. We propose a patient-specific approach using preoperative 3D ultrasound liver volume to train a deep learning model for real-time identification of portal tree and branch structures. Our personalized AI model, validated on ex vivo swine livers, achieved superior precision (0.95) and recall (0.93) compared to surgeons, laying groundwork for precise vessel identification in ultrasound-based liver resection. Its adaptability and potential clinical impact promise to advance surgical interventions and improve patient care.
Towards Real-time Intrahepatic Vessel Identification in Intraoperative Ultrasound-Guided Liver Surgery
[ "Beaudet, Karl-Philippe", "Karargyris, Alexandros", "El Hadramy, Sidaty", "Cotin, Stéphane", "Mazellier, Jean-Paul", "Padoy, Nicolas", "Verde, Juan" ]
Conference
2410.03420
[ "https://github.com/CAMMA-public/Lupin/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
834
null
https://papers.miccai.org/miccai-2024/paper/3276_paper.pdf
@InProceedings{ Sam_LS_MICCAI2024, author = { Sambyal, Abhishek Singh and Niyaz, Usma and Shrivastava, Saksham and Krishnan, Narayanan C. and Bathula, Deepti R. }, title = { { LS+: Informed Label Smoothing for Improving Calibration in Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Deep Neural Networks (DNNs) exhibit exceptional performance in various tasks; however, their susceptibility to miscalibration poses challenges in healthcare applications, impacting reliability and trustworthiness. Label smoothing, which prefers soft targets based on a uniform distribution over labels, is a widely used strategy to improve model calibration. We propose an improved strategy, Label Smoothing Plus (LS+), which uses a class-specific prior estimated from the validation set to account for the current model calibration level. We evaluate the effectiveness of our approach by comparing it with state-of-the-art methods on three benchmark medical imaging datasets, using two different architectures and several performance and calibration metrics for the classification task. Experimental results show a notable reduction in calibration error metrics with a nominal improvement in performance compared to other approaches, suggesting that our proposed method provides more reliable prediction probabilities. Code is available at https://github.com/abhisheksambyal/lsplus.
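A minimal sketch of label smoothing with a class-specific prior, in contrast to the uniform prior of standard label smoothing. The mixing form, the function names, and the way the prior matrix would be estimated from validation behaviour are assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ls_plus_targets(labels, prior, eps=0.1):
    # labels: (B,) integer class indices; prior: (C, C) row-stochastic matrix where
    # prior[c] is the class-specific distribution mixed in for true class c
    # (e.g. estimated on a validation set -- an assumption about the exact recipe).
    num_classes = prior.shape[0]
    one_hot = F.one_hot(labels, num_classes).float()
    return (1.0 - eps) * one_hot + eps * prior[labels]

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against soft targets produced above.
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

With a uniform prior (every row equal to 1/C) this reduces to standard label smoothing, which is the baseline the method improves on.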
LS+: Informed Label Smoothing for Improving Calibration in Medical Image Classification
[ "Sambyal, Abhishek Singh", "Niyaz, Usma", "Shrivastava, Saksham", "Krishnan, Narayanan C.", "Bathula, Deepti R." ]
Conference
[ "https://github.com/abhisheksambyal/lsplus" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
835
null
https://papers.miccai.org/miccai-2024/paper/1615_paper.pdf
@InProceedings{ Bae_OCL_MICCAI2024, author = { Baek, Seunghun and Sim, Jaeyoon and Wu, Guorong and Kim, Won Hwa }, title = { { OCL: Ordinal Contrastive Learning for Imputating Features with Progressive Labels } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Accurately discriminating progressive stages of Alzheimer’s Disease (AD) is crucial for early diagnosis and prevention. It often involves multiple imaging modalities to understand the complex pathology of AD, however, acquiring a complete set of images is challenging due to high cost and burden for subjects. In the end, missing data become inevitable which lead to limited sample-size and decrease in precision in downstream analyses. To tackle this challenge, we introduce a holistic imaging feature imputation method that enables to leverage diverse imaging features while retaining all subjects. The proposed method comprises two networks: 1) An encoder to extract modality-independent embeddings and 2) A decoder to reconstruct the original measures conditioned on their imaging modalities. The encoder includes a novel ordinal contrastive loss, which aligns samples in the embedding space according to the progression of AD. We also maximize modality-wise coherence of embeddings within each subject, in conjunction with domain adversarial training algorithms, to further enhance alignment between different imaging modalities. The proposed method promotes our holistic imaging feature imputation across various modalities in the shared embedding space. In the experiments, we show that our networks deliver favorable results for statistical analysis and classification against imputation baselines with Alzheimer’s Disease Neuroimaging Initiative (ADNI) study.
OCL: Ordinal Contrastive Learning for Imputating Features with Progressive Labels
[ "Baek, Seunghun", "Sim, Jaeyoon", "Wu, Guorong", "Kim, Won Hwa" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
836
null
https://papers.miccai.org/miccai-2024/paper/0582_paper.pdf
@InProceedings{ Ber_Topologically_MICCAI2024, author = { Berger, Alexander H. and Lux, Laurin and Stucki, Nico and Bürgin, Vincent and Shit, Suprosanna and Banaszak, Anna and Rueckert, Daniel and Bauer, Ulrich and Paetzold, Johannes C. }, title = { { Topologically faithful multi-class segmentation in medical images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Topological accuracy in medical image segmentation is a highly important property for downstream applications such as network analysis and flow modeling in vessels or cell counting. Recently, significant methodological advancements have brought well-founded concepts from algebraic topology to binary segmentation. However, these approaches have been underexplored in multi-class segmentation scenarios, where topological errors are common. We propose a general loss function for topologically faithful multi-class segmentation extending the recent Betti matching concept, which is based on induced matchings of persistence barcodes. We project the N-class segmentation problem to N single-class segmentation tasks, which allows us to use 1-parameter persistent homology, making training of neural networks computationally feasible. We validate our method on a comprehensive set of four medical datasets with highly variant topological characteristics. Our loss formulation significantly enhances topological correctness in cardiac, cell, artery-vein, and Circle of Willis segmentation.
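The projection of the N-class problem onto N single-class tasks described above could look like the sketch below, where binary_topo_loss stands in for a binary topological loss (for example a Betti-matching-style loss) whose persistent-homology internals are not shown; the names and the simple class averaging are assumptions.

```python
import torch

def multiclass_topo_loss(probs, target, binary_topo_loss):
    # probs: (B, N, H, W) softmax output; target: (B, H, W) integer labels.
    # binary_topo_loss: callable taking a (B, H, W) foreground probability map and a
    # (B, H, W) binary ground truth, returning a scalar loss (placeholder).
    num_classes = probs.shape[1]
    total = probs.new_zeros(())
    for c in range(num_classes):
        fg_prob = probs[:, c]                 # predicted likelihood of class c
        fg_target = (target == c).float()     # one-vs-rest binary ground truth
        total = total + binary_topo_loss(fg_prob, fg_target)
    return total / num_classes
```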
Topologically faithful multi-class segmentation in medical images
[ "Berger, Alexander H.", "Lux, Laurin", "Stucki, Nico", "Bürgin, Vincent", "Shit, Suprosanna", "Banaszak, Anna", "Rueckert, Daniel", "Bauer, Ulrich", "Paetzold, Johannes C." ]
Conference
2403.11001
[ "https://github.com/AlexanderHBerger/multiclass-BettiMatching" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
837
null
https://papers.miccai.org/miccai-2024/paper/3550_paper.pdf
@InProceedings{ Meu_NeuroConText_MICCAI2024, author = { Meudec, Raphaël and Ghayem, Fateme and Dockès, Jérôme and Wassermann, Demian and Thirion, Bertrand }, title = { { NeuroConText: Contrastive Text-to-Brain Mapping for Neuroscientific Literature } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Neuroscientific literature faces challenges in reliability due to limited statistical power, reproducibility issues, and inconsistent terminology. To address these challenges, we introduce NeuroConText, the first brain meta-analysis model that uses a contrastive approach to enhance the association between textual data and brain activation coordinates reported in 20K neuroscientific articles from PubMed Central. NeuroConText integrates the capabilities of recent advancements in large language models (LLM) such as Mistral-7B instead of traditional bag-of-words methods, to better capture the text semantics and improve the association with brain activations. Our method is adapted to processing neuroscientific textual data regardless of length and generalizes well across various textual content: titles, abstracts, and full body text. Our experiments show that NeuroConText significantly outperforms state-of-the-art methods with a threefold increase in recall@10 for linking text to brain activations. Also, NeuroConText allows decoding brain images from text latent representations, successfully maintaining the quality of brain image reconstruction compared to the state-of-the-art.
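A generic sketch of the kind of symmetric contrastive objective the abstract describes, pairing text embeddings with embeddings of the reported activation coordinates of the same article; the projection heads, temperature, and loss weighting used in the paper are not specified here and are assumptions.

```python
import torch
import torch.nn.functional as F

def text_to_brain_contrastive(text_emb, brain_emb, temperature=0.07):
    # text_emb, brain_emb: (B, d) embeddings, matched row-wise within the batch.
    text_emb = F.normalize(text_emb, dim=-1)
    brain_emb = F.normalize(brain_emb, dim=-1)
    logits = text_emb @ brain_emb.t() / temperature           # (B, B) similarity matrix
    labels = torch.arange(logits.shape[0], device=logits.device)
    # Symmetric InfoNCE: each text should retrieve its brain map and vice versa.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```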
NeuroConText: Contrastive Text-to-Brain Mapping for Neuroscientific Literature
[ "Meudec, Raphaël", "Ghayem, Fateme", "Dockès, Jérôme", "Wassermann, Demian", "Thirion, Bertrand" ]
Conference
[ "https://github.com/ghayem/NeuroConText" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
838
null
https://papers.miccai.org/miccai-2024/paper/3487_paper.pdf
@InProceedings{ Zal_Improving_MICCAI2024, author = { Zalevskyi, Vladyslav and Sanchez, Thomas and Roulet, Margaux and Aviles Verdera, Jordina and Hutter, Jana and Kebiri, Hamza and Bach Cuadra, Meritxell }, title = { { Improving cross-domain brain tissue segmentation in fetal MRI with synthetic data } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Segmentation of fetal brain tissue from magnetic resonance imaging (MRI) plays a crucial role in the study of in-utero neurodevelopment. However, automated tools face substantial domain shift challenges as they must be robust to highly heterogeneous clinical data, often limited in numbers and lacking annotations. Indeed, high variability of the fetal brain morphology, MRI acquisition parameters, and super-resolution reconstruction (SR) algorithms adversely affect the model’s performance when evaluated out-of-domain. In this work, we introduce FetalSynthSeg, a domain randomization method to segment fetal brain MRI, inspired by SynthSeg. Our results show that models trained solely on synthetic data outperform models trained on real data in out-of-domain settings, validated on a 120-subject cross-domain dataset. Furthermore, we extend our evaluation to 40 subjects acquired using low-field (0.55T) MRI and reconstructed with novel SR models, showcasing robustness across different magnetic field strengths and SR algorithms. Leveraging a generative synthetic approach, we tackle the domain shift problem in fetal brain MRI and offer compelling prospects for applications in fields with limited and highly heterogeneous data.
Improving cross-domain brain tissue segmentation in fetal MRI with synthetic data
[ "Zalevskyi, Vladyslav", "Sanchez, Thomas", "Roulet, Margaux", "Aviles Verdera, Jordina", "Hutter, Jana", "Kebiri, Hamza", "Bach Cuadra, Meritxell" ]
Conference
2403.15103
[ "https://github.com/Medical-Image-Analysis-Laboratory/FetalSynthSeg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
839
null
https://papers.miccai.org/miccai-2024/paper/3253_paper.pdf
@InProceedings{ Guo_BGDiffSeg_MICCAI2024, author = { Guo, Yilin and Cai, Qingling }, title = { { BGDiffSeg: a Fast Diffusion Model for Skin Lesion Segmentation via Boundary Enhancement and Global Recognition Guidance } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
In the study of skin lesion segmentation, models based on convolutional neural networks (CNNs) and vision transformers (ViTs) have been extensively explored but face challenges in capturing fine details near boundaries. The advent of the Diffusion Probabilistic Model (DPM) offers significant promise for this task, which demands precise boundary segmentation. In this study, we propose BGDiffSeg, a novel skin lesion segmentation model utilizing a wavelet-transform-based diffusion approach to speed up training and denoising, along with a specially designed Diffusion Boundary Enhancement Module (DBEM) and Interactive Bidirectional Attention Module (IBAM) to enhance segmentation accuracy. DBEM enhances boundary features in the diffusion process by integrating extracted boundary information into the decoder. Concurrently, IBAM facilitates dynamic interactions between conditional and generated images at the feature level, thus enhancing the global recognition of target area boundaries. Comprehensive experiments on the ISIC 2016, ISIC 2017, and ISIC 2018 datasets demonstrate BGDiffSeg's superiority in precision and clarity under limited computational resources and inference time, outperforming existing state-of-the-art methods. Our code will be available at https://github.com/erlingzz/BGDiffSeg.
BGDiffSeg: a Fast Diffusion Model for Skin Lesion Segmentation via Boundary Enhancement and Global Recognition Guidance
[ "Guo, Yilin", "Cai, Qingling" ]
Conference
[ "https://github.com/erlingzz/BGDiffSeg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
840
null
https://papers.miccai.org/miccai-2024/paper/0622_paper.pdf
@InProceedings{ Zha_Prompting_MICCAI2024, author = { Zhang, Ling and Yun, Boxiang and Xie, Xingran and Li, Qingli and Li, Xinxing and Wang, Yan }, title = { { Prompting Whole Slide Image Based Genetic Biomarker Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Prediction of genetic biomarkers, e.g., microsatellite instability and BRAF in colorectal cancer is crucial for clinical decision making. In this paper, we propose a whole slide image (WSI) based genetic biomarker prediction method via prompting techniques. Our work aims at addressing the following challenges: (1) extracting foreground instances related to genetic biomarkers from gigapixel WSIs, and (2) the interaction among the fine-grained pathological components in WSIs. Specifically, we leverage large language models to generate medical prompts that serve as prior knowledge in extracting instances associated with genetic biomarkers. We adopt a coarse-to-fine approach to mine biomarker information within the tumor microenvironment. This involves extracting instances related to genetic biomarkers using coarse medical prior knowledge, grouping pathology instances into fine-grained pathological components and mining their interactions. Experimental results on two colorectal cancer datasets show the superiority of our method, achieving 91.49% in AUC for MSI classification. The analysis further shows the clinical interpretability of our method. Code is publicly available at https://github.com/DeepMed-Lab-ECNU/PromptBio.
Prompting Whole Slide Image Based Genetic Biomarker Prediction
[ "Zhang, Ling", "Yun, Boxiang", "Xie, Xingran", "Li, Qingli", "Li, Xinxing", "Wang, Yan" ]
Conference
2407.09540
[ "https://github.com/DeepMed-Lab-ECNU/PromptBio" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
841
null
https://papers.miccai.org/miccai-2024/paper/2402_paper.pdf
@InProceedings{ Fra_SlicerTMS_MICCAI2024, author = { Franke, Loraine and Luo, Jie and Park, Tae Young and Kim, Nam Wook and Rathi, Yogesh and Pieper, Steve and Ning, Lipeng and Haehn, Daniel }, title = { { SlicerTMS: Real-Time Visualization of Transcranial Magnetic Stimulation for Mental Health Treatment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
We present a real-time visualization system for Transcranial Magnetic Stimulation (TMS), a non-invasive neuromodulation technique for treating various brain disorders and mental health diseases. Our solution targets the current challenges of slow and labor-intensive practices in treatment planning. Integrating Deep Learning (DL), our system rapidly predicts electric field (E-field) distributions in 0.2 seconds for precise and effective brain stimulation. The core advancement lies in our tool’s real-time neuronavigation visualization capabilities, which support clinicians in making more informed decisions quickly and effectively. We assess our system’s performance through three studies: First, a real-world use case scenario in a clinical setting, providing concrete feedback on applicability and usability in a practical environment. Second, a comparative analysis with another TMS tool focusing on computational efficiency across various hardware platforms. Lastly, we conducted an expert user study to measure usability and influence in optimizing TMS treatment planning. The system is openly available for community use and further development on GitHub: https://github.com/lorifranke/SlicerTMS.
SlicerTMS: Real-Time Visualization of Transcranial Magnetic Stimulation for Mental Health Treatment
[ "Franke, Loraine", "Luo, Jie", "Park, Tae Young", "Kim, Nam Wook", "Rathi, Yogesh", "Pieper, Steve", "Ning, Lipeng", "Haehn, Daniel" ]
Conference
2305.06459
[ "https://github.com/lorifranke/SlicerTMS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
842
null
https://papers.miccai.org/miccai-2024/paper/4049_paper.pdf
@InProceedings{ Ruf_MultiDataset_MICCAI2024, author = { Ruffini, Filippo and Tronchin, Lorenzo and Wu, Zhuoru and Chen, Wenting and Soda, Paolo and Shen, Linlin and Guarrasi, Valerio }, title = { { Multi-Dataset Multi-Task Learning for COVID-19 Prognosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
In the fight against the COVID-19 pandemic, leveraging artificial intelligence to predict disease outcomes from chest radiographic images represents a significant scientific aim. The challenge, however, lies in the scarcity of large, labeled datasets with compatible tasks for training deep learning models without leading to overfitting. Addressing this issue, we introduce a novel multi-dataset multi-task training framework that predicts COVID-19 prognostic outcomes from chest X-rays (CXR) by integrating correlated datasets from disparate sources, in contrast to conventional multi-task learning approaches, which rely on datasets with multiple and correlated labeling schemes. Our framework hypothesizes that assessing severity scores enhances the model's ability to classify prognostic severity groups, thereby improving its robustness and predictive power. The proposed architecture comprises a deep convolutional network that receives inputs from two publicly available CXR datasets, AIforCOVID for severity prognostic prediction and BRIXIA for severity score assessment, and branches into task-specific fully connected output networks. Moreover, we propose a multi-task loss function, incorporating an indicator function, to exploit multi-dataset integration. The effectiveness and robustness of the proposed approach are demonstrated through significant performance improvements in prognosis classification tasks across 18 different convolutional neural network backbones in different evaluation strategies. This improvement is evident over single-task baselines and standard transfer learning strategies, supported by extensive statistical analysis, showing great application potential.
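One way to realize a multi-task loss gated by dataset membership, in the spirit of the indicator function mentioned above, is sketched below; the per-task losses, variable names, and equal task weighting are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_dataset_loss(logits_cls, logits_score, y_cls, y_score, src):
    # Each batch element carries only one label type: src == 0 marks samples from the
    # prognosis-classification dataset, src == 1 samples from the severity-score dataset.
    # logits_cls: (B, C), logits_score: (B, 1), y_cls: (B,) long, y_score: (B,) float.
    is_cls = (src == 0)
    is_score = (src == 1)
    loss = logits_cls.new_zeros(())
    if is_cls.any():                       # indicator for the classification task
        loss = loss + F.cross_entropy(logits_cls[is_cls], y_cls[is_cls])
    if is_score.any():                     # indicator for the severity-score task
        loss = loss + F.mse_loss(logits_score[is_score].squeeze(-1), y_score[is_score])
    return loss
```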
Multi-Dataset Multi-Task Learning for COVID-19 Prognosis
[ "Ruffini, Filippo", "Tronchin, Lorenzo", "Wu, Zhuoru", "Chen, Wenting", "Soda, Paolo", "Shen, Linlin", "Guarrasi, Valerio" ]
Conference
2405.13771
[ "https://github.com/cosbidev/Multi-Dataset-Multi-Task-Learning-for-COVID-19-Prognosis" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
843
null
https://papers.miccai.org/miccai-2024/paper/3887_paper.pdf
@InProceedings{ Yan_Deform3DGS_MICCAI2024, author = { Yang, Shuojue and Li, Qian and Shen, Daiyun and Gong, Bingchen and Dou, Qi and Jin, Yueming }, title = { { Deform3DGS: Flexible Deformation for Fast Surgical Scene Reconstruction with Gaussian Splatting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Tissue deformation poses a key challenge for accurate surgical scene reconstruction. Despite yielding high reconstruction quality, existing methods suffer from slow rendering speeds and long training times, limiting their intraoperative applicability. Motivated by recent progress in 3D Gaussian Splatting, an emerging technology in real-time 3D rendering, this work presents a novel fast reconstruction framework, termed Deform3DGS, for deformable tissues during endoscopic surgery. Specifically, we introduce 3D GS into surgical scenes by integrating a point cloud initialization to improve reconstruction. Furthermore, we propose a novel flexible deformation modeling scheme (FDM) to learn tissue deformation dynamics at the level of individual Gaussians. Our FDM can model the surface deformation with efficient representations, allowing for real-time rendering performance. More importantly, FDM significantly accelerates surgical scene reconstruction, demonstrating considerable clinical values, particularly in intraoperative settings where time efficiency is crucial. Experiments on DaVinci robotic surgery videos indicate the efficacy of our approach, showcasing superior reconstruction fidelity (PSNR: 37.90) and rendering speed (338.8 FPS) while substantially reducing training time to only 1 minute/scene. Our code is available at https://github.com/jinlab-imvr/Deform3DGS.
Deform3DGS: Flexible Deformation for Fast Surgical Scene Reconstruction with Gaussian Splatting
[ "Yang, Shuojue", "Li, Qian", "Shen, Daiyun", "Gong, Bingchen", "Dou, Qi", "Jin, Yueming" ]
Conference
2405.17835
[ "https://github.com/jinlab-imvr/Deform3DGS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
844
null
https://papers.miccai.org/miccai-2024/paper/1370_paper.pdf
@InProceedings{ Wu_MMFusion_MICCAI2024, author = { Wu, Chengyu and Wang, Chengkai and Zhou, Huiyu and Zhang, Yatao and Wang, Qifeng and Wang, Yaqi and Wang, Shuai }, title = { { MMFusion: Multi-modality Diffusion Model for Lymph Node Metastasis Diagnosis in Esophageal Cancer } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Esophageal cancer is one of the most common types of cancer worldwide and ranks sixth in cancer-related mortality. Accurate computer-assisted diagnosis of cancer progression can help physicians effectively customize personalized treatment plans. Currently, CT-based cancer diagnosis methods have received much attention for their comprehensive ability to examine patients' conditions. However, multi-modal methods are likely to introduce information redundancy, leading to underperformance. In addition, efficient and effective interactions between multi-modal representations need to be further explored, as prognostic correlations in multi-modality features remain insufficiently examined. In this work, we introduce a multi-modal heterogeneous graph-based conditional feature-guided diffusion model for lymph node metastasis diagnosis based on CT images as well as clinical measurements and radiomics data. To explore the intricate relationships between multi-modal features, we construct a heterogeneous graph. Following this, a conditional feature-guided diffusion approach is applied to eliminate information redundancy. Moreover, we propose a masked relational representation learning strategy, aiming to uncover the latent prognostic correlations and priorities of primary tumor and lymph node image representations. Various experimental results validate the effectiveness of our proposed method.
MMFusion: Multi-modality Diffusion Model for Lymph Node Metastasis Diagnosis in Esophageal Cancer
[ "Wu, Chengyu", "Wang, Chengkai", "Zhou, Huiyu", "Zhang, Yatao", "Wang, Qifeng", "Wang, Yaqi", "Wang, Shuai" ]
Conference
2405.09539
[ "https://github.com/wuchengyu123/MMFusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
845
null
https://papers.miccai.org/miccai-2024/paper/3647_paper.pdf
@InProceedings{ Lin_Zeroshot_MICCAI2024, author = { Lin, Xiyue and Du, Chenhe and Wu, Qing and Tian, Xuanyu and Yu, Jingyi and Zhang, Yuyao and Wei, Hongjiang }, title = { { Zero-shot Low-field MRI Enhancement via Denoising Diffusion Driven Neural Representation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Recently, there have been significant advancements in the development of portable low-field (LF) magnetic resonance imaging (MRI) systems. These systems aim to provide low-cost, unshielded, and bedside diagnostic solutions. MRI experiences a diminished signal-to-noise ratio (SNR) at reduced field strengths, which results in severe signal deterioration and poor reconstruction. Therefore, reconstructing a high-field-equivalent image from a low-field MRI is a complex challenge due to the ill-posed nature of the task. In this paper, we introduce a diffusion-model-driven neural representation. We decompose the low-field MRI enhancement problem into a data consistency subproblem and a prior subproblem and solve them in an iterative framework. The diffusion model provides a high-quality high-field (HF) MR image prior, while the implicit neural representation ensures data consistency. Experimental results on simulated LF data and clinical LF data indicate that our proposed method is capable of achieving zero-shot LF MRI enhancement, showing potential for clinical applications.
Zero-shot Low-field MRI Enhancement via Denoising Diffusion Driven Neural Representation
[ "Lin, Xiyue", "Du, Chenhe", "Wu, Qing", "Tian, Xuanyu", "Yu, Jingyi", "Zhang, Yuyao", "Wei, Hongjiang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
846
null
https://papers.miccai.org/miccai-2024/paper/1311_paper.pdf
@InProceedings{ Jai_MMBCD_MICCAI2024, author = { Jain, Kshitiz and Bansal, Aditya and Rangarajan, Krithika and Arora, Chetan }, title = { { MMBCD: Multimodal Breast Cancer Detection from Mammograms with Clinical History } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Mammography serves as a vital tool for breast cancer detection, with screening and diagnostic modalities catering to distinct patient populations. However, in resource-constrained settings, screening mammography may not be feasible, necessitating reliance on diagnostic approaches. Recent advances in deep learning have shown promise in automated malignancy prediction, yet existing methodologies often overlook crucial clinical context inherent in diagnostic mammography. In this study, we propose a novel approach to integrate mammograms and clinical history to enhance breast cancer detection accuracy. To achieve our objective, we leverage recent advances in foundational models, where we use ViT for mammograms and RoBERTa for encoding text-based clinical history. Since current implementations of ViT cannot handle large 4Kx4K mammography scans, we devise a novel framework to first detect regions of interest and then classify them using a multi-instance-learning strategy, while allowing text embeddings from the clinical history to attend to the visual regions of interest from the mammograms. Extensive experimentation demonstrates that our model, MMBCD, successfully incorporates contextual information while preserving image resolution and context, leading to superior results over existing methods, and showcasing its potential to significantly improve breast cancer screening practices. We report an (Accuracy, F1) of (0.96, 0.82), and (0.95, 0.68) on our two in-house test datasets by MMBCD, against (0.91, 0.41), and (0.87, 0.39) by Lava, and (0.84, 0.50), and (0.91, 0.27) by CLIP-ViT; both state-of-the-art multi-modal foundational models.
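A rough sketch of the interaction described above, in which a clinical-history embedding attends over detected regions of interest before classification; the module name, feature dimensions, single-query design, and pooling are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class TextGuidedMILHead(nn.Module):
    """Hypothetical head: a pooled text embedding attends over ROI features."""
    def __init__(self, dim=768, num_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, text_emb, roi_feats):
        # text_emb: (B, dim) pooled embedding of the clinical history (e.g. RoBERTa)
        # roi_feats: (B, R, dim) features of R detected regions of interest (e.g. ViT)
        query = text_emb.unsqueeze(1)                          # (B, 1, dim)
        attended, _ = self.attn(query, roi_feats, roi_feats)   # text attends to ROIs
        return self.cls(attended.squeeze(1))                   # malignancy logits
```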
MMBCD: Multimodal Breast Cancer Detection from Mammograms with Clinical History
[ "Jain, Kshitiz", "Bansal, Aditya", "Rangarajan, Krithika", "Arora, Chetan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
847
null
https://papers.miccai.org/miccai-2024/paper/0518_paper.pdf
@InProceedings{ Zha_SegNeuron_MICCAI2024, author = { Zhang, Yanchao and Guo, Jinyue and Zhai, Hao and Liu, Jing and Han, Hua }, title = { { SegNeuron: 3D Neuron Instance Segmentation in Any EM Volume with a Generalist Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Building a generalist model for neuron instance segmentation from electron microscopy (EM) volumes holds great potential to accelerate data processing and analysis in connectomics. However, the diversity in visual appearances and voxel resolutions presents obstacles to model development. Meanwhile, prompt-based foundation models for segmentation struggle to achieve satisfactory performance due to the inherent complexity and volumetric continuity of neuronal structures. To address this, this paper introduces SegNeuron, a generalist model for dense neuron instance segmentation with strong zero-shot generalizability. To this end, we first construct a multi-resolution, multi-modality, and multi-species volume EM database, named EMNeuron, consisting of over 22 billion voxels, with over 3 billion densely labeled. On this basis, we devise a novel workflow to build the model with customized strategies, including pretraining via multi-scale Gaussian mask reconstruction, domain-mixing finetuning, and foreground-restricted instance segmentation. Experimental results on unseen datasets indicate that SegNeuron not only significantly surpasses existing generalist models, but also achieves competitive or even superior results compared with specialist models. Datasets, codes, and models are available at https://github.com/yanchaoz/SegNeuron.
SegNeuron: 3D Neuron Instance Segmentation in Any EM Volume with a Generalist Model
[ "Zhang, Yanchao", "Guo, Jinyue", "Zhai, Hao", "Liu, Jing", "Han, Hua" ]
Conference
[ "https://github.com/yanchaoz/SegNeuron" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
848
null
https://papers.miccai.org/miccai-2024/paper/0460_paper.pdf
@InProceedings{ Wat_Hierarchical_MICCAI2024, author = { Watawana, Hasindri and Ranasinghe, Kanchana and Mahmood, Tariq and Naseer, Muzammal and Khan, Salman and Shahbaz Khan, Fahad }, title = { { Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Self-supervised representation learning has been highly promising for histopathology image analysis with numerous approaches leveraging their patient-slide-patch hierarchy to learn better representations. In this paper, we explore how the combination of domain specific natural language information with such hierarchical visual representations can benefit rich representation learning for medical image tasks. Building on automated language description generation for features visible in histopathology images, we present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS) for histopathology images. We explore contrastive objectives and granular language description based text alignment at multiple hierarchies to inject language modality information into the visual representations. Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, OpenSRH and TCGA datasets. Our framework also provides better interpretability with our language aligned representation space. The code is available at https://github.com/Hasindri/HLSS.
Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning
[ "Watawana, Hasindri", "Ranasinghe, Kanchana", "Mahmood, Tariq", "Naseer, Muzammal", "Khan, Salman", "Shahbaz Khan, Fahad" ]
Conference
2403.14616
[ "https://github.com/Hasindri/HLSS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
849
null
https://papers.miccai.org/miccai-2024/paper/1951_paper.pdf
@InProceedings{ Liu_GEM_MICCAI2024, author = { Liu, Shaonan and Chen, Wenting and Liu, Jie and Luo, Xiaoling and Shen, Linlin }, title = { { GEM: Context-Aware Gaze EstiMation with Visual Search Behavior Matching for Chest Radiograph } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Gaze estimation is pivotal in human scene comprehension tasks, particularly in medical diagnostic analysis. Eye-tracking technology facilitates the recording of physicians' ocular movements during image interpretation, thereby elucidating their visual attention patterns and information-processing strategies. In this paper, we initially define the context-aware gaze estimation problem in medical radiology report settings. To understand the attention allocation and cognitive behavior of radiologists during the medical image interpretation process, we propose a context-aware Gaze Estimation (GEM) network that utilizes eye gaze data collected from radiologists to simulate their visual search behavior patterns throughout the image interpretation process. It consists of a context-awareness module, visual behavior graph construction, and visual behavior matching. Within the context-awareness module, we achieve intricate multimodal registration by establishing connections between medical reports and images. Subsequently, for a more accurate simulation of genuine visual search behavior patterns, we introduce a visual behavior graph structure, capturing such behavior through high-order relationships (edges) between gaze points (nodes). To maintain the authenticity of visual behavior, we devise a visual behavior-matching approach, adjusting the high-order relationships between them by matching the graph constructed from real and estimated gaze points. Extensive experiments on four publicly available datasets demonstrate the superiority of GEM over existing methods and its strong generalizability, which also provides a new direction for the effective utilization of diverse modalities in medical image interpretation and enhances the interpretability of models in the field of medical imaging. https://github.com/Tiger-SN/GEM
GEM: Context-Aware Gaze EstiMation with Visual Search Behavior Matching for Chest Radiograph
[ "Liu, Shaonan", "Chen, Wenting", "Liu, Jie", "Luo, Xiaoling", "Shen, Linlin" ]
Conference
2408.05502
[ "https://github.com/Tiger-SN/GEM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
850
null
https://papers.miccai.org/miccai-2024/paper/0751_paper.pdf
@InProceedings{ San_Voxel_MICCAI2024, author = { Sanner, Antoine P. and Grauhan, Nils F. and Brockmann, Marc A. and Othman, Ahmed E. and Mukhopadhyay, Anirban }, title = { { Voxel Scene Graph for Intracranial Hemorrhage } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Patients with Intracranial Hemorrhage (ICH) face a potentially life-threatening condition and patient-centered individualized treatment remains challenging due to possible clinical complications. Deep-Learning-based methods can efficiently analyze the routinely acquired head CTs to support the clinical decision-making. The majority of early work focuses on the detection and segmentation of ICH, but does not model the complex relations between ICH and adjacent brain structures. In this work, we design a tailored object detection method for ICH, which we unite with segmentation-grounded Scene Graph Generation (SGG) methods to learn a holistic representation of the clinical cerebral scene. To the best of our knowledge, this is the first application of SGG for 3D voxel images. We evaluate our method on two head-CT datasets and demonstrate that our model can recall up to 74% of clinically relevant relations. This work lays the foundation towards SGG for 3D voxel data. The generated Scene Graphs can already provide insights for the clinician, but are also valuable for all downstream tasks as a compact and interpretable representation.
Voxel Scene Graph for Intracranial Hemorrhage
[ "Sanner, Antoine P.", "Grauhan, Nils F.", "Brockmann, Marc A.", "Othman, Ahmed E.", "Mukhopadhyay, Anirban" ]
Conference
2407.21580
[ "https://github.com/MECLabTUDA/VoxelSceneGraph" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
851
null
https://papers.miccai.org/miccai-2024/paper/3431_paper.pdf
@InProceedings{ Dur_Probabilistic_MICCAI2024, author = { Durso-Finley, Joshua and Barile, Berardino and Falet, Jean-Pierre and Arnold, Douglas L. and Pawlowski, Nick and Arbel, Tal }, title = { { Probabilistic Temporal Prediction of Continuous Disease Trajectories and Treatment Effects Using Neural SDEs } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Personalized medicine based on medical images, including predicting future individualized clinical disease progression and treatment response, would have an enormous impact on healthcare and drug development, particularly for diseases (e.g. multiple sclerosis (MS)) with long term, complex, heterogeneous evolutions and no cure. In this work, we present the first stochastic causal temporal framework to model the continuous temporal evolution of disease progression via Neural Stochastic Differential Equations (NSDE). The proposed causal inference model takes as input the patient’s high dimensional images (MRI) and tabular data, and predicts both factual and counterfactual progression trajectories on different treatments in latent space. The NSDE permits the estimation of high-confidence personalized trajectories and treatment effects. Extensive experiments were performed on a large, multi-centre, proprietary dataset of patient 3D MRI and clinical data acquired during several randomized clinical trials for MS treatments. Our results present the first successful uncertainty-based causal Deep Learning (DL) model to: (a) accurately predict future patient MS disability evolution (e.g. EDSS) and treatment effects leveraging baseline MRI, and (b) permit the discovery of subgroups of patients for which the model has high confidence in their response to treatment even in clinical trials which did not reach their clinical endpoints.
Probabilistic Temporal Prediction of Continuous Disease Trajectories and Treatment Effects Using Neural SDEs
[ "Durso-Finley, Joshua", "Barile, Berardino", "Falet, Jean-Pierre", "Arnold, Douglas L.", "Pawlowski, Nick", "Arbel, Tal" ]
Conference
2406.12807
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
852
null
https://papers.miccai.org/miccai-2024/paper/1228_paper.pdf
@InProceedings{ Su_Crossgraph_MICCAI2024, author = { Su, Huaqiang and Lei, Haijun and Guoliang, Chen and Lei, Baiying }, title = { { Cross-graph Interaction and Diffusion Probability Models for Lung Nodule Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Accurate segmentation of lung nodules in computed tomography (CT) images is crucial to advance the treatment of lung cancer. Methods based on diffusion probabilistic models (DPMs) are widely used in medical image segmentation tasks. Nevertheless, conventional DPMs encounter challenges in medical image segmentation, primarily due to the irregular structure of lung nodules and the inherent resemblance between lung nodules and their surrounding environments. Consequently, this study introduces an innovative architecture known as the dual-branch Diff-UNet to address the challenges associated with lung nodule segmentation effectively. Specifically, the denoising UNet in this architecture interactively processes the semantic information captured by the branches of the Transformer and the convolutional neural network (CNN) through bidirectional connection units. Furthermore, the feature fusion module (FFM) helps integrate the semantic features extracted by the DPM with the locally detailed features captured by the segmentation network. Simultaneously, a lightweight cross-graph interaction (CGI) module is introduced in the decoder, which uses region and edge features as graph nodes to update and propagate cross-domain features and capture the characteristics of object boundaries. Finally, the multi-scale cross module (MCM) synergizes the deep features from the DPM with the edge features from the segmentation network, augmenting the network’s capability to comprehend images. The Diff-UNet has been proven effective through experiments on challenging datasets, including self-collected datasets and LUNA16.
Cross-graph Interaction and Diffusion Probability Models for Lung Nodule Segmentation
[ "Su, Huaqiang", "Lei, Haijun", "Guoliang, Chen", "Lei, Baiying" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
853
null
https://papers.miccai.org/miccai-2024/paper/3978_paper.pdf
@InProceedings{ Tan_VertFound_MICCAI2024, author = { Tang, Jinzhou and Wu, Yinhao and Yao, Zequan and Li, Mingjie and Hong, Yuan and Yu, Dongdong and Gao, Zhifan and Chen, Bin and Zhao, Shen }, title = { { VertFound: Synergizing Semantic and Spatial Understanding for Fine-grained Vertebrae Classification via Foundation Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Achieving automated vertebrae classification in spine images is a crucial yet challenging task due to the repetitive nature of adjacent vertebrae and limited fields of view (FoV). Different from previous methods that leverage the serial information of vertebrae to optimize classification results, we propose VertFound, a framework that harnesses the inherent adaptability and versatility of foundation models for fine-grained vertebrae classification. Specifically, VertFound designs a vertebral positioning with cross-model synergy (VPS) module that efficiently merges semantic information from CLIP and spatial features from SAM, leading to richer feature representations that capture vertebral spatial relationships. Moreover, a novel Wasserstein loss is designed to minimize disparities between image and text feature distributions by continuously optimizing the transport distance between the two distributions, resulting in a more discriminative alignment capability of CLIP for vertebral classification. Extensive evaluations on our vertebral MRI dataset show VertFound exhibits significant improvements in both identification rate (IDR) and identification accuracy (IRA), which underscores its efficacy and further shows the remarkable potential of foundation models for fine-grained recognition tasks in the medical domain. Our code is available at https://github.com/inhaowu/VertFound.
VertFound: Synergizing Semantic and Spatial Understanding for Fine-grained Vertebrae Classification via Foundation Models
[ "Tang, Jinzhou", "Wu, Yinhao", "Yao, Zequan", "Li, Mingjie", "Hong, Yuan", "Yu, Dongdong", "Gao, Zhifan", "Chen, Bin", "Zhao, Shen" ]
Conference
[ "https://github.com/inhaowu/VertFound" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
854
null
https://papers.miccai.org/miccai-2024/paper/1638_paper.pdf
@InProceedings{ Che_HybridStructureOriented_MICCAI2024, author = { Chen, Lingyu and Wang, Yue and Zhao, Zhe and Liao, Hongen and Zhang, Daoqiang and Han, Haojie and Chen, Fang }, title = { { Hybrid-Structure-Oriented Transformer for Arm Musculoskeletal Ultrasound Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Segmenting complex layer structures, including subcutaneous fat, skeletal muscle, and bone in arm musculoskeletal ultrasound (MSKUS), is vital for diagnosing and monitoring the progression of Breast-Cancer-Related Lymphedema (BCRL). Nevertheless, previous research primarily focuses on individual muscle or bone segmentation in MSKUS, overlooking the intricate and hybrid-layer morphology that characterizes these structures. To address this limitation, we propose a novel approach called the hybrid-structure-oriented Transformer (HSformer), which effectively captures hierarchical structures with diverse morphology in MSKUS. Specifically, HSformer combines a hierarchical-consistency relative position encoding and a structure-biased constraint for hierarchical structure attention. Our experiments on arm MSKUS datasets demonstrate that HSformer achieves state-of-the-art performance in segmenting subcutaneous fat, skeletal muscle, and bone.
Hybrid-Structure-Oriented Transformer for Arm Musculoskeletal Ultrasound Segmentation
[ "Chen, Lingyu", "Wang, Yue", "Zhao, Zhe", "Liao, Hongen", "Zhang, Daoqiang", "Han, Haojie", "Chen, Fang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
855