Datasets:

Schema (column, dtype, observed range):

  bibtex_url                  null
  proceedings                 string    length 58–58
  bibtext                     string    length 511–974
  abstract                    string    length 92–2k
  title                       string    length 30–207
  authors                     sequence  length 1–22
  id                          string    1 class
  arxiv_id                    string    length 0–10
  GitHub                      sequence  length 1–1
  paper_page                  string    14 classes
  n_linked_authors            int64     -1–1
  upvotes                     int64     -1–1
  num_comments                int64     -1–0
  n_authors                   int64     -1–10
  Models                      sequence  length 0–4
  Datasets                    sequence  length 0–1
  Spaces                      sequence  length 0–0
  old_Models                  sequence  length 0–4
  old_Datasets                sequence  length 0–1
  old_Spaces                  sequence  length 0–0
  paper_page_exists_pre_conf  int64     0–1
  type                        string    2 classes
  unique_id                   int64     0–855

Each record below lists these fields in this order, one value per line; fields with no value (such as an absent arxiv_id or paper_page) are omitted.
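As a minimal sketch of how one might load and query a dataset with this schema using the Hugging Face `datasets` library — the repository id "someuser/miccai-2024-papers" is a placeholder, not the real location of this dataset:

```python
# Minimal sketch: loading and querying this dataset with the `datasets` library.
# The repository id below is a placeholder assumption.
from datasets import load_dataset

ds = load_dataset("someuser/miccai-2024-papers", split="train")

print(ds.features)        # should mirror the schema listed above
print(ds[0]["title"])     # title of the first record

# Count records that already had a Hugging Face paper page before the conference,
# and how many of those link a GitHub repository.
with_page = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
with_code = with_page.filter(lambda row: any(url.strip() for url in row["GitHub"]))
print(len(with_page), "with a paper page,", len(with_code), "of them with code links")
```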
null
https://papers.miccai.org/miccai-2024/paper/0704_paper.pdf
@InProceedings{ Du_CLEFT_MICCAI2024, author = { Du, Yuexi and Chang, Brian and Dvornek, Nicha C. }, title = { { CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Recent advancements in Contrastive Language-Image Pre-training (CLIP) have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poorly suited for medical applications, in which large datasets are not always available. Meanwhile, the language model prompts are mainly manually derived from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of extensive pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. The proposed parameter-efficient framework reduces the total trainable model size by 39% and the trainable language model to only 4% of the current BERT encoder.
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning
[ "Du, Yuexi", "Chang, Brian", "Dvornek, Nicha C." ]
Conference
2407.21011
[ "https://github.com/XYPB/CLEFT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
600
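The CLEFT record above builds on CLIP-style language-image contrastive pre-training. For reference, here is a minimal sketch of the standard symmetric InfoNCE objective such methods start from; it is a generic illustration, not the authors' implementation, and the embedding size and temperature are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)           # match each image to its text
    loss_t2i = F.cross_entropy(logits.t(), targets)       # and each text to its image
    return 0.5 * (loss_i2t + loss_t2i)

# Usage with random embeddings standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```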
null
https://papers.miccai.org/miccai-2024/paper/0669_paper.pdf
@InProceedings{ Par_SSAM_MICCAI2024, author = { Paranjape, Jay N. and Sikder, Shameema and Vedula, S. Swaroop and Patel, Vishal M. }, title = { { S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Medical image segmentation has traditionally been approached by training or fine-tuning the entire model to cater to any new modality or dataset. However, this approach often requires tuning a large number of parameters during training. With the introduction of the Segment Anything Model (SAM) for prompted segmentation of natural images, many efforts have been made to adapt it efficiently for medical imaging, thus reducing the training time and resources. However, these methods still require expert annotations for every image in the form of point prompts or bounding box prompts during training and inference, making them tedious to employ in practice. In this paper, we propose an adaptation technique, called S-SAM, that trains only 0.4% of SAM’s parameters while using simply the label names as prompts for producing precise masks. This not only makes tuning SAM more efficient than existing adaptation methods but also removes the burden of providing expert prompts. We evaluate S-SAM on five different modalities including endoscopic images, X-ray, ultrasound, CT, and histology images. Our experiments show that S-SAM outperforms state-of-the-art methods as well as existing SAM adaptation methods while tuning significantly fewer parameters. We release the code for S-SAM at https://github.com/JayParanjape/SVDSAM.
S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation
[ "Paranjape, Jay N.", "Sikder, Shameema", "Vedula, S. Swaroop", "Patel, Vishal M." ]
Conference
2408.06447
[ "https://github.com/JayParanjape/SVDSAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
601
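The S-SAM record above describes SVD-based parameter-efficient tuning. Below is a generic sketch of the idea: decompose a frozen weight matrix and train only a small correction to its singular values. The module name and the choice of an additive correction are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVDTunedLinear(nn.Module):
    """A frozen linear weight whose singular values get a small trainable correction.

    Generic illustration of SVD-based parameter-efficient tuning; S-SAM's actual
    adaptation of SAM's layers may differ in detail.
    """
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)        # frozen left singular vectors
        self.register_buffer("S", S)        # frozen singular values
        self.register_buffer("Vh", Vh)      # frozen right singular vectors
        self.delta_s = nn.Parameter(torch.zeros_like(S))   # only trainable tensor

    def forward(self, x):
        weight = self.U @ torch.diag(self.S + self.delta_s) @ self.Vh
        return F.linear(x, weight)

layer = SVDTunedLinear(torch.randn(64, 128))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 64 trainable values
out = layer(torch.randn(4, 128))    # -> shape (4, 64)
```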
null
https://papers.miccai.org/miccai-2024/paper/1009_paper.pdf
@InProceedings{ Xia_Customized_MICCAI2024, author = { Xia, Zhengwang and Wang, Huan and Zhou, Tao and Jiao, Zhuqing and Lu, Jianfeng }, title = { { Customized Relationship Graph Neural Network for Brain Disorder Identification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
The connectivity structure of brain networks/graphs provides insights into the segregation and integration patterns among diverse brain regions. Numerous studies have demonstrated that specific brain disorders are associated with abnormal connectivity patterns within distinct regions. Consequently, several Graph Neural Network (GNN) models have been developed to automatically identify irregular integration patterns in brain graphs. However, the inputs for these GNN-based models, namely brain networks/graphs, are typically constructed using statistical-specific metrics and cannot be trained. This limitation might render them ineffective for downstream tasks, potentially leading to suboptimal outcomes. To address this issue, we propose a Customized Relationship Graph Neural Network (CRGNN) that can bridge the gap between the graph structure and the downstream task. The proposed method can dynamically learn the optimal brain networks/graphs for each task. Specifically, we design a block that contains multiple parameterized gates to preserve causal relationships among different brain regions. In addition, we devise a novel node aggregation rule and an appropriate constraint to improve the robustness of the model. The proposed method is evaluated on two publicly available datasets, demonstrating superior performance compared to existing methods. The implementation code is available at https://github.com/NJUSTxiazw/CRGNN.
Customized Relationship Graph Neural Network for Brain Disorder Identification
[ "Xia, Zhengwang", "Wang, Huan", "Zhou, Tao", "Jiao, Zhuqing", "Lu, Jianfeng" ]
Conference
[ "https://github.com/NJUSTxiazw/CRGNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
602
null
https://papers.miccai.org/miccai-2024/paper/2477_paper.pdf
@InProceedings{ Sin_TrIND_MICCAI2024, author = { Sinha, Ashish and Hamarneh, Ghassan }, title = { { TrIND: Representing Anatomical Trees by Denoising Diffusion of Implicit Neural Fields } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Anatomical trees play a central role in clinical diagnosis and treatment planning. However, accurately representing anatomical trees is challenging due to their varying and complex topology and geometry. Traditional methods for representing tree structures, captured using medical imaging, while invaluable for visualizing vascular and bronchial networks, exhibit drawbacks in terms of limited resolution, flexibility, and efficiency. Recently, implicit neural representations (INRs) have emerged as a powerful tool for representing shapes accurately and efficiently. We propose a novel approach, TrIND, for representing anatomical trees using INR, while also capturing the distribution of a set of trees via denoising diffusion in the space of INRs. We accurately capture the intricate geometries and topologies of anatomical trees at any desired resolution. Through extensive qualitative and quantitative evaluation, we demonstrate high-fidelity tree reconstruction with arbitrary resolution yet compact storage, and versatility across anatomical sites and tree complexities. Our code is available \href{https://github.com/sfu-mial/TreeDiffusion}{here}.
TrIND: Representing Anatomical Trees by Denoising Diffusion of Implicit Neural Fields
[ "Sinha, Ashish", "Hamarneh, Ghassan" ]
Conference
[ "https://github.com/sfu-mial/TreeDiffusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
603
null
https://papers.miccai.org/miccai-2024/paper/0498_paper.pdf
@InProceedings{ Zha_CoarsetoFine_MICCAI2024, author = { Zhang, Yuhan and Huang, Kun and Yang, Xikai and Ma, Xiao and Wu, Jian and Wang, Ningli and Wang, Xi and Heng, Pheng-Ann }, title = { { Coarse-to-Fine Latent Diffusion Model for Glaucoma Forecast on Sequential Fundus Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Glaucoma is one of the leading causes of irreversible blindness worldwide. Predicting the future status of glaucoma is essential for early detection and timely intervention of potential patients and avoiding the outcome of blindness. Based on historical fundus images from patients, existing glaucoma forecast methods directly predict the probability of developing glaucoma in the future. In this paper, we propose a novel glaucoma forecast method called Coarse-to-Fine Latent Diffusion Model (C2F-LDM) to generatively predict the possible features at any future time point in the latent space based on sequential fundus images. After obtaining the predicted features, we can detect the probability of developing glaucoma and reconstruct future fundus images for visualization. Since all fundus images in the sequence are sampled at irregular time points, we propose a time-adaptive sequence encoder that encodes the sequential fundus images with their irregular time intervals as the historical condition to guide the latent diffusion model, making the model capable of capturing the status changes of glaucoma over time. Furthermore, a coarse-to-fine diffusion strategy improves the quality of the predicted features. We verify C2F-LDM on the public glaucoma forecast dataset SIGF. C2F-LDM presents better quantitative results than other state-of-the-art forecast methods and provides visual results for qualitative evaluations.
Coarse-to-Fine Latent Diffusion Model for Glaucoma Forecast on Sequential Fundus Images
[ "Zhang, Yuhan", "Huang, Kun", "Yang, Xikai", "Ma, Xiao", "Wu, Jian", "Wang, Ningli", "Wang, Xi", "Heng, Pheng-Ann" ]
Conference
[ "https://github.com/ZhangYH0502/C2F-LDM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
604
null
https://papers.miccai.org/miccai-2024/paper/1339_paper.pdf
@InProceedings{ Zep_Laplacian_MICCAI2024, author = { Zepf, Kilian and Wanna, Selma and Miani, Marco and Moore, Juston and Frellsen, Jes and Hauberg, Søren and Warburg, Frederik and Feragen, Aasa }, title = { { Laplacian Segmentation Networks Improve Epistemic Uncertainty Quantification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Image segmentation relies heavily on neural networks which are known to be overconfident, especially when making predictions on out-of-distribution (OOD) images. This is a common scenario in the medical domain due to variations in equipment, acquisition sites, or image corruptions. This work addresses the challenge of OOD detection by proposing Laplacian Segmentation Networks (LSN): methods which jointly model epistemic (model) and aleatoric (data) uncertainty for OOD detection. In doing so, we propose the first Laplace approximation of the weight posterior that scales to large neural networks with skip connections that have high-dimensional outputs. We demonstrate on three datasets that the LSN-modeled parameter distributions, in combination with suitable uncertainty measures, give superior OOD detection.
Laplacian Segmentation Networks Improve Epistemic Uncertainty Quantification
[ "Zepf, Kilian", "Wanna, Selma", "Miani, Marco", "Moore, Juston", "Frellsen, Jes", "Hauberg, Søren", "Warburg, Frederik", "Feragen, Aasa" ]
Conference
2303.13123
[ "https://github.com/kilianzepf/laplacian_segmentation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
605
null
https://papers.miccai.org/miccai-2024/paper/1818_paper.pdf
@InProceedings{ Guo_FreeSurGS_MICCAI2024, author = { Guo, Jiaxin and Wang, Jiangliu and Kang, Di and Dong, Wenzhen and Wang, Wenting and Liu, Yun-hui }, title = { { Free-SurGS: SfM-Free 3D Gaussian Splatting for Surgical Scene Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Real-time 3D reconstruction of surgical scenes plays a vital role in computer-assisted surgery, holding promise to enhance surgeons’ visibility. Recent advancements in 3D Gaussian Splatting (3DGS) have shown great potential for real-time novel view synthesis of general scenes, which relies on accurate poses and point clouds generated by Structure-from-Motion (SfM) for initialization. However, 3DGS with SfM fails to recover accurate camera poses and geometry in surgical scenes due to the challenges of minimal textures and photometric inconsistencies. To tackle this problem, we propose the first SfM-free 3DGS-based method for surgical scene reconstruction, which jointly optimizes the camera poses and scene representation. Based on video continuity, the key idea of our method is to exploit immediate optical flow priors to guide the projection flow derived from 3D Gaussians. Unlike most previous methods relying on photometric loss only, we formulate the pose estimation problem as minimizing the flow loss between the projection flow and optical flow. A consistency check is further introduced to filter flow outliers by detecting the rigid and reliable points that satisfy the epipolar geometry. During 3D Gaussian optimization, we randomly sample frames to optimize the scene representations and grow the 3D Gaussians progressively. Experiments on the SCARED dataset demonstrate that our method outperforms existing methods in novel view synthesis and pose estimation with high efficiency.
Free-SurGS: SfM-Free 3D Gaussian Splatting for Surgical Scene Reconstruction
[ "Guo, Jiaxin", "Wang, Jiangliu", "Kang, Di", "Dong, Wenzhen", "Wang, Wenting", "Liu, Yun-hui" ]
Conference
2407.02918
[ "https://github.com/wrld/Free-SurGS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
606
null
https://papers.miccai.org/miccai-2024/paper/1348_paper.pdf
@InProceedings{ Zen_Tackling_MICCAI2024, author = { Zeng, Shuang and Guo, Pengxin and Wang, Shuai and Wang, Jianbo and Zhou, Yuyin and Qu, Liangqiong }, title = { { Tackling Data Heterogeneity in Federated Learning via Loss Decomposition } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Federated Learning (FL) is a rising approach towards collaborative and privacy-preserving machine learning where large-scale medical datasets remain localized to each client. However, the issue of data heterogeneity among clients often compels local models to diverge, leading to suboptimal global models. To mitigate the impact of data heterogeneity on FL performance, we start by analyzing how FL training influences FL performance, decomposing the global loss into three terms: local loss, distribution shift loss and aggregation loss. Remarkably, our loss decomposition reveals that existing local training-based FL methods attempt to further reduce the distribution shift loss, while the global aggregation-based FL methods propose better aggregation strategies to reduce the aggregation loss. Nevertheless, a comprehensive joint effort to minimize all three terms is currently limited in the literature, leading to subpar performance when dealing with data heterogeneity challenges. To fill this gap, we propose a novel FL method based on global loss decomposition, called FedLD, to jointly reduce these three loss terms. Our FedLD involves a margin control regularization in local training to reduce the distribution shift loss, and a principal gradient-based server aggregation strategy to reduce the aggregation loss. Notably, under different levels of data heterogeneity, our strategies achieve better and more robust performance on retinal and chest X-ray classification compared to other FL algorithms. Our code is available at https://github.com/Zeng-Shuang/FedLD.
Tackling Data Heterogeneity in Federated Learning via Loss Decomposition
[ "Zeng, Shuang", "Guo, Pengxin", "Wang, Shuai", "Wang, Jianbo", "Zhou, Yuyin", "Qu, Liangqiong" ]
Conference
2408.12300
[ "https://github.com/Zeng-Shuang/FedLD" ]
https://huggingface.co/papers/2408.12300
1
0
0
6
[]
[]
[]
[]
[]
[]
1
Poster
607
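The FedLD abstract above splits the global federated objective into local, distribution shift, and aggregation terms. One schematic way to write such a three-term split is the telescoping identity below, where L is the loss on the global data distribution, L_k the loss on client k's distribution, theta_k the locally trained model, p_k the aggregation weight, and theta_agg the aggregated model; this illustrates the idea only and is not the paper's exact decomposition.

```latex
% Schematic three-term split of the global federated loss (illustrative only).
L(\theta_{\mathrm{agg}})
  = \underbrace{\sum_k p_k\, L_k(\theta_k)}_{\text{local loss}}
  + \underbrace{\sum_k p_k \bigl[ L(\theta_k) - L_k(\theta_k) \bigr]}_{\text{distribution shift loss}}
  + \underbrace{\Bigl[ L(\theta_{\mathrm{agg}}) - \sum_k p_k\, L(\theta_k) \Bigr]}_{\text{aggregation loss}}
```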
null
https://papers.miccai.org/miccai-2024/paper/2798_paper.pdf
@InProceedings{ Jia_Intrapartum_MICCAI2024, author = { Jiang, Jianmei and Wang, Huijin and Bai, Jieyun and Long, Shun and Chen, Shuangping and Campello, Victor M. and Lekadir, Karim }, title = { { Intrapartum Ultrasound Image Segmentation of Pubic Symphysis and Fetal Head Using Dual Student-Teacher Framework with CNN-ViT Collaborative Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The segmentation of the pubic symphysis and fetal head (PSFH) constitutes a pivotal step in monitoring labor progression and identifying potential delivery complications. Despite the advances in deep learning, the lack of annotated medical images hinders the training of segmentation models. Traditional semi-supervised learning approaches primarily utilize a unified network model based on Convolutional Neural Networks (CNNs) and apply consistency regularization to mitigate the reliance on extensive annotated data. However, these methods often fall short in capturing the discriminative features of unlabeled data and in delineating the long-range dependencies inherent in the ambiguous boundaries of the PSFH within ultrasound images. To address these limitations, we introduce a novel framework, the Dual-Student and Teacher Combining CNN and Transformer (DSTCT), which synergistically integrates the capabilities of CNNs and Transformers. Our framework comprises a tripartite architecture featuring a Vision Transformer (ViT) as the ‘teacher’ and two ‘student’ models — one ViT and one CNN. This dual-student setup enables mutual supervision through the generation of both hard and soft pseudo-labels, with the consistency in their predictions being refined by minimizing the classifier determinacy discrepancy. The teacher model further reinforces learning within this architecture through the imposition of consistency regularization constraints. To augment the generalization abilities of our approach, we employ a blend of data and model perturbation techniques. Comprehensive evaluations on the benchmark dataset of the PSFH Segmentation Grand Challenge at MICCAI 2023 demonstrate that our DSTCT framework outperforms 10 contemporary semi-supervised segmentation methods.
Intrapartum Ultrasound Image Segmentation of Pubic Symphysis and Fetal Head Using Dual Student-Teacher Framework with CNN-ViT Collaborative Learning
[ "Jiang, Jianmei", "Wang, Huijin", "Bai, Jieyun", "Long, Shun", "Chen, Shuangping", "Campello, Victor M.", "Lekadir, Karim" ]
Conference
2409.06928
[ "https://github.com/jjm1589/DSTCT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
608
null
https://papers.miccai.org/miccai-2024/paper/0134_paper.pdf
@InProceedings{ Zha_Variational_MICCAI2024, author = { Zhang, Qi and Liu, Xiujian and Zhang, Heye and Xu, Chenchu and Yang, Guang and Yuan, Yixuan and Tan, Tao and Gao, Zhifan }, title = { { Variational Field Constraint Learning for Degree of Coronary Artery Ischemia Assessment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Fractional flow reserve evaluation plays a crucial role in diagnosing ischemic coronary artery disease. Machine learning-based fractional flow reserve evaluation has become the most important method due to its effectiveness and high computational efficiency. However, it still suffers from the lack of a proper description of coronary artery fluid. This study presents a variational field constraint learning method for assessing fractional flow reserve from digital subtraction angiography images. Our method offers a promising approach by integrating governing equations and boundary conditions as unified constraints. Moreover, we also provide a multi-vessel neural network for predicting FFR in the coronary artery. By leveraging a holistic consideration of the fluid dynamics, our method achieves more accurate fractional flow reserve prediction compared to existing methods. Our VFCLM is evaluated on 8000 virtual subjects produced by 1D hemodynamic models and 180 in-vivo cases. VFCLM achieves an MAE of 1.17 mmHg and a MAPE of 1.20% for quantification.
Variational Field Constraint Learning for Degree of Coronary Artery Ischemia Assessment
[ "Zhang, Qi", "Liu, Xiujian", "Zhang, Heye", "Xu, Chenchu", "Yang, Guang", "Yuan, Yixuan", "Tan, Tao", "Gao, Zhifan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
609
null
https://papers.miccai.org/miccai-2024/paper/1595_paper.pdf
@InProceedings{ She_APSUSCT_MICCAI2024, author = { Sheng, Yi and Wang, Hanchen and Liu, Yipei and Yang, Junhuan and Jiang, Weiwen and Lin, Youzuo and Yang, Lei }, title = { { APS-USCT: Ultrasound Computed Tomography on Sparse Data via AI-Physic Synergy } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Ultrasound computed tomography (USCT) is a promising technique that achieves superior medical imaging reconstruction resolution by fully leveraging waveform information, outperforming conventional ultrasound methods. Despite its advantages, high-quality USCT reconstruction relies on extensive data acquisition by a large number of transducers, leading to increased costs, computational demands, extended patient scanning times, and manufacturing complexities. To mitigate these issues, we propose a new USCT method called APS-USCT, which facilitates imaging with sparse data, substantially reducing dependence on high-cost dense data acquisition. Our APS-USCT method consists of two primary components: APS-wave and APS-FWI. The APS-wave component, an encoder-decoder system, preprocesses the waveform data, converting sparse data into dense waveforms to augment sample density prior to reconstruction. The APS-FWI component, utilizing the InversionNet, directly reconstructs the speed of sound (SOS) from the ultrasound waveform data. We further improve the model’s performance by incorporating Squeeze-and-Excitation (SE) Blocks and source encoding techniques. Testing our method on a breast cancer dataset yielded promising results. It demonstrated outstanding performance with an average Structural Similarity Index (SSIM) of 0.8431. Notably, over 82% of samples achieved an SSIM above 0.8, with nearly 61% exceeding 0.85, highlighting the significant potential of our approach in improving USCT image reconstruction by efficiently utilizing sparse data.
APS-USCT: Ultrasound Computed Tomography on Sparse Data via AI-Physic Synergy
[ "Sheng, Yi", "Wang, Hanchen", "Liu, Yipei", "Yang, Junhuan", "Jiang, Weiwen", "Lin, Youzuo", "Yang, Lei" ]
Conference
2407.14564
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
610
null
https://papers.miccai.org/miccai-2024/paper/1358_paper.pdf
@InProceedings{ Fan_Aligning_MICCAI2024, author = { Fang, Xiao and Lin, Yi and Zhang, Dong and Cheng, Kwang-Ting and Chen, Hao }, title = { { Aligning Medical Images with General Knowledge from Large Language Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Pre-trained large vision-language models (VLMs) like CLIP have revolutionized visual representation learning using natural language as supervisions, and have demonstrated promising generalization ability. In this work, we propose ViP, a novel visual symptom-guided prompt learning framework for medical image analysis, which facilitates general knowledge transfer from CLIP. ViP consists of two key components: a visual symptom generator (VSG) and a dual-prompt network. Specifically, VSG aims to extract explicable visual symptoms from pre-trained large language models, while the dual-prompt network uses these visual symptoms to guide the training on two learnable prompt modules, i.e., context prompt and merge prompt, to better adapt our framework to medical image analysis via large VLMs. Extensive experimental results demonstrate that ViP can achieve competitive performance compared to the state-of-the-art methods on two challenging datasets. We provide the source code in the supplementary material.
Aligning Medical Images with General Knowledge from Large Language Models
[ "Fang, Xiao", "Lin, Yi", "Zhang, Dong", "Cheng, Kwang-Ting", "Chen, Hao" ]
Conference
2409.00341
[ "https://github.com/xiaofang007/ViP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
611
null
https://papers.miccai.org/miccai-2024/paper/1871_paper.pdf
@InProceedings{ Wåh_Explainable_MICCAI2024, author = { Wåhlstrand Skärström, Victor and Johansson, Lisa and Alvén, Jennifer and Lorentzon, Mattias and Häggström, Ida }, title = { { Explainable vertebral fracture analysis with uncertainty estimation using differentiable rule-based classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
We present a novel method for explainable vertebral fracture assessment (XVFA) in low-dose radiographs using deep neural networks, incorporating vertebra detection and keypoint localization with uncertainty estimates. We incorporate Genant’s semi-quantitative criteria as a differentiable rule-based means of classifying both vertebra fracture grade and morphology. Unlike previous work, XVFA provides explainable classifications relatable to current clinical methodology, as well as uncertainty estimations, while at the same time surpassing state-of-the-art methods with a vertebra-level sensitivity of 93% and end-to-end AUC of 97% in a challenging setting. Moreover, we compare intra-reader agreement with model uncertainty estimates, with model reliability on par with human annotators.
Explainable vertebral fracture analysis with uncertainty estimation using differentiable rule-based classification
[ "Wåhlstrand Skärström, Victor", "Johansson, Lisa", "Alvén, Jennifer", "Lorentzon, Mattias", "Häggström, Ida" ]
Conference
2407.02926
[ "https://github.com/waahlstrand/xvfa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
612
null
https://papers.miccai.org/miccai-2024/paper/3401_paper.pdf
@InProceedings{ Dha_VideoCutMix_MICCAI2024, author = { Dhanakshirur, Rohan Raju and Tyagi, Mrinal and Baby, Britty and Suri, Ashish and Kalra, Prem and Arora, Chetan }, title = { { VideoCutMix: Temporal Segmentation of Surgical Videos in Scarce Data Scenarios } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Temporal Action Segmentation (TAS) of a surgical video is an important first step for a variety of video analysis tasks such as skills assessment, surgical assistance and robotic surgeries. Limited data availability due to costly acquisition and annotation makes data augmentation imperative in such a scenario. However, being direct extensions of image-augmentation strategies, most video augmentation techniques disturb the optical flow information while generating an augmented sample, which makes training difficult. In this paper, we propose a simple-yet-efficient, flow-consistent, video-specific data augmentation technique suitable for TAS in scarce data conditions. This is the first augmentation technique for data-scarce TAS in surgical scenarios. We observe that TAS errors commonly occur at the action boundaries due to their scarcity in the datasets. Hence, we propose a novel strategy that generates pseudo-action boundaries without affecting optical flow elsewhere. Further, we also propose a sample-hardness-inspired curriculum where we train the model on easy samples first with only a single label observed in the temporal window. Additionally, we contribute the first-ever non-robotic Neuro-endoscopic Trainee Simulator (NETS) dataset for the task of TAS. We validate our approach on the proposed NETS, along with the publicly available JIGSAWS and Cholec T-50 datasets. Compared to training without any data augmentation, our technique yields an average improvement in edit score of 7.89%, 5.53%, and 2.80% on the three datasets, respectively. The reported numbers are improvements averaged over 9 state-of-the-art (SOTA) action segmentation models using two different temporal feature extractors (I3D and VideoMAE). On average, the proposed technique outperforms the best-performing SOTA data augmentation technique by 3.94%, thus setting a new SOTA for action segmentation on each of these datasets. https://aineurosurgery.github.io/VideoCutMix
VideoCutMix: Temporal Segmentation of Surgical Videos in Scarce Data Scenarios
[ "Dhanakshirur, Rohan Raju", "Tyagi, Mrinal", "Baby, Britty", "Suri, Ashish", "Kalra, Prem", "Arora, Chetan" ]
Conference
[ "https://github.com/AINeurosurgery/VideoCutMix" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
613
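The VideoCutMix abstract above centers on creating pseudo action boundaries by mixing clips in time. Below is a simplified sketch of a temporal cut-and-mix over frame-wise features and labels; the actual boundary selection and flow-consistency handling of VideoCutMix are not reproduced, and the shapes are illustrative.

```python
import numpy as np

def temporal_cutmix(feats_a, labels_a, feats_b, labels_b, rng=None):
    """Splice the tail of clip B onto the head of clip A at random cut points,
    creating a pseudo action boundary at the junction.

    feats_*: (T, D) per-frame features; labels_*: (T,) frame-wise action labels.
    Simplified illustration only.
    """
    rng = rng or np.random.default_rng()
    cut_a = int(rng.integers(1, len(feats_a)))        # keep frames [0, cut_a) of A
    cut_b = int(rng.integers(0, len(feats_b)))        # append frames [cut_b, end) of B
    feats = np.concatenate([feats_a[:cut_a], feats_b[cut_b:]], axis=0)
    labels = np.concatenate([labels_a[:cut_a], labels_b[cut_b:]], axis=0)
    return feats, labels

feats, labels = temporal_cutmix(np.random.randn(100, 64), np.zeros(100, dtype=int),
                                np.random.randn(80, 64), np.ones(80, dtype=int))
```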
null
https://papers.miccai.org/miccai-2024/paper/0900_paper.pdf
@InProceedings{ Zhu_LowRank_MICCAI2024, author = { Zhu, Vince and Ji, Zhanghexuan and Guo, Dazhou and Wang, Puyang and Xia, Yingda and Lu, Le and Ye, Xianghua and Zhu, Wei and Jin, Dakai }, title = { { Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Deep segmentation networks achieve high performance when trained on specific datasets. However, in clinical practice, it is often desirable that pretrained segmentation models can be dynamically extended to enable segmenting new organs without access to previous training datasets or without training from scratch. This would ensure a much more efficient model development and deployment paradigm while accounting for patient privacy and data storage issues. This clinically preferred process can be viewed as a continual semantic segmentation (CSS) problem. Previous CSS works would either experience catastrophic forgetting or lead to unaffordable memory costs as models expand. In this work, we propose a new continual whole-body organ segmentation model with light-weighted low-rank adaptation (LoRA). We first train and freeze a pyramid vision transformer (PVT) base segmentation model on the initial task, then continually add light-weighted trainable LoRA parameters to the frozen model for each new learning task. Through a holistic exploration of architecture modifications, we identify the three most important layers (i.e., patch-embedding, multi-head attention and feed forward layers) that are critical in adapting to the new segmentation tasks, while keeping the majority of the pre-trained parameters fixed. Our proposed model continually segments new organs without catastrophic forgetting while maintaining a low parameter increase rate. Continually trained and tested on four datasets covering different body parts of a total of 121 organs, results show that our model achieves high segmentation accuracy, closely reaching the PVT and nnUNet upper bounds, and significantly outperforms other regularization-based CSS methods. Compared to the leading architecture-based CSS method, our model has a substantially lower parameter increase rate (16.7\% versus 96.7\%) while achieving comparable performance.
Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation
[ "Zhu, Vince", "Ji, Zhanghexuan", "Guo, Dazhou", "Wang, Puyang", "Xia, Yingda", "Lu, Le", "Ye, Xianghua", "Zhu, Wei", "Jin, Dakai" ]
Conference
2410.04689
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
614
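The record above attaches light-weighted LoRA parameters to patch-embedding, attention, and feed-forward layers of a frozen backbone. Here is a generic LoRA adapter around a frozen linear layer to make the mechanism concrete; the wrapper name, rank, and scaling are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (generic LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.t() @ self.lora_b.t()) * self.scaling

layer = LoRALinear(nn.Linear(256, 256), rank=8)
y = layer(torch.randn(2, 196, 256))            # token sequence, e.g. ViT features
```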
null
https://papers.miccai.org/miccai-2024/paper/1135_paper.pdf
@InProceedings{ Liu_Auxiliary_MICCAI2024, author = { Liu, Yikang and Zhao, Lin and Chen, Eric Z. and Chen, Xiao and Chen, Terrence and Sun, Shanhui }, title = { { Auxiliary Input in Training: Incorporating Catheter Features into Deep Learning Models for ECG-Free Dynamic Coronary Roadmapping } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Dynamic coronary roadmapping is a technology that overlays the vessel maps (the “roadmap”) extracted from an offline image sequence of X-ray angiography onto a live stream of X-ray fluoroscopy in real-time. It aims to offer navigational guidance for interventional surgeries without the need for repeated contrast agent injections, thereby reducing the risks associated with radiation exposure and kidney failure. The precision of the roadmaps is contingent upon the accurate alignment of angiographic and fluoroscopic images based on their cardiac phases, as well as precise catheter tip tracking. The former ensures the selection of a roadmap that closely matches the vessel shape in the current frame, while the latter uses catheter tips as reference points to adjust for translational motion between the roadmap and the present vessel tree. Training deep learning models for both tasks is challenging and underexplored. However, incorporating catheter features into the models could offer substantial benefits, given humans heavily rely on catheters to complete the tasks. To this end, we introduce a simple but effective method, auxiliary input in training (AIT), and demonstrate that it enhances model performance across both tasks, outperforming baseline methods in knowledge incorporation and transfer learning.
Auxiliary Input in Training: Incorporating Catheter Features into Deep Learning Models for ECG-Free Dynamic Coronary Roadmapping
[ "Liu, Yikang", "Zhao, Lin", "Chen, Eric Z.", "Chen, Xiao", "Chen, Terrence", "Sun, Shanhui" ]
Conference
2408.15947
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
615
null
https://papers.miccai.org/miccai-2024/paper/2774_paper.pdf
@InProceedings{ Wan_AClinicaloriented_MICCAI2024, author = { Wang, Yaqi and Chen, Leqi and Hou, Qingshan and Cao, Peng and Yang, Jinzhu and Liu, Xiaoli and Zaiane, Osmar R. }, title = { { A Clinical-oriented Lightweight Network for High-resolution Medical Image Enhancement } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Medical images captured in less-than-optimal conditions may suffer from quality degradation, such as blur, artifacts, and low lighting, which potentially leads to misdiagnosis. Unfortunately, state-of-the-art medical image enhancement methods face challenges in both high-resolution image quality enhancement and local distinct anatomical structure preservation. To address these issues, we propose a Clinical-oriented High-resolution Lightweight Medical Image Enhancement Network, termed CHLNet, which proficiently addresses high-resolution medical image enhancement, detailed pathological characteristics, and lightweight network design simultaneously. More specifically, CHLNet comprises two main components: 1) High-resolution Assisted Quality Enhancement Network for removing global low-quality factors in high-resolution images thus enhancing overall image quality; 2) High-quality-semantic Guided Quality Enhancement Network for capturing semantic knowledge from high-quality images such that detailed structure preservation is enforced. Moreover, thanks to its lightweight design, CHLNet can be easily deployed on medical edge devices. Extensive experiments on three public medical image datasets demonstrate the effectiveness and superiority of CHLNet over the state-of-the-art.
A Clinical-oriented Lightweight Network for High-resolution Medical Image Enhancement
[ "Wang, Yaqi", "Chen, Leqi", "Hou, Qingshan", "Cao, Peng", "Yang, Jinzhu", "Liu, Xiaoli", "Zaiane, Osmar R." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
616
null
https://papers.miccai.org/miccai-2024/paper/0914_paper.pdf
@InProceedings{ Kim_MaskFree_MICCAI2024, author = { Kim, Hyeon Bae and Ahn, Yong Hyun and Kim, Seong Tae }, title = { { Mask-Free Neuron Concept Annotation for Interpreting Neural Networks in Medical Domain } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Recent advancements in deep neural networks have shown promise in aiding disease diagnosis and medical decision-making. However, ensuring transparent decision-making processes of AI models in compliance with regulations requires a comprehensive understanding of the model’s internal workings. Yet previous methods heavily rely on expensive pixel-wise annotated datasets for interpreting the model, presenting a significant drawback in medical domains. In this paper, we propose a novel medical neuron concept annotation method, named Mask-free Medical Model Interpretation (MAMMI), which addresses these challenges. By using a vision-language model, our method relaxes the need for pixel-level masks for neuron concept annotation. MAMMI achieves superior performance compared to other interpretation methods, demonstrating its efficacy in providing rich representations for neurons in medical image analysis. Our experiments on a model trained on NIH chest X-rays validate the effectiveness of MAMMI, showcasing its potential for transparent clinical decision-making in the medical domain. The code is available at https://github.com/ailab-kyunghee/MAMMI.
Mask-Free Neuron Concept Annotation for Interpreting Neural Networks in Medical Domain
[ "Kim, Hyeon Bae", "Ahn, Yong Hyun", "Kim, Seong Tae" ]
Conference
2407.11375
[ "https://github.com/ailab-kyunghee/MAMMI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
617
null
https://papers.miccai.org/miccai-2024/paper/0762_paper.pdf
@InProceedings{ Shi_MaskEnhanced_MICCAI2024, author = { Shi, Hairong and Han, Songhao and Huang, Shaofei and Liao, Yue and Li, Guanbin and Kong, Xiangxing and Zhu, Hua and Wang, Xiaomu and Liu, Si }, title = { { Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Tumor lesion segmentation on CT or MRI images plays a critical role in cancer diagnosis and treatment planning. Considering the inherent differences in tumor lesion segmentation data across various medical imaging modalities and equipment, integrating medical knowledge into the Segment Anything Model (SAM) presents promising capability due to its versatility and generalization potential. Recent studies have attempted to enhance SAM with medical expertise by pre-training on large-scale medical segmentation datasets. However, challenges still exist in 3D tumor lesion segmentation owing to tumor complexity and the imbalance in foreground and background regions. Therefore, we introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation. We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks, facilitating the generation of more precise segmentation masks. Furthermore, an iterative refinement scheme is implemented in M-SAM to refine the segmentation masks progressively, leading to improved performance. Extensive experiments on seven tumor lesion segmentation datasets indicate that our M-SAM not only achieves high segmentation accuracy but also exhibits robust generalization. The code is available at https://github.com/nanase1025/M-SAM.
Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation
[ "Shi, Hairong", "Han, Songhao", "Huang, Shaofei", "Liao, Yue", "Li, Guanbin", "Kong, Xiangxing", "Zhu, Hua", "Wang, Xiaomu", "Liu, Si" ]
Conference
2403.05912
[ "https://github.com/nanase1025/M-SAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
618
null
https://papers.miccai.org/miccai-2024/paper/0308_paper.pdf
@InProceedings{ Gao_CrossDimensional_MICCAI2024, author = { Gao, Fei and Wang, Siwen and Zhang, Fandong and Zhou, Hong-Yu and Wang, Yizhou and Wang, Churan and Yu, Gang and Yu, Yizhou }, title = { { Cross-Dimensional Medical Self-Supervised Representation Learning Based on a Pseudo-3D Transformation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Medical image analysis suffers from a shortage of data, whether annotated or not. This becomes even more pronounced when it comes to 3D medical images. Self-Supervised Learning (SSL) can partially ease this situation by utilizing unlabeled data. However, most existing SSL methods can only make use of data in a single dimensionality (e.g. 2D or 3D), and are incapable of enlarging the training dataset by using data with differing dimensionalities jointly. In this paper, we propose a new cross-dimensional SSL framework based on a pseudo-3D transformation (CDSSL-P3D), that can leverage both 2D and 3D data for joint pre-training. Specifically, we introduce an image transformation based on the im2col algorithm, which converts 2D images into a format consistent with 3D data. This transformation enables seamless integration of 2D and 3D data, and facilitates cross-dimensional self-supervised learning for 3D medical image analysis. We run extensive experiments on 13 downstream tasks, including 2D and 3D classification and segmentation. The results indicate that our CDSSL-P3D achieves superior performance, outperforming other advanced SSL methods.
Cross-Dimensional Medical Self-Supervised Representation Learning Based on a Pseudo-3D Transformation
[ "Gao, Fei", "Wang, Siwen", "Zhang, Fandong", "Zhou, Hong-Yu", "Wang, Yizhou", "Wang, Churan", "Yu, Gang", "Yu, Yizhou" ]
Conference
2406.00947
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
619
null
https://papers.miccai.org/miccai-2024/paper/2686_paper.pdf
@InProceedings{ Fad_EchoFM_MICCAI2024, author = { Fadnavis, Shreyas and Parmar, Chaitanya and Emaminejad, Nastaran and Ulloa Cerna, Alvaro and Malik, Areez and Selej, Mona and Mansi, Tommaso and Dunnmon, Preston and Yardibi, Tarik and Standish, Kristopher and Damasceno, Pablo F. }, title = { { EchoFM: A View-Independent Echocardiogram Model for the Detection of Pulmonary Hypertension } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Transthoracic Echocardiography (TTE) is the most widely-used screening method for the detection of pulmonary hypertension (PH), a life-threatening cardiopulmonary disorder that requires accurate and timely detection for effective management. Automated PH risk detection from TTE can flag subtle indicators of PH that might be easily missed, thereby decreasing variability between operators and enhancing the positive predictive value of the screening test. Previous algorithms for assessing PH risk still rely on pre-identified, single TTE views which might ignore useful information contained in other recordings. Additionally, these methods focus on discerning PH from healthy controls, limiting their utility as a tool to differentiate PH from conditions that mimic its cardiovascular or respiratory presentation. To address these issues, we propose EchoFM, an architecture that combines self-supervised learning (SSL) and a transformer model for view-independent detection of PH from TTE. EchoFM 1) incorporates a powerful encoder for feature extraction from frames, 2) overcomes the need for explicit TTE view classification by merging features from all available views, 3) uses a transformer to attend to frames of interest without discarding others, and 4) is trained on a realistic clinical dataset which includes mimicking conditions as controls. Extensive experimentation demonstrates that EchoFM significantly improves PH risk detection over state-of-the-art Convolutional Neural Networks (CNNs).
EchoFM: A View-Independent Echocardiogram Model for the Detection of Pulmonary Hypertension
[ "Fadnavis, Shreyas", "Parmar, Chaitanya", "Emaminejad, Nastaran", "Ulloa Cerna, Alvaro", "Malik, Areez", "Selej, Mona", "Mansi, Tommaso", "Dunnmon, Preston", "Yardibi, Tarik", "Standish, Kristopher", "Damasceno, Pablo F." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
620
null
https://papers.miccai.org/miccai-2024/paper/0740_paper.pdf
@InProceedings{ Doe_Selfsupervised_MICCAI2024, author = { Doerrich, Sebastian and Di Salvo, Francesco and Ledig, Christian }, title = { { Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Despite notable advancements, the integration of deep learning (DL) techniques into impactful clinical applications, particularly in the realm of digital histopathology, has been hindered by challenges associated with achieving robust generalization across diverse imaging domains and characteristics. Traditional mitigation strategies in this field such as data augmentation and stain color normalization have proven insufficient in addressing this limitation, necessitating the exploration of alternative methodologies. To this end, we propose a novel generative method for domain generalization in histopathology images. Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches and seamlessly infuse them into the original images, thereby creating novel, synthetic images with diverse attributes. By enriching the dataset with such synthesized images, we aim to enhance its holistic nature, facilitating improved generalization of DL models to unseen domains. Extensive experiments conducted on two distinct histopathology datasets demonstrate the effectiveness of our proposed approach, outperforming the state of the art substantially, on the Camelyon17-WILDS challenge dataset (+2%) and on a second epithelium-stroma dataset (+26%). Furthermore, we emphasize our method’s ability to readily scale with increasingly available unlabeled data samples and more complex, higher parametric architectures. Source code is available at github.com/sdoerrich97/vits-are-generative-models.
Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization
[ "Doerrich, Sebastian", "Di Salvo, Francesco", "Ledig, Christian" ]
Conference
2407.02900
[ "https://github.com/sdoerrich97/vits-are-generative-models" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
621
null
https://papers.miccai.org/miccai-2024/paper/2796_paper.pdf
@InProceedings{ Kim_Best_MICCAI2024, author = { Kim, SaeHyun and Choi, Yongjin and Na, Jincheol and Song, In-Seok and Lee, You-Sun and Hwang, Bo-Yeon and Lim, Ho-Kyung and Baek, Seung Jun }, title = { { Best of Both Modalities: Fusing CBCT and Intraoral Scan Data into a Single Tooth Image } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Cone-Beam CT (CBCT) and Intraoral Scan (IOS) are dental imaging techniques widely used for surgical planning and simulation. However, the spatial resolution of crowns is low in CBCT, and roots are not visible in IOS. We propose to take the best of both modalities: a seamless fusion of the crown from IOS and the root from CBCT into a single image in a watertight mesh, unlike prior works that compromise the resolution or simply overlay two images. The main challenges are aligning two images (registration) and fusing them (stitching) despite a large gap in the spatial resolution between two modalities. For effective registration, we propose centroid matching followed by coarse- and fine-registration based on the point-to-plane ICP method. Next, stitching of registered images is done to create a watertight mesh, for which we recursively interpolate the boundary points to seamlessly fill the gap between the registered images. Experiments show that the proposed method incurs low registration error, and the fused images are of high quality and accuracy according to the evaluation by experts.
Best of Both Modalities: Fusing CBCT and Intraoral Scan Data into a Single Tooth Image
[ "Kim, SaeHyun", "Choi, Yongjin", "Na, Jincheol", "Song, In-Seok", "Lee, You-Sun", "Hwang, Bo-Yeon", "Lim, Ho-Kyung", "Baek, Seung Jun" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
622
null
https://papers.miccai.org/miccai-2024/paper/3065_paper.pdf
@InProceedings{ Liu_ACLNet_MICCAI2024, author = { Liu, Chao and Yu, Xueqing and Wang, Dingyu and Jiang, Tingting }, title = { { ACLNet: A Deep Learning Model for ACL Rupture Classification Combined with Bone Morphology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Magnetic Resonance Imaging (MRI) is widely used in diagnosing anterior cruciate ligament (ACL) injuries due to its ability to provide detailed image data. However, existing deep learning approaches often overlook additional factors beyond the image itself. In this study, we aim to bridge this gap by exploring the relationship between ACL rupture and the bone morphology of the femur and tibia. Leveraging extensive clinical experience, we acknowledge the significance of this morphological data, which is not readily observed manually. To effectively incorporate this vital information, we introduce ACLNet, a novel model that combines the convolutional representation of MRI images with the transformer representation of bone morphological point clouds. This integration significantly enhances ACL injury predictions by leveraging both imaging and geometric data. Our methodology demonstrated an enhancement in diagnostic precision on the in-house dataset compared to image-only methods, elevating the accuracy from 87.59% to 92.57%. This strategy of utilizing implicitly relevant information to enhance performance holds promise for a variety of medical-related tasks.
ACLNet: A Deep Learning Model for ACL Rupture Classification Combined with Bone Morphology
[ "Liu, Chao", "Yu, Xueqing", "Wang, Dingyu", "Jiang, Tingting" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
623
null
https://papers.miccai.org/miccai-2024/paper/1596_paper.pdf
@InProceedings{ Lai_From_MICCAI2024, author = { Lai, Yuxiang and Chen, Xiaoxi and Wang, Angtian and Yuille, Alan and Zhou, Zongwei }, title = { { From Pixel to Cancer: Cellular Automata in Computed Tomography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
AI for cancer detection encounters the bottleneck of data scarcity, annotation difficulty, and low prevalence of early tumors. Tumor synthesis seeks to create artificial tumors in medical images, which can greatly diversify the data and annotations for AI training. However, current tumor synthesis approaches are not applicable across different organs due to their need for specific expertise and design. This paper establishes a set of generic rules to simulate tumor development. Each cell (pixel) is initially assigned a state between zero and ten to represent the tumor population, and a tumor can be developed based on three rules to describe the process of growth, invasion, and death. We apply these three generic rules to simulate tumor development—from pixel to cancer—using cellular automata. We then integrate the tumor state into the original computed tomography (CT) images to generate synthetic tumors across different organs. This tumor synthesis approach allows for sampling tumors at multiple stages and analyzing tumor-organ interaction. Clinically, a reader study involving three expert radiologists reveals that the synthetic tumors and their developing trajectories are convincingly realistic. Technically, we analyze and simulate tumor development at various stages using 9,262 raw, unlabeled CT images sourced from 68 hospitals worldwide. The performance in segmenting tumors in the liver, pancreas, and kidneys exceeds prevailing literature benchmarks, underlining the immense potential of tumor synthesis, especially for earlier cancer detection.
From Pixel to Cancer: Cellular Automata in Computed Tomography
[ "Lai, Yuxiang", "Chen, Xiaoxi", "Wang, Angtian", "Yuille, Alan", "Zhou, Zongwei" ]
Conference
2403.06459
[ "https://github.com/MrGiovanni/Pixel2Cancer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
624
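The Pixel to Cancer record above grows tumors with cellular automata: cell states from zero to ten and three rules for growth, invasion, and death. The toy 2D automaton step below follows that outline; the probabilities, thresholds, and 4-neighbourhood are assumptions, and the actual Pixel2Cancer rules additionally interact with organ maps and CT intensities.

```python
import numpy as np

def tumor_step(state, p_grow=0.4, p_invade=0.2, p_death=0.01, rng=None):
    """One update of a toy tumor cellular automaton on a 2D grid of states 0..10."""
    rng = rng or np.random.default_rng()
    new = state.copy()
    # Growth: occupied cells increase their population, capped at 10.
    grow = (state > 0) & (rng.random(state.shape) < p_grow)
    new[grow] = np.minimum(new[grow] + 1, 10)
    # Invasion: cells at full population may seed a random 4-neighbour.
    ys, xs = np.where((state == 10) & (rng.random(state.shape) < p_invade))
    for y, x in zip(ys, xs):
        dy, dx = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        ny, nx = y + dy, x + dx
        if 0 <= ny < state.shape[0] and 0 <= nx < state.shape[1] and new[ny, nx] == 0:
            new[ny, nx] = 1
    # Death: a small fraction of occupied cells lose population.
    die = (state > 0) & (rng.random(state.shape) < p_death)
    new[die] = np.maximum(new[die] - 1, 0)
    return new

grid = np.zeros((64, 64), dtype=int)
grid[32, 32] = 1                                  # seed a single tumor cell
for _ in range(50):
    grid = tumor_step(grid)
```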
null
https://papers.miccai.org/miccai-2024/paper/1171_paper.pdf
@InProceedings{ Ngu_TrainingFree_MICCAI2024, author = { Nguyen, Van Phi and Luong Ha, Tri Nhan and Pham, Huy Hieu and Tran, Quoc Long }, title = { { Training-Free Condition Video Diffusion Models for single frame Spatial-Semantic Echocardiogram Synthesis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Condition video diffusion models (CDM) have shown promising results for video synthesis, potentially enabling the generation of realistic echocardiograms to address the problem of data scarcity. However, current CDMs require a paired segmentation map and echocardiogram dataset. We present a new method called Free-Echo for generating realistic echocardiograms from a single end-diastolic segmentation map without additional training data. Our method is based on the 3D-Unet with Temporal Attention Layers model and is conditioned on the segmentation map using a training-free conditioning method based on SDEdit. We evaluate our model on two public echocardiogram datasets, CAMUS and EchoNet-Dynamic. We show that our model can generate plausible echocardiograms that are spatially aligned with the input segmentation map, achieving performance comparable to training-based CDMs. Our work opens up new possibilities for generating echocardiograms from a single segmentation map, which can be used for data augmentation, domain adaptation, and other applications in medical imaging. Our code is available at \url{https://github.com/gungui98/echo-free}
Training-Free Condition Video Diffusion Models for single frame Spatial-Semantic Echocardiogram Synthesis
[ "Nguyen, Van Phi", "Luong Ha, Tri Nhan", "Pham, Huy Hieu", "Tran, Quoc Long" ]
Conference
2408.03035
[ "https://github.com/gungui98/echo-free" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
625
null
https://papers.miccai.org/miccai-2024/paper/1396_paper.pdf
@InProceedings{ Töl_FUNAvg_MICCAI2024, author = { Tölle, Malte and Navarro, Fernando and Eble, Sebastian and Wolf, Ivo and Menze, Bjoern and Engelhardt, Sandy }, title = { { FUNAvg: Federated Uncertainty Weighted Averaging for Datasets with Diverse Labels } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Federated learning is a popular paradigm for training a joint model in a distributed, privacy-preserving environment. However, partial annotations pose an obstacle, meaning that the categories of labels are heterogeneous across clients. We propose to learn a joint backbone in a federated manner, while each site receives its own multi-label segmentation head. Using Bayesian techniques, we observe that the different segmentation heads, although only trained on the individual client’s labels, also learn information about the other labels not present at the respective site. This information is encoded in their predictive uncertainty. To obtain a final prediction, we leverage this uncertainty and perform a weighted averaging of the ensemble of distributed segmentation heads, which allows us to segment “locally unknown” structures. With our method, which we refer to as FUNAvg, we are on average even on par with models trained and tested on the same dataset. The code is publicly available at https://github.com/Cardio-AI/FUNAvg.
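As a rough illustration of uncertainty-weighted averaging over distributed segmentation heads, the sketch below weights each head's per-pixel prediction by the inverse of its predictive entropy. The inverse-entropy weighting and the function name `funavg_combine` are assumptions for illustration; the paper derives its weights from Bayesian predictive uncertainty.

```python
# Minimal sketch of uncertainty-weighted averaging across an ensemble of segmentation heads.
import torch

def funavg_combine(probs, eps=1e-8):
    # probs: (num_heads, num_classes, H, W), each head already softmax-ed
    entropy = -(probs * (probs + eps).log()).sum(dim=1)       # (heads, H, W)
    weights = 1.0 / (entropy + eps)                           # low uncertainty -> high weight
    weights = weights / weights.sum(dim=0, keepdim=True)      # normalise over heads
    fused = (weights.unsqueeze(1) * probs).sum(dim=0)         # (num_classes, H, W)
    return fused.argmax(dim=0)                                # per-pixel label map

heads = torch.softmax(torch.randn(3, 5, 64, 64), dim=1)       # 3 sites, 5 labels, 64x64 image
print(funavg_combine(heads).shape)                            # torch.Size([64, 64])
```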
FUNAvg: Federated Uncertainty Weighted Averaging for Datasets with Diverse Labels
[ "Tölle, Malte", "Navarro, Fernando", "Eble, Sebastian", "Wolf, Ivo", "Menze, Bjoern", "Engelhardt, Sandy" ]
Conference
2407.07488
[ "https://github.com/Cardio-AI/FUNAvg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
626
null
https://papers.miccai.org/miccai-2024/paper/1334_paper.pdf
@InProceedings{ Lu_DiffVPS_MICCAI2024, author = { Lu, Yingling and Yang, Yijun and Xing, Zhaohu and Wang, Qiong and Zhu, Lei }, title = { { Diff-VPS: Video Polyp Segmentation via a Multi-task Diffusion Network with Adversarial Temporal Reasoning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Diffusion Probabilistic Models have recently attracted significant attention in the community of computer vision due to their outstanding performance. However, while a substantial amount of diffusion-based research has focused on generative tasks, no work introduces diffusion models to advance the results of polyp segmentation in videos, which is frequently challenged by polyps’ high camouflage and redundant temporal cues. In this paper, we present a novel diffusion-based network for video polyp segmentation task, dubbed as Diff-VPS. We incorporate multi-task supervision into diffusion models to promote the discrimination of diffusion models on pixel-by-pixel segmentation. This integrates the contextual high-level information achieved by the joint classification and detection tasks. To explore the temporal dependency, Temporal Reasoning Module (TRM) is devised via reasoning and reconstructing the target frame from the previous frames. We further equip TRM with a generative adversarial self-supervised strategy to produce more realistic frames and thus capture better dynamic cues. Extensive experiments are conducted on SUN-SEG, and the results indicate that our proposed Diff-VPS significantly achieves state-of-the-art performance. Code is available at https://github.com/lydia-yllu/Diff-VPS.
Diff-VPS: Video Polyp Segmentation via a Multi-task Diffusion Network with Adversarial Temporal Reasoning
[ "Lu, Yingling", "Yang, Yijun", "Xing, Zhaohu", "Wang, Qiong", "Zhu, Lei" ]
Conference
2409.07238
[ "https://github.com/lydia-yllu/Diff-VPS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
627
null
https://papers.miccai.org/miccai-2024/paper/3750_paper.pdf
@InProceedings{ Yu_CPCLIP_MICCAI2024, author = { Yu, Xiaowei and Wu, Zihao and Zhang, Lu and Zhang, Jing and Lyu, Yanjun and Zhu, Dajiang }, title = { { CP-CLIP: Core-Periphery Feature Alignment CLIP for Zero-Shot Medical Image Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Multi-modality learning, exemplified by the language and image pair pre-trained CLIP model, has demonstrated remarkable performance in enhancing zero-shot capabilities and has gained significant attention in the field. However, simply applying language-image pre-trained CLIP to medical image analysis encounters substantial domain shifts, resulting in significant performance degradation due to inherent disparities between natural (non-medical) and medical image characteristics. To address this challenge and uphold or even enhance CLIP’s zero-shot capability in medical image analysis, we develop a novel framework, Core-Periphery feature alignment for CLIP (CP-CLIP), tailored for handling medical images and corresponding clinical reports. Leveraging the foundational core-periphery organization that has been widely observed in brain networks, we augment CLIP by integrating a novel core-periphery-guided neural network. This auxiliary CP network not only aligns text and image features into a unified latent space more efficiently but also ensures the alignment is driven by domain-specific core information, e.g., in medical images and clinical reports. In this way, our approach effectively mitigates the domain-shift-induced performance degradation and further enhances CLIP’s zero-shot performance in medical image analysis. More importantly, our designed CP-CLIP exhibits excellent explanatory capability, enabling the automatic identification of critical regions in clinical analysis. Extensive experimentation and evaluation across five public datasets underscore the superiority of our CP-CLIP in zero-shot medical image prediction and critical area detection, showing its promising utility in multimodal feature alignment in current medical applications.
CP-CLIP: Core-Periphery Feature Alignment CLIP for Zero-Shot Medical Image Analysis
[ "Yu, Xiaowei", "Wu, Zihao", "Zhang, Lu", "Zhang, Jing", "Lyu, Yanjun", "Zhu, Dajiang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
628
null
https://papers.miccai.org/miccai-2024/paper/0234_paper.pdf
@InProceedings{ Elb_FDSOS_MICCAI2024, author = { Elbatel, Marawan and Liu, Keyuan and Yang, Yanqi and Li, Xiaomeng }, title = { { FD-SOS: Vision-Language Open-Set Detectors for Bone Fenestration and Dehiscence Detection from Intraoral Images } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Accurate detection of bone fenestration and dehiscence (FD) is of utmost importance for effective treatment planning in dentistry. While cone-beam computed tomography (CBCT) is the gold standard for evaluating FD, it comes with limitations such as radiation exposure, limited accessibility, and higher cost compared to intraoral images. In intraoral images, dentists face challenges in the differential diagnosis of FD. This paper presents a novel and clinically significant application of FD detection solely from intraoral images, eliminating the need for CBCT. To achieve this, we propose FD-SOS, a novel open-set object detector for FD detection from intraoral images. FD-SOS has two novel components: conditional contrastive denoising (CCDN) and teeth-specific matching assignment (TMA). These modules enable FD-SOS to effectively leverage external dental semantics. Experimental results showed that our method outperformed existing detection methods and surpassed dental professionals by 35% recall under the same level of precision. Code is available at https://github.com/xmed-lab/FD-SOS.
FD-SOS: Vision-Language Open-Set Detectors for Bone Fenestration and Dehiscence Detection from Intraoral Images
[ "Elbatel, Marawan", "Liu, Keyuan", "Yang, Yanqi", "Li, Xiaomeng" ]
Conference
2407.09088
[ "https://github.com/xmed-lab/FD-SOS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
629
null
https://papers.miccai.org/miccai-2024/paper/3737_paper.pdf
@InProceedings{ Zho_Gradient_MICCAI2024, author = { Zhou, Li and Wang, Dayang and Xu, Yongshun and Han, Shuo and Morovati, Bahareh and Fan, Shuyi and Yu, Hengyong }, title = { { Gradient Guided Co-Retention Feature Pyramid Network for LDCT Image Denoising } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Low-dose computed tomography (LDCT) reduces the risks of radiation exposure but introduces noise and artifacts into CT images. The Feature Pyramid Network (FPN) is a conventional method for extracting multi-scale feature maps from input images. While upper layers in FPN enhance semantic value, details become generalized with reduced spatial resolution at each layer. In this work, we propose a Gradient Guided Co-Retention Feature Pyramid Network (G2CR-FPN) to address the connection between spatial resolution and semantic value beyond feature maps extracted from LDCT images. The network is structured with three essential paths: the bottom-up path utilizes the FPN structure to generate the hierarchical feature maps, representing multi-scale spatial resolutions and semantic values. Meanwhile, the lateral path serves as a skip connection between feature maps with the same spatial resolution, while also treating feature maps as directional gradients. This path incorporates a gradient approximation, deriving edge-like enhanced feature maps in horizontal and vertical directions. The top-down path incorporates a proposed co-retention block that learns the high-level semantic value embedded in the preceding map of the path. This learning process is guided by the directional gradient approximation of the high-resolution feature map from the bottom-up path. Experimental results on clinical CT images demonstrated the promising performance of the model.
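The directional gradient approximation can be pictured as fixed Sobel-like depthwise convolutions producing horizontal and vertical edge-enhanced feature maps; the kernels and wiring below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of edge-like directional gradients over a feature map.
import torch
import torch.nn.functional as F

def directional_gradients(feat):
    # feat: (B, C, H, W); apply the same 3x3 kernels depthwise to every channel
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t().contiguous()
    c = feat.shape[1]
    wx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    wy = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(feat, wx, padding=1, groups=c)   # horizontal gradient approximation
    gy = F.conv2d(feat, wy, padding=1, groups=c)   # vertical gradient approximation
    return gx, gy

gx, gy = directional_gradients(torch.randn(1, 16, 32, 32))
print(gx.shape, gy.shape)
```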
Gradient Guided Co-Retention Feature Pyramid Network for LDCT Image Denoising
[ "Zhou, Li", "Wang, Dayang", "Xu, Yongshun", "Han, Shuo", "Morovati, Bahareh", "Fan, Shuyi", "Yu, Hengyong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
630
null
https://papers.miccai.org/miccai-2024/paper/2553_paper.pdf
@InProceedings{ Jia_AnatomyAware_MICCAI2024, author = { Jiang, Hongchao and Miao, Chunyan }, title = { { Anatomy-Aware Gating Network for Explainable Alzheimer’s Disease Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Structural Magnetic Resonance Imaging (sMRI) is a non-invasive technique to get a snapshot of the brain for diagnosing Alzheimer’s disease. Existing works have used 3D brain images to train deep learning models for automated diagnosis, but these models are prone to exploit shortcut patterns that might not have clinical relevance. We propose an Anatomy-Aware Gating Network (AAGN) which explicitly extracts features from various anatomical regions using an anatomy-aware squeeze-and-excite operation. By conditioning on the anatomy-aware features, AAGN dynamically selects the regions where atrophy is most discriminative. Once trained, we can interpret the regions selected by AAGN as explicit explanations for a given prediction. Our experiments show that AAGN selects regions well-aligned with medical literature and outperforms various convolutional and attention architectures. The code is available at \url{https://github.com/hongcha0/aagn}.
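A toy version of an anatomy-aware squeeze-and-excite style gate is sketched below: features are pooled inside each anatomical region mask, a small network scores each region, and region features are re-weighted accordingly. The module name `AnatomyGate` and the scoring network are assumptions for illustration, not the paper's exact module.

```python
# Sketch of anatomy-aware squeeze-and-excite gating over region masks.
import torch
import torch.nn as nn

class AnatomyGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(channels, channels // 2), nn.ReLU(),
                                   nn.Linear(channels // 2, 1))

    def forward(self, feat, masks):
        # feat: (B, C, D, H, W); masks: (B, R, D, H, W) binary anatomical regions
        m = masks.unsqueeze(2)                                   # (B, R, 1, D, H, W)
        f = feat.unsqueeze(1)                                    # (B, 1, C, D, H, W)
        region_feat = (f * m).sum(dim=(3, 4, 5)) / (m.sum(dim=(3, 4, 5)) + 1e-6)  # (B, R, C)
        gates = torch.sigmoid(self.score(region_feat))           # (B, R, 1) region importance
        return region_feat * gates, gates                        # gated region features + weights

feat = torch.randn(2, 32, 8, 16, 16)
masks = (torch.rand(2, 5, 8, 16, 16) > 0.7).float()
gated, gates = AnatomyGate(32)(feat, masks)
print(gated.shape, gates.squeeze(-1).shape)   # region weights can serve as explanations
```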
Anatomy-Aware Gating Network for Explainable Alzheimer’s Disease Diagnosis
[ "Jiang, Hongchao", "Miao, Chunyan" ]
Conference
[ "https://github.com/hongcha0/aagn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
631
null
https://papers.miccai.org/miccai-2024/paper/1315_paper.pdf
@InProceedings{ Ber_Diffusion_MICCAI2024, author = { Bercea, Cosmin I. and Wiestler, Benedikt and Rueckert, Daniel and Schnabel, Julia A. }, title = { { Diffusion Models with Implicit Guidance for Medical Anomaly Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Diffusion models have advanced unsupervised anomaly detection by improving the transformation of pathological images into pseudo-healthy equivalents. Nonetheless, standard approaches may compromise critical information during pathology removal, leading to restorations that do not align with unaffected regions in the original scans. Such discrepancies can inadvertently increase false positive rates and reduce specificity, complicating radiological evaluations. This paper introduces Temporal Harmonization for Optimal Restoration (THOR), which refines the reverse diffusion process by integrating implicit guidance through intermediate masks. THOR aims to preserve the integrity of healthy tissue details in reconstructed images, ensuring fidelity to the original scan in areas unaffected by pathology. Comparative evaluations reveal that THOR surpasses existing diffusion-based methods in retaining detail and precision in image restoration and detecting and segmenting anomalies in brain MRIs and wrist X-rays. Code: https://github.com/compai-lab/2024-miccai-bercea-thor.git.
Diffusion Models with Implicit Guidance for Medical Anomaly Detection
[ "Bercea, Cosmin I.", "Wiestler, Benedikt", "Rueckert, Daniel", "Schnabel, Julia A." ]
Conference
2403.08464
[ "https://github.com/compai-lab/2024-miccai-bercea-thor.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
632
null
https://papers.miccai.org/miccai-2024/paper/1242_paper.pdf
@InProceedings{ Yan_TSBP_MICCAI2024, author = { Yang, Tingting and Xiao, Liang and Zhang, Yizhe }, title = { { TSBP: Improving Object Detection in Histology Images via Test-time Self-guided Bounding-box Propagation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
In this paper, we propose a Test-time Self-guided Bounding-box Propagation (TSBP) method, leveraging Earth Mover’s Distance (EMD) to enhance object detection in histology images. TSBP utilizes bounding boxes with high confidence to influence those with low confidence, leveraging visual similarities between them. This propagation mechanism enables bounding boxes to be selected in a controllable, explainable, and robust manner, which surpasses the effectiveness of using simple thresholds and uncertainty calibration methods. Importantly, TSBP does not necessitate additional labeled samples for model training or parameter estimation, unlike calibration methods. We conduct experiments on gland detection and cell detection tasks in histology images. The results show that our proposed TSBP significantly improves detection outcomes when working in conjunction with state-of-the-art deep learning-based detection networks. Compared to other methods such as uncertainty calibration, TSBP yields more robust and accurate object detection predictions while using no additional labeled samples.
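A much-simplified stand-in for the propagation idea is sketched below: high-confidence boxes lend support to visually similar low-confidence boxes. Plain cosine similarity with a threshold replaces the Earth Mover's Distance formulation purely for illustration; the thresholds and the function name are assumptions.

```python
# Simplified test-time box propagation: promote low-confidence boxes that are
# visually similar (same predicted class) to high-confidence ones.
import torch
import torch.nn.functional as F

def propagate_boxes(feats, scores, labels, hi=0.7, lo=0.3, sim_thresh=0.8):
    # feats: (N, D) per-box appearance features; scores/labels: (N,)
    keep = scores >= hi
    candidates = (scores < hi) & (scores >= lo)
    if keep.sum() == 0 or candidates.sum() == 0:
        return keep
    sim = F.normalize(feats[candidates], dim=1) @ F.normalize(feats[keep], dim=1).t()
    best_sim, best_idx = sim.max(dim=1)
    same_class = labels[candidates] == labels[keep][best_idx]
    promoted = candidates.clone()
    promoted[candidates.nonzero(as_tuple=True)[0]] = (best_sim >= sim_thresh) & same_class
    return keep | promoted   # final selected boxes

sel = propagate_boxes(torch.randn(10, 64), torch.rand(10), torch.randint(0, 2, (10,)))
print(sel)
```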
TSBP: Improving Object Detection in Histology Images via Test-time Self-guided Bounding-box Propagation
[ "Yang, Tingting", "Xiao, Liang", "Zhang, Yizhe" ]
Conference
2409.16678
[ "https://github.com/jwhgdeu/TSBP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
633
null
https://papers.miccai.org/miccai-2024/paper/4180_paper.pdf
@InProceedings{ Reh_ALargescale_MICCAI2024, author = { Rehman, Abdul and Meraj, Talha and Minhas, Aiman Mahmood and Imran, Ayisha and Ali, Mohsen and Sultani, Waqas }, title = { { A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Earlier diagnosis of Leukemia can save thousands of lives annually. The prognosis of leukemia is challenging without the morphological information of White Blood Cells (WBC) and relies on the accessibility of expensive microscopes and the availability of hematologists to analyze Peripheral Blood Samples (PBS). Deep Learning based methods can be employed to assist hematologists. However, these algorithms require a large amount of labeled data, which is not readily available. To overcome this limitation, we have acquired a realistic, generalized, and {large} dataset. To collect this comprehensive dataset for real-world applications, two microscopes from two different cost spectrum’s (high-cost: HCM and low-cost: LCM) are used for dataset capturing at three magnifications (100x, 40x,10x) through different sensors (high-end camera for HCM, middle-level camera for LCM and mobile-phone’s camera for both). The high-sensor camera is 47 times more expensive than the middle-level camera and HCM is 17 times more expensive than LCM. In this collection, using HCM at high resolution (100x), experienced hematologists annotated 10.3k WBC of 14 types including artifacts, having 55k morphological labels (Cell Size, Nuclear Chromatin, Nuclear Shape, etc) from 2.4k images of several PBS leukemia patients. Later on, these annotations are transferred to other two magnifications of HCM, and three magnifications of LCM, and on each camera captured images. Along with this proposed LeukemiaAttri dataset, we provide baselines over multiple object detectors and Unsupervised Domain Adaptation (UDA) strategies, along with morphological information-based attribute prediction. The dataset is available at: https://tinyurl.com/586vaw3j
A Large-scale Multi Domain Leukemia Dataset for the White Blood Cells Detection with Morphological Attributes for Explainability
[ "Rehman, Abdul", "Meraj, Talha", "Minhas, Aiman Mahmood", "Imran, Ayisha", "Ali, Mohsen", "Sultani, Waqas" ]
Conference
2405.10803
[ "https://github.com/intelligentMachines-ITU/Blood-Cancer-Dataset" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
634
null
https://papers.miccai.org/miccai-2024/paper/2187_paper.pdf
@InProceedings{ Li_FewShot_MICCAI2024, author = { Li, Yi and Zhang, Qixiang and Xiang, Tianqi and Lin, Yiqun and Zhang, Qingling and Li, Xiaomeng }, title = { { Few-Shot Lymph Node Metastasis Classification Meets High Performance on Whole Slide Images via the Informative Non-Parametric Classifier } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Lymph node metastasis (LNM) classification is crucial for breast cancer staging. However, the process of identifying tiny metastatic cancer cells within a gigapixel whole slide image (WSI) is tedious, time-consuming, and expensive. To address this challenge, computational pathology methods have emerged, particularly multiple instance learning (MIL) based on deep learning. However, these methods require massive amounts of data, while existing few-shot methods severely compromise accuracy for data saving. To simultaneously achieve few-shot and high-performance LNM classification, we propose the informative non-parametric classifier (INC). It maintains informative local patch features divided by mask label, then innovatively utilizes non-parametric similarity to classify LNM, avoiding overfitting on a few WSI examples. Experimental results demonstrate that the proposed INC outperforms existing SoTA methods across various settings, with less data and labeling cost. For the same setting, we achieve remarkable AUC improvements of over 29.07% on CAMELYON16. Additionally, our approach demonstrates excellent generalizability across multiple medical centers and corrupted WSIs, even surpassing many-shot SoTA methods by over 7.55% on CAMELYON16-C.
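The non-parametric, similarity-based classification idea can be illustrated as follows: keep a small bank of labelled patch features and score a new slide by cosine similarity of its patches to each class bank. The top-k aggregation rule and names are illustrative assumptions, not the paper's exact classifier.

```python
# Toy non-parametric slide classifier based on similarity to stored patch features.
import torch
import torch.nn.functional as F

def classify_slide(patch_feats, bank_pos, bank_neg, topk=20):
    # patch_feats: (N, D) features of one slide; banks: (M, D) labelled patch features per class
    p = F.normalize(patch_feats, dim=1)
    sim_pos = (p @ F.normalize(bank_pos, dim=1).t()).max(dim=1).values   # best match per patch
    sim_neg = (p @ F.normalize(bank_neg, dim=1).t()).max(dim=1).values
    score = sim_pos.topk(min(topk, len(sim_pos))).values.mean() - \
            sim_neg.topk(min(topk, len(sim_neg))).values.mean()
    return score   # > 0 suggests metastasis-positive under this toy rule

score = classify_slide(torch.randn(500, 128), torch.randn(64, 128), torch.randn(64, 128))
print(float(score))
```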
Few-Shot Lymph Node Metastasis Classification Meets High Performance on Whole Slide Images via the Informative Non-Parametric Classifier
[ "Li, Yi", "Zhang, Qixiang", "Xiang, Tianqi", "Lin, Yiqun", "Zhang, Qingling", "Li, Xiaomeng" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
635
null
https://papers.miccai.org/miccai-2024/paper/0929_paper.pdf
@InProceedings{ Wei_Prompting_MICCAI2024, author = { Wei, Zhikai and Dong, Wenhui and Zhou, Peilin and Gu, Yuliang and Zhao, Zhou and Xu, Yongchao }, title = { { Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Deep learning-based methods often suffer from performance degradation caused by domain shift. In recent years, many sophisticated network structures have been designed to tackle this problem. However, the advent of large models trained on massive data, with their exceptional segmentation capability, introduces a new perspective for solving medical segmentation problems. In this paper, we propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed DAPSAM) to address single-source domain generalization (SDG) in segmenting medical images. DAPSAM not only utilizes a more generalization-friendly adapter to fine-tune the large model, but also introduces a self-learning prototype-based prompt generator to enhance the model’s generalization ability. Specifically, we first merge the important low-level features into intermediate features before feeding them to each adapter, followed by an attention filter to remove redundant information. This yields more robust image embeddings. Then, we propose using a learnable memory bank to construct domain-adaptive prototypes for prompt generation, helping to achieve generalizable medical image segmentation. Extensive experimental results demonstrate that our DAPSAM achieves state-of-the-art performance on two SDG medical image segmentation tasks with different modalities. The code is available at https://github.com/wkklavis/DAPSAM.
Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation
[ "Wei, Zhikai", "Dong, Wenhui", "Zhou, Peilin", "Gu, Yuliang", "Zhao, Zhou", "Xu, Yongchao" ]
Conference
2409.12522
[ "https://github.com/wkklavis/DAPSAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
636
null
https://papers.miccai.org/miccai-2024/paper/0908_paper.pdf
@InProceedings{ Kan_BGFYOLO_MICCAI2024, author = { Kang, Ming and Ting, Chee-Ming and Ting, Fung Fung and Phan, Raphaël C.-W. }, title = { { BGF-YOLO: Enhanced YOLOv8 with Multiscale Attentional Feature Fusion for Brain Tumor Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
You Only Look Once (YOLO)-based object detectors have shown remarkable accuracy for automated brain tumor detection. In this paper, we develop a novel BGF-YOLO architecture by incorporating Bi-level Routing Attention (BRA), Generalized Feature Pyramid Networks (GFPN), and a fourth detection head into YOLOv8. BGF-YOLO contains an attention mechanism to focus more on important features, and feature pyramid networks to enrich feature representation by merging high-level semantic features with spatial details. Furthermore, we investigate the effect of different attention mechanisms, feature fusions, and detection head architectures on brain tumor detection accuracy. Experimental results show that BGF-YOLO gives a 4.7% absolute increase in mAP50 compared to YOLOv8x, and achieves state-of-the-art performance on the brain tumor detection dataset Br35H. The code is available at https://github.com/mkang315/BGF-YOLO.
BGF-YOLO: Enhanced YOLOv8 with Multiscale Attentional Feature Fusion for Brain Tumor Detection
[ "Kang, Ming", "Ting, Chee-Ming", "Ting, Fung Fung", "Phan, Raphaël C.-W." ]
Conference
2309.12585
[ "https://github.com/mkang315/BGF-YOLO" ]
https://huggingface.co/papers/2309.12585
0
0
0
4
[]
[]
[]
[]
[]
[]
1
Poster
637
null
https://papers.miccai.org/miccai-2024/paper/1184_paper.pdf
@InProceedings{ Yu_I2Net_MICCAI2024, author = { Yu, Jiahao and Duan, Fan and Chen, Li }, title = { { I2Net: Exploiting Misaligned Contexts Orthogonally with Implicit-Parameterized Implicit Functions for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Recent medical image segmentation methods have started to apply implicit neural representation (INR) to segmentation networks to learn continuous data representations. Though effective, they suffer from inferior performance. In this paper, we delve into the inferiority and discover that the underlying reason behind it is the indiscriminate treatment for context fusion that fails to properly exploit misaligned contexts. Therefore, we propose a novel Implicit-parameterized INR Network (I2Net), which dynamically generates the model parameters of INRs to adapt to different misaligned contexts. We further propose novel gate shaping and learner orthogonalization to induce I2Net to handle misaligned contexts in orthogonal ways. We conduct extensive experiments on two medical datasets, i.e. Glas and Synapse, and a generic dataset, i.e. Cityscapes, to show the superiority of our I2Net. Code: https://github.com/ChineseYjh/I2Net.
I2Net: Exploiting Misaligned Contexts Orthogonally with Implicit-Parameterized Implicit Functions for Medical Image Segmentation
[ "Yu, Jiahao", "Duan, Fan", "Chen, Li" ]
Conference
[ "https://github.com/ChineseYjh/I2Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
638
null
https://papers.miccai.org/miccai-2024/paper/0791_paper.pdf
@InProceedings{ Li_EndoSparse_MICCAI2024, author = { Li, Chenxin and Feng, Brandon Y. and Liu, Yifan and Liu, Hengyu and Wang, Cheng and Yu, Weihao and Yuan, Yixuan }, title = { { EndoSparse: Real-Time Sparse View Synthesis of Endoscopic Scenes using Gaussian Splatting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
3D reconstruction of biological tissues from a collection of endoscopic images is key to unlocking various important downstream surgical applications with 3D capabilities. Existing methods employ various advanced neural rendering techniques for photorealistic view synthesis, but they often struggle to recover accurate 3D representations when only sparse observations are available, which is often the case in real-world clinical scenarios. To tackle this sparsity challenge, we propose a framework leveraging the prior knowledge from multiple foundation models during the reconstruction process. Experimental results indicate that our proposed strategy significantly improves the geometric and appearance quality under challenging sparse-view conditions, including using only three views. In rigorous benchmarking experiments against the state-of-the-art methods, EndoSparse achieves superior results in terms of accurate geometry, realistic appearance, and rendering efficiency, confirming its robustness to the sparse-view limitation in endoscopic reconstruction. EndoSparse signifies a steady step towards the practical deployment of neural 3D reconstruction in real-world clinical scenarios. Project page: https://endo-sparse.github.io/.
EndoSparse: Real-Time Sparse View Synthesis of Endoscopic Scenes using Gaussian Splatting
[ "Li, Chenxin", "Feng, Brandon Y.", "Liu, Yifan", "Liu, Hengyu", "Wang, Cheng", "Yu, Weihao", "Yuan, Yixuan" ]
Conference
2407.01029
[ "https://endo-sparse.github.io/\nhttps://github.com/CUHK-AIM-Group/EndoSparse" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
639
null
https://papers.miccai.org/miccai-2024/paper/1220_paper.pdf
@InProceedings{ Yan_Surgformer_MICCAI2024, author = { Yang, Shu and Luo, Luyang and Wang, Qiong and Chen, Hao }, title = { { Surgformer: Surgical Transformer with Hierarchical Temporal Attention for Surgical Phase Recognition } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Existing state-of-the-art methods for surgical phase recognition either rely on the extraction of spatial-temporal features at a short-range temporal resolution or adopt the sequential extraction of the spatial and temporal features across the entire temporal resolution. However, these methods have limitations in modeling spatial-temporal dependency and addressing spatial-temporal redundancy: 1) These methods fail to effectively model spatial-temporal dependency, due to the lack of long-range information or joint spatial-temporal modeling. 2) These methods utilize dense spatial features across the entire temporal resolution, resulting in significant spatial-temporal redundancy. In this paper, we propose the Surgical Transformer (Surgformer) to address the issues of spatial-temporal modeling and redundancy in an end-to-end manner, which employs divided spatial-temporal attention and takes a limited set of sparse frames as input. Moreover, we propose a novel Hierarchical Temporal Attention (HTA) to capture both global and local information within varied temporal resolutions from a target frame-centric perspective. Distinct from conventional temporal attention that primarily emphasizes dense long-range similarity, HTA not only captures long-term information but also considers local latent consistency among informative frames. HTA then employs pyramid feature aggregation to effectively utilize temporal information across diverse temporal resolutions, thereby enhancing the overall temporal representation. Extensive experiments on two challenging benchmark datasets verify that our proposed Surgformer performs favorably against the state-of-the-art methods. The code is released at https://github.com/isyangshu/Surgformer.
Surgformer: Surgical Transformer with Hierarchical Temporal Attention for Surgical Phase Recognition
[ "Yang, Shu", "Luo, Luyang", "Wang, Qiong", "Chen, Hao" ]
Conference
2408.03867
[ "https://github.com/isyangshu/Surgformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
640
null
https://papers.miccai.org/miccai-2024/paper/0658_paper.pdf
@InProceedings{ Hag_Deep_MICCAI2024, author = { Hagag, Amr and Gomaa, Ahmed and Kornek, Dominik and Maier, Andreas and Fietkau, Rainer and Bert, Christoph and Huang, Yixing and Putz, Florian }, title = { { Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Survival prediction for cancer patients is critical for optimal treatment selection and patient management. Current patient survival prediction methods typically extract survival information from patients’ clinical record data or biological and imaging data. In practice, experienced clinicians can have a preliminary assessment of patients’ health status based on patients’ observable physical appearances, which are mainly facial features. However, such assessment is highly subjective. In this work, the efficacy of objectively capturing and using prognostic information contained in conventional portrait photographs using deep learning for survival prediction purposes is investigated for the first time. A pre-trained StyleGAN2 model is fine-tuned on a custom dataset of our cancer patients’ photos to empower its generator with generative ability suitable for patients’ photos. The StyleGAN2 is then used to embed the photographs into its highly expressive latent space. Utilizing state-of-the-art survival analysis models and StyleGAN’s latent space embeddings, this approach predicts the overall survival for single cancers as well as pan-cancer, achieving a C-index of 0.680 in a pan-cancer analysis, showcasing the prognostic value embedded in simple 2D facial images. In addition, thanks to StyleGAN’s interpretable latent space, our survival prediction model can be validated for relying on essential facial features, eliminating any biases from extraneous information like clothing or background. Moreover, our approach provides a novel health attribute obtained from StyleGAN’s extracted features, allowing the modification of face photographs to either a healthier or more severe illness appearance, which has significant prognostic value for patient care and societal perception, underscoring its potentially important clinical value.
Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding
[ "Hagag, Amr", "Gomaa, Ahmed", "Kornek, Dominik", "Maier, Andreas", "Fietkau, Rainer", "Bert, Christoph", "Huang, Yixing", "Putz, Florian" ]
Conference
2306.14596
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
641
null
https://papers.miccai.org/miccai-2024/paper/2724_paper.pdf
@InProceedings{ Kun_Training_MICCAI2024, author = { Kunanbayev, Kassymzhomart and Shen, Vyacheslav and Kim, Dae-Shik }, title = { { Training ViT with Limited Data for Alzheimer’s Disease Classification: an Empirical Study } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
In this paper, we conduct an extensive exploration of a Vision Transformer (ViT) in brain medical imaging in a low-data regime. The recent and ongoing success of Vision Transformers in computer vision has motivated its development in medical imaging, but trumping it with inductive bias in a brain imaging domain imposes a real challenge since collecting and accessing large amounts of brain medical data is a labor-intensive process. Motivated by the need to bridge this data gap, we embarked on an investigation into alternative training strategies ranging from self-supervised pre-training to knowledge distillation to determine the feasibility of producing a practical plain ViT model. To this end, we conducted an intensive set of experiments using a small amount of labeled 3D brain MRI data for the task of Alzheimer’s disease classification. As a result, our experiments yield an optimal training recipe, thus paving the way for Vision Transformer-based models for other low-data medical imaging applications. To bolster further development, we release our assortment of pre-trained models for a variety of MRI-related applications: https://github.com/qasymjomart/ViT_recipe_for_AD
Training ViT with Limited Data for Alzheimer’s Disease Classification: an Empirical Study
[ "Kunanbayev, Kassymzhomart", "Shen, Vyacheslav", "Kim, Dae-Shik" ]
Conference
[ "https://github.com/qasymjomart/ViT_recipe_for_AD" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
642
null
https://papers.miccai.org/miccai-2024/paper/0311_paper.pdf
@InProceedings{ Özs_ORacle_MICCAI2024, author = { Özsoy, Ege and Pellegrini, Chantal and Keicher, Matthias and Navab, Nassir }, title = { { ORacle: Large Vision-Language Models for Knowledge-Guided Holistic OR Domain Modeling } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Every day, countless surgeries are performed worldwide, each within the distinct settings of operating rooms (ORs) that vary not only in their setups but also in the personnel, tools, and equipment used. This inherent diversity poses a substantial challenge for achieving a holistic understanding of the OR, as it requires models to generalize beyond their initial training datasets. To reduce this gap, we introduce ORacle, an advanced vision-language model designed for holistic OR domain modeling, which incorporates multi-view and temporal capabilities and can leverage external knowledge during inference, enabling it to adapt to previously unseen surgical scenarios. This capability is further enhanced by our novel data augmentation framework, which significantly diversifies the training dataset, ensuring ORacle’s proficiency in applying the provided knowledge effectively. In rigorous testing, in scene graph generation, and downstream tasks on the 4D-OR dataset, ORacle not only demonstrates state-of-the-art performance but does so requiring less data than existing models. Furthermore, its adaptability is displayed through its ability to interpret unseen views, actions, and appearances of tools and equipment. This demonstrates ORacle’s potential to significantly enhance the scalability and affordability of OR domain modeling and opens a pathway for future advancements in surgical data science. We will release our code and data upon acceptance.
ORacle: Large Vision-Language Models for Knowledge-Guided Holistic OR Domain Modeling
[ "Özsoy, Ege", "Pellegrini, Chantal", "Keicher, Matthias", "Navab, Nassir" ]
Conference
2404.07031
[ "https://github.com/egeozsoy/ORacle" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
643
null
https://papers.miccai.org/miccai-2024/paper/1998_paper.pdf
@InProceedings{ Ala_Jumpstarting_MICCAI2024, author = { Alapatt, Deepak and Murali, Aditya and Srivastav, Vinkle and AI4SafeChole Consortium and Mascagni, Pietro and Padoy, Nicolas }, title = { { Jumpstarting Surgical Computer Vision } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Consensus amongst researchers and industry points to a lack of large, representative annotated datasets as the biggest obstacle to progress in the field of surgical data science. Advances in Self-Supervised Learning (SSL) represent a solution, reducing the dependence on large labeled datasets by providing task-agnostic initializations. However, the robustness of current self-supervised learning methods to domain shifts remains unclear, limiting our understanding of its utility for leveraging diverse sources of surgical data. Shifting the focus from methods to data, we demonstrate that the downstream value of SSL-based initializations is intricately intertwined with the composition of pre-training datasets. These results underscore an important gap that needs to be filled as we scale self-supervised approaches toward building general-purpose “foundation models” that enable diverse use-cases within the surgical domain. Through several stages of controlled experimentation, we develop recommendations for pretraining dataset composition evidenced through over 300 experiments spanning 20 pre-training datasets, 9 surgical procedures, 7 centers (hospitals), 3 labeled-data settings, 3 downstream tasks, and multiple runs. Using the approaches here described, we outperform state-of-the-art pre-trainings on two public benchmarks for phase recognition: up to 2.2% on Cholec80 and 5.1% on AutoLaparo.
Jumpstarting Surgical Computer Vision
[ "Alapatt, Deepak", "Murali, Aditya", "Srivastav, Vinkle", "AI4SafeChole Consortium", "Mascagni, Pietro", "Padoy, Nicolas" ]
Conference
2312.05968
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
644
null
https://papers.miccai.org/miccai-2024/paper/3398_paper.pdf
@InProceedings{ Szc_Let_MICCAI2024, author = { Szczepański, Tomasz and Grzeszczyk, Michal K. and Płotka, Szymon and Adamowicz, Arleta and Fudalej, Piotr and Korzeniowski, Przemysław and Trzciński, Tomasz and Sitek, Arkadiusz }, title = { { Let Me DeCode You: Decoder Conditioning with Tabular Data } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Training deep neural networks for 3D segmentation tasks can be challenging, often requiring efficient and effective strategies to improve model performance. In this study, we introduce a novel approach, DeCode, that utilizes label-derived features for model conditioning to support the decoder in the reconstruction process dynamically, aiming to enhance the efficiency of the training process. DeCode focuses on improving 3D segmentation performance through the incorporation of conditioning embedding with learned numerical representation of 3D-label shape features. Specifically, we develop an approach where conditioning is applied during the training phase to guide the network toward robust segmentation. When labels are not available during inference, our model infers the necessary conditioning embedding directly from the input data, thanks to a feed-forward network learned during the training phase. This approach is tested using synthetic data and cone-beam computed tomography (CBCT) images of teeth. For CBCT, three datasets are used: one publicly available and two in-house. Our results show that DeCode significantly outperforms traditional, unconditioned models in terms of generalization to unseen data, achieving higher accuracy at a reduced computational cost. This work represents the first of its kind to explore conditioning strategies in 3D data segmentation, offering a novel and more efficient method for leveraging annotated data. Our code and pre-trained models are publicly available at https://github.com/SanoScience/DeCode.
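Conditioning a decoder block on a tabular (e.g., label-shape-derived) embedding is commonly done with a FiLM-style scale/shift, sketched below; the exact conditioning operator and shape-feature encoder used by DeCode may differ, so treat this as an assumption-laden illustration.

```python
# FiLM-style conditioning of a 3D decoder block on a tabular embedding.
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):
        # x: (B, C, D, H, W); cond: (B, cond_dim) tabular/shape embedding
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        scale = scale[..., None, None, None]
        shift = shift[..., None, None, None]
        return torch.relu(self.conv(x) * (1 + scale) + shift)   # modulate, then activate

block = ConditionedBlock(channels=16, cond_dim=8)
out = block(torch.randn(2, 16, 8, 16, 16), torch.randn(2, 8))
print(out.shape)
```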
Let Me DeCode You: Decoder Conditioning with Tabular Data
[ "Szczepański, Tomasz", "Grzeszczyk, Michal K.", "Płotka, Szymon", "Adamowicz, Arleta", "Fudalej, Piotr", "Korzeniowski, Przemysław", "Trzciński, Tomasz", "Sitek, Arkadiusz" ]
Conference
2407.09437
[ "https://github.com/SanoScience/DeCode" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
645
null
https://papers.miccai.org/miccai-2024/paper/1520_paper.pdf
@InProceedings{ Wan_Structurepreserving_MICCAI2024, author = { Wang, Shuxian and Paruchuri, Akshay and Zhang, Zhaoxi and McGill, Sarah and Sengupta, Roni }, title = { { Structure-preserving Image Translation for Depth Estimation in Colonoscopy } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Monocular depth estimation in colonoscopy video aims to overcome the unusual lighting properties of the colonoscopic environment. One of the major challenges in this area is the domain gap between annotated but unrealistic synthetic data and unannotated but realistic clinical data. Previous attempts to bridge this domain gap directly target the depth estimation task itself. We propose a general pipeline of structure-preserving synthetic-to-real (sim2real) image translation (producing a modified version of the input image) to retain depth geometry through the translation process. This allows us to generate large quantities of realistic-looking synthetic images for supervised depth estimation with improved generalization to the clinical domain. We also propose a dataset of hand-picked sequences from clinical colonoscopies to improve the image translation process. We demonstrate the simultaneous realism of the translated images and preservation of depth maps via the performance of downstream depth estimation on various datasets.
Structure-preserving Image Translation for Depth Estimation in Colonoscopy
[ "Wang, Shuxian", "Paruchuri, Akshay", "Zhang, Zhaoxi", "McGill, Sarah", "Sengupta, Roni" ]
Conference
[ "github.com/sherry97/struct-preserving-cyclegan" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
646
null
https://papers.miccai.org/miccai-2024/paper/0950_paper.pdf
@InProceedings{ Xu_Dynamic_MICCAI2024, author = { Xu, Fangqiang and Tu, Wenxuan and Feng, Fan and Gunawardhana, Malitha and Yang, Jiayuan and Gu, Yun and Zhao, Jichao }, title = { { Dynamic Position Transformation and Boundary Refinement Network for Left Atrial Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Left atrial (LA) segmentation is a crucial technique for irregular heartbeat (i.e., atrial fibrillation) diagnosis. Most current methods for LA segmentation strictly assume that the input data is acquired using object-oriented center cropping, while this assumption may not always hold in practice due to the high cost of manual object annotation. Random cropping is a straightforward data pre-processing approach. However, it 1) introduces significant irregularities and incompleteness in the input data and 2) disrupts the coherence and continuity of object boundary regions. To tackle these issues, we propose a novel Dynamic Position transformation and Boundary refinement Network (DPBNet). The core idea is to dynamically adjust the relative position of irregular targets to construct their contextual relationships and prioritize difficult boundary pixels to enhance foreground-background distinction. Specifically, we design a shuffle-then-reorder attention module to adjust the position of disrupted objects in the latent space using dynamic generation ratios, such that the vital dependencies among these random cropping targets could be well captured and preserved. Moreover, to improve the accuracy of boundary localization, we introduce a dual fine-grained boundary loss with scenario-adaptive weights to handle the ambiguity of the dual boundary at a fine-grained level, promoting the clarity and continuity of the obtained results. Extensive experimental results on benchmark dataset have demonstrated that DPBNet consistently outperforms existing state-of-the-art methods.
Dynamic Position Transformation and Boundary Refinement Network for Left Atrial Segmentation
[ "Xu, Fangqiang", "Tu, Wenxuan", "Feng, Fan", "Gunawardhana, Malitha", "Yang, Jiayuan", "Gu, Yun", "Zhao, Jichao" ]
Conference
2407.05505
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
647
null
https://papers.miccai.org/miccai-2024/paper/0527_paper.pdf
@InProceedings{ Tia_uniGradICON_MICCAI2024, author = { Tian, Lin and Greer, Hastings and Kwitt, Roland and Vialard, François-Xavier and San José Estépar, Raúl and Bouix, Sylvain and Rushmore, Richard and Niethammer, Marc }, title = { { uniGradICON: A Foundation Model for Medical Image Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Conventional medical image registration approaches directly optimize over the parameters of a transformation model. These approaches have been highly successful and are used generically for registrations of different anatomical regions. Recent deep registration networks are incredibly fast and accurate but are only trained for specific tasks. Hence, they are no longer generic registration approaches. We therefore propose uniGradICON, a first step toward a foundation model for registration, providing 1) great performance across multiple datasets, which is not feasible for current learning-based registration methods, 2) zero-shot capabilities for new registration tasks suitable for different acquisitions, anatomical regions, and modalities compared to the training dataset, and 3) a strong initialization for finetuning on out-of-distribution registration tasks. UniGradICON unifies the speed and accuracy benefits of learning-based registration algorithms with the generic applicability of conventional non-deep-learning approaches. We extensively trained and evaluated uniGradICON on twelve different public datasets. Our code and weights are available at https://github.com/uncbiag/uniGradICON.
uniGradICON: A Foundation Model for Medical Image Registration
[ "Tian, Lin", "Greer, Hastings", "Kwitt, Roland", "Vialard, François-Xavier", "San José Estépar, Raúl", "Bouix, Sylvain", "Rushmore, Richard", "Niethammer, Marc" ]
Conference
2403.05780
[ "https://github.com/uncbiag/uniGradICON" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
648
null
https://papers.miccai.org/miccai-2024/paper/0845_paper.pdf
@InProceedings{ Xia_Enhancing_MICCAI2024, author = { Xia, Yuexuan and Ma, Benteng and Dou, Qi and Xia, Yong }, title = { { Enhancing Federated Learning Performance Fairness via Collaboration Graph-based Reinforcement Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Federated learning has recently developed into a pivotal distributed learning paradigm, wherein a server aggregates numerous client-trained models into a global model without accessing any client data directly. It is acknowledged that statistical heterogeneity in clients’ local data impacts the pace of global model convergence, but it is often underestimated that this heterogeneity also engenders a biased global model with notable variance in accuracy across clients. Contextually, the prevalent solutions entail modifying the optimization objective. However, these solutions often overlook implicit relationships, such as the pairwise distances of site data distributions, which give rise to pairwise exclusive or synergistic optimization among client models. Such optimization conflicts compromise the efficacy of earlier methods, leading to performance imbalance or even negative transfer. To tackle this issue, we propose a novel aggregation strategy called Collaboration Graph-based Reinforcement Learning (FedGraphRL). By deploying a reinforcement learning (RL) agent equipped with a multi-layer adaptive graph convolutional network (AGCN) on the server side, we can learn a collaboration graph from client state vectors, revealing the collaborative relationships among clients during optimization. Guided by an introduced reward that balances fairness and performance, the agent allocates aggregation weights, thereby promoting automated decision-making and improvements in fairness. The experimental results on two real-world multi-center medical datasets suggest the effectiveness and superiority of the proposed FedGraphRL.
Enhancing Federated Learning Performance Fairness via Collaboration Graph-based Reinforcement Learning
[ "Xia, Yuexuan", "Ma, Benteng", "Dou, Qi", "Xia, Yong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
649
null
https://papers.miccai.org/miccai-2024/paper/1851_paper.pdf
@InProceedings{ Liu_LMUNet_MICCAI2024, author = { Liu, Anglin and Jia, Dengqiang and Sun, Kaicong and Meng, Runqi and Zhao, Meixin and Jiang, Yongluo and Dong, Zhijian and Gao, Yaozong and Shen, Dinggang }, title = { { LM-UNet: Whole-body PET-CT Lesion Segmentation with Dual-Modality-based Annotations Driven by Latent Mamba U-Net } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
PET-CT integrates metabolic information with anatomical structures and plays a vital role in revealing systemic metabolic abnormalities. Automatic segmentation of lesions from whole-body PET-CT could assist diagnostic workflow, support quantitative diagnosis, and increase the detection rate of microscopic lesions. However, automatic lesion segmentation from PET-CT images still faces challenges due to 1) limitations of single-modality-based annotations in public PET-CT datasets, 2) difficulty in distinguishing between pathological and physiological high metabolism, and 3) lack of effective utilization of CT’s structural information. To address these challenges, we propose a threefold strategy. First, we develop an in-house dataset with dual-modality-based annotations to improve clinical applicability; Second, we introduce a model called Latent Mamba U-Net (LM-UNet), to more accurately identify lesions by modeling long-range dependencies; Third, we employ an anatomical enhancement module to better integrate tissue structural features. Experimental results show that our comprehensive framework achieves improved performance over the state-of-the-art methods on both public and in-house datasets, further advancing the development of AI-assisted clinical applications. Our code is available at https://github.com/Joey-S-Liu/LM-UNet.
LM-UNet: Whole-body PET-CT Lesion Segmentation with Dual-Modality-based Annotations Driven by Latent Mamba U-Net
[ "Liu, Anglin", "Jia, Dengqiang", "Sun, Kaicong", "Meng, Runqi", "Zhao, Meixin", "Jiang, Yongluo", "Dong, Zhijian", "Gao, Yaozong", "Shen, Dinggang" ]
Conference
[ "https://github.com/Joey-S-Liu/LM-UNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
650
null
https://papers.miccai.org/miccai-2024/paper/3697_paper.pdf
@InProceedings{ Guo_FairQuantize_MICCAI2024, author = { Guo, Yuanbo and Jia, Zhenge and Hu, Jingtong and Shi, Yiyu }, title = { { FairQuantize: Achieving Fairness Through Weight Quantization for Dermatological Disease Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Recent studies have demonstrated that deep learning (DL) models for medical image classification may exhibit biases toward certain demographic attributes such as race, gender, and age. Existing bias mitigation strategies often require sensitive attributes for inference, which may not always be available, or achieve moderate fairness enhancement at the cost of significant accuracy decline. To overcome these obstacles, we propose FairQuantize, a novel approach that ensures fairness by quantizing model weights. We reveal that quantization can be used not as a tool for model compression but as a means to improve model fairness. It is based on the observation that different weights in a model impact performance on various demographic groups differently. FairQuantize selectively quantizes certain weights to enhance fairness while only marginally impacting accuracy. In addition, resulting quantized models can work without sensitive attributes as input. Experimental results on two skin disease datasets demonstrate that FairQuantize can significantly enhance fairness among sensitive attributes while minimizing the impact on overall performance.
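A greedy, assumption-heavy sketch of fairness-aware selective quantization: quantize one parameter tensor at a time and keep the change only if the accuracy gap between demographic groups shrinks without a large drop in mean accuracy. The 8-bit symmetric rounding and the acceptance rule are illustrative choices, not the paper's procedure.

```python
# Greedy sketch of selective weight quantization for fairness.
import copy
import torch
import torch.nn as nn

def quantize_tensor(w, bits=8):
    # symmetric uniform quantization to `bits` bits (illustrative)
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

def fair_quantize(model, eval_acc_per_group, max_acc_drop=0.01):
    # eval_acc_per_group(model) -> dict {group_name: validation accuracy}
    best = copy.deepcopy(model)
    accs = eval_acc_per_group(best)
    best_gap = max(accs.values()) - min(accs.values())
    best_mean = sum(accs.values()) / len(accs)
    for name, _ in model.named_parameters():
        cand = copy.deepcopy(best)
        with torch.no_grad():
            p = dict(cand.named_parameters())[name]
            p.copy_(quantize_tensor(p.data))            # quantize this tensor only
        a = eval_acc_per_group(cand)
        gap = max(a.values()) - min(a.values())
        mean = sum(a.values()) / len(a)
        if gap < best_gap and mean > best_mean - max_acc_drop:
            best, best_gap, best_mean = cand, gap, mean  # keep this layer quantized
    return best

# Toy usage with a stub evaluator; replace with real per-group validation accuracy.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
stub = lambda m: {"group_a": 0.80, "group_b": 0.74}
fair_net = fair_quantize(net, stub)
```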
FairQuantize: Achieving Fairness Through Weight Quantization for Dermatological Disease Diagnosis
[ "Guo, Yuanbo", "Jia, Zhenge", "Hu, Jingtong", "Shi, Yiyu" ]
Conference
[ "https://github.com/guoyb17/FairQuantize" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
651
null
https://papers.miccai.org/miccai-2024/paper/3154_paper.pdf
@InProceedings{ San_FissionFusion_MICCAI2024, author = { Sanjeev, Santosh and Zhaksylyk, Nuren and Almakky, Ibrahim and Hashmi, Anees Ur Rehman and Qazi, Mohammad Areeb and Yaqub, Mohammad }, title = { { FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
The scarcity of well-annotated medical datasets requires leveraging transfer learning from broader datasets like ImageNet or pre-trained models like CLIP. Model souping averages multiple fine-tuned models, aiming to improve performance on In-Domain (ID) tasks and enhance robustness on Out-of-Distribution (OOD) datasets. However, applying these methods to the medical imaging domain faces challenges and results in suboptimal performance. This is primarily due to differences in error surface characteristics that stem from data complexities such as heterogeneity, domain shift, class imbalance, and distributional shifts between training and testing phases. To address this issue, we propose a hierarchical merging approach that involves local and global aggregation of models at various levels based on models’ hyperparameter configurations. Furthermore, to alleviate the need for training a large number of models in the hyperparameter search, we introduce a computationally efficient method using a cyclical learning rate scheduler to produce multiple models for aggregation in the weight space. Our method demonstrates significant improvements over the model souping approach across multiple datasets (around 6% gain on the HAM10000 and CheXpert datasets) while maintaining low computational costs for model generation and selection. Moreover, we achieve better results on OOD datasets compared to model soups. Code is available at https://github.com/BioMedIA-MBZUAI/FissionFusion.
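Hierarchical souping can be pictured as two levels of plain weight averaging: first across checkpoints sharing a hyperparameter configuration (local soups), then across the local soups (global soup). Uniform averaging and the grouping key below are illustrative assumptions, not the paper's exact aggregation scheme.

```python
# Sketch of hierarchical weight averaging ("souping") over model checkpoints.
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0) for k in keys}

def hierarchical_soup(models_by_config):
    # models_by_config: {config_name: [state_dict, ...]}
    local_soups = [average_state_dicts(sds) for sds in models_by_config.values()]  # level 1
    return average_state_dicts(local_soups)                                        # level 2

# Toy usage with two hyperparameter configs and two fine-tuned checkpoints each.
def ckpt():
    return nn.Linear(4, 2).state_dict()

global_soup = hierarchical_soup({"lr_1e-4": [ckpt(), ckpt()], "lr_3e-4": [ckpt(), ckpt()]})
print({k: v.shape for k, v in global_soup.items()})
```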
FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis
[ "Sanjeev, Santosh", "Zhaksylyk, Nuren", "Almakky, Ibrahim", "Hashmi, Anees Ur Rehman", "Qazi, Mohammad Areeb", "Yaqub, Mohammad" ]
Conference
2403.13341
[ "https://github.com/BioMedIA-MBZUAI/FissionFusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
652
null
https://papers.miccai.org/miccai-2024/paper/0132_paper.pdf
@InProceedings{ Wan_3DGPS_MICCAI2024, author = { Wang, Ce and Huang, Xiaoyu and Kong, Yaqing and Li, Qian and Hao, You and Zhou, Xiang }, title = { { 3DGPS: A 3D Differentiable-Gaussian-based Planning Strategy for Liver Tumor Cryoablation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Effective preoperative planning is crucial for successful cryoablation of liver tumors. However, conventional planning methods rely heavily on clinicians’ experience, which may not always lead to an optimal solution due to the intricate 3D anatomical structures and clinical constraints. Many planning methods have been proposed, but they lack interactivity between multiple probes and are difficult to adapt to diverse clinical scenarios. To bridge the gap, we present a novel 3D Differentiable-Gaussian-based Planning Strategy (3DGPS) for cryoablation of liver tumors that considers both probe interactivity and several clinical constraints. Specifically, the problem is formulated as searching for the minimal circumscribed tumor ablation region, which is generated by multiple 3D ellipsoids, each from one cryoprobe. These ellipsoids are parameterized by differentiable Gaussians and optimized mainly within two stages, fitting and circumscribing, with the formulated clinical constraints in an end-to-end manner. Quantitative and qualitative experiments on LiTS and in-house datasets verify the effectiveness of 3DGPS.
3DGPS: A 3D Differentiable-Gaussian-based Planning Strategy for Liver Tumor Cryoablation
[ "Wang, Ce", "Huang, Xiaoyu", "Kong, Yaqing", "Li, Qian", "Hao, You", "Zhou, Xiang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
653
null
https://papers.miccai.org/miccai-2024/paper/0893_paper.pdf
@InProceedings{ Zhu_SRECNN_MICCAI2024, author = { Zhu, Yuliang and Cheng, Jing and Cui, Zhuo-Xu and Ren, Jianfeng and Wang, Chengbo and Liang, Dong }, title = { { SRE-CNN: A Spatiotemporal Rotation-Equivariant CNN for Cardiac Cine MR Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Dynamic MR images possess various transformation symmetries, including the rotation symmetry of local features within the image and along the temporal dimension. Utilizing these symmetries as prior knowledge can facilitate dynamic MR imaging with high spatiotemporal resolution. Equivariant CNNs are an effective tool to leverage such symmetry priors. However, current equivariant CNN methods fail to fully exploit these symmetry priors in dynamic MR imaging. In this work, we propose a novel framework of Spatiotemporal Rotation-Equivariant CNN (SRE-CNN), spanning from the underlying high-precision filter design to the construction of the temporal-equivariant convolutional module and imaging model, to fully harness the rotation symmetries inherent in dynamic MR images. The temporal-equivariant convolutional module enables exploitation of the rotation symmetries in both spatial and temporal dimensions, while the high-precision convolutional filter, based on a parametrization strategy, enhances the utilization of the rotation symmetry of local features to improve the reconstruction of detailed anatomical structures. Experiments conducted on highly undersampled dynamic cardiac cine data (up to 20X) have demonstrated the superior performance of our proposed approach, both quantitatively and qualitatively.
SRE-CNN: A Spatiotemporal Rotation-Equivariant CNN for Cardiac Cine MR Imaging
[ "Zhu, Yuliang", "Cheng, Jing", "Cui, Zhuo-Xu", "Ren, Jianfeng", "Wang, Chengbo", "Liang, Dong" ]
Conference
2409.08537
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
654
null
https://papers.miccai.org/miccai-2024/paper/3289_paper.pdf
@InProceedings{ Mor_MultiVarNet_MICCAI2024, author = { Morel, Louis-Oscar and Muzammel, Muhammad and Vinçon, Nathan and Derangère, Valentin and Ladoire, Sylvain and Rittscher, Jens }, title = { { MultiVarNet - Predicting Tumour Mutational status at the Protein Level } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Deep learning research in medical image analysis has demonstrated the capability of predicting molecular information, including tumour mutational status, from cell and tissue morphology extracted from standard histology images. While this capability holds the promise of revolutionising pathology, it is of critical importance to go beyond gene-level mutations and develop methodologies capable of predicting precise variant mutations. Only then will it be possible to support important clinical applications, including specific targeted therapies.
MultiVarNet - Predicting Tumour Mutational status at the Protein Level
[ "Morel, Louis-Oscar", "Muzammel, Muhammad", "Vinçon, Nathan", "Derangère, Valentin", "Ladoire, Sylvain", "Rittscher, Jens" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
655
null
https://papers.miccai.org/miccai-2024/paper/3611_paper.pdf
@InProceedings{ Has_Myocardial_MICCAI2024, author = { Hasny, Marta and Demirel, Omer B. and Amyar, Amine and Faghihroohi, Shahrooz and Nezafat, Reza }, title = { { Myocardial Scar Enhancement in LGE Cardiac MRI using Localized Diffusion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Late gadolinium enhancement (LGE) imaging is considered the gold-standard technique for evaluating myocardial scar/fibrosis. In LGE, an inversion pulse is played before imaging to create a contrast between healthy and scarred regions. However, several factors can impact the contrast quality, affecting diagnostic interpretation. Furthermore, the quantification of scar burden is highly dependent on image quality. Deep learning-based automated segmentation algorithms often fail when there is no clear boundary between healthy and scarred tissue. This study sought to develop a generative model for improving the contrast of healthy-scarred myocardium in LGE. We propose a localized conditional diffusion model, in which only a region-of-interest (ROI), in this case the heart, is subjected to the noising process, adapting the learning process to the local nature of our proposed enhancement. The scar-enhanced images, used as training targets, are generated via tissue-specific gamma correction. A segmentation model is trained and used to extract the heart regions. The inference speed is improved by leveraging partial diffusion, applying noise only up to an intermediate step. Furthermore, utilizing the stochastic nature of diffusion models, repeated inference leads to improved scar enhancement of ambiguous regions. The proposed algorithm was evaluated using LGE images collected in 929 patients with hypertrophic cardiomyopathy, in a multi-center, multi-vendor study. Our results show visual improvements of scar-healthy myocardium contrast. To further demonstrate the strength of our method, we evaluate our performance against various image enhancement models, where the proposed approach shows higher contrast enhancement. The code is available at: https://github.com/HMS-CardiacMR/Scar_enhancement.
Myocardial Scar Enhancement in LGE Cardiac MRI using Localized Diffusion
[ "Hasny, Marta", "Demirel, Omer B.", "Amyar, Amine", "Faghihroohi, Shahrooz", "Nezafat, Reza" ]
Conference
[ "https://github.com/HMS-CardiacMR/Scar_enhancement" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
656
null
https://papers.miccai.org/miccai-2024/paper/3444_paper.pdf
@InProceedings{ Wu_Efficient_MICCAI2024, author = { Wu, Chenwei and Restrepo, David and Shuai, Zitao and Liu, Zhongming and Shen, Liyue }, title = { { Efficient In-Context Medical Segmentation with Meta-driven Visual Prompt Selection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
In-context learning with Large Vision Models presents a promising avenue in medical image segmentation by reducing the reliance on extensive labeling. However, the performance of Large Vision Models in in-context learning highly depends on the choices of visual prompts and suffers from domain shifts. While existing works leveraging Large Vision Models for medical tasks have focused mainly on model-centric approaches like fine-tuning, we study an orthogonal data-centric perspective on how to select good visual prompts to facilitate generalization to the medical domain. In this work, we propose a label-efficient in-context medical segmentation method enabled by introducing a novel Meta-driven Visual Prompt Selection (MVPS) mechanism, where a prompt retriever obtained from a meta-learning framework actively selects the optimal images as prompts to promote model performance and generalizability. Evaluated on 8 datasets and 4 tasks across 3 medical imaging modalities, our proposed approach demonstrates consistent gains over existing methods under different scenarios, introducing both computational and label efficiency. Finally, we show that our approach is a flexible, finetuning-free module that could be easily plugged into different backbones and combined with other model-centric approaches.
Efficient In-Context Medical Segmentation with Meta-driven Visual Prompt Selection
[ "Wu, Chenwei", "Restrepo, David", "Shuai, Zitao", "Liu, Zhongming", "Shen, Liyue" ]
Conference
2407.11188
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
657
null
https://papers.miccai.org/miccai-2024/paper/1325_paper.pdf
@InProceedings{ Ger_Interpretablebydesign_MICCAI2024, author = { Gervelmeyer, Julius and Müller, Sarah and Djoumessi, Kerol and Merle, David and Clark, Simon J. and Koch, Lisa and Berens, Philipp }, title = { { Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
In the elderly, degenerative diseases often develop differently over time for individual patients. For optimal treatment, physicians and patients would like to know how much time is left for them until symptoms reach a certain stage. However, compared to simple disease detection tasks, disease progression modeling has received much less attention. In addition, most existing models are black-box models which provide little insight into the mechanisms driving the prediction. Here, we introduce an interpretable-by-design survival model to predict the progression of age-related macular degeneration (AMD) from fundus images. Our model not only achieves state-of-the-art prediction performance compared to black-box models but also provides a sparse map of local evidence of AMD progression for individual patients. Our evidence map faithfully reflects the decision-making process of the model in contrast to widely used post-hoc saliency methods. Furthermore, we show that the identified regions mostly align with established clinical AMD progression markers. We believe that our method may help to inform treatment decisions and may lead to better insights into imaging biomarkers indicative of disease progression. The project’s code is available at github.com/berenslab/interpretable-deep-survival-analysis.
Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling
[ "Gervelmeyer, Julius", "Müller, Sarah", "Djoumessi, Kerol", "Merle, David", "Clark, Simon J.", "Koch, Lisa", "Berens, Philipp" ]
Conference
[ "https://github.com/berenslab/interpretable-deep-survival-analysis" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
658
null
https://papers.miccai.org/miccai-2024/paper/1946_paper.pdf
@InProceedings{ Sør_Spatiotemporal_MICCAI2024, author = { Sørensen, Kristine and Diez, Paula and Margeta, Jan and El Youssef, Yasmin and Pham, Michael and Pedersen, Jonas Jalili and Kühl, Tobias and de Backer, Ole and Kofoed, Klaus and Camara, Oscar and Paulsen, Rasmus }, title = { { Spatio-temporal neural distance fields for conditional generative modeling of the heart } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
The rhythmic pumping motion of the heart stands as a cornerstone in life, as it circulates blood to the entire human body through a series of carefully timed contractions of the individual chambers. Changes in the size, shape and movement of the chambers can be important markers for cardiac disease, and modeling this in relation to clinical demography or disease is therefore of interest. Existing methods for spatio-temporal modeling of the human heart require shape correspondence over time or suffer from large memory requirements, making them difficult to use for complex anatomies. We introduce a novel conditional generative model, where the shape and movement are modeled implicitly in the form of a spatio-temporal neural distance field and conditioned on clinical demography. The model is based on an auto-decoder architecture and aims to disentangle the individual variations from those related to the clinical demography. It is tested on the left atrium (including the left atrial appendage), where it outperforms current state-of-the-art methods for anatomical sequence completion and generates synthetic sequences that realistically mimic the shape and motion of the real left atrium. In practice, this means we can infer functional measurements from a static image, generate synthetic populations with specified demography or disease and investigate how non-imaging clinical data affect the shape and motion of cardiac anatomies.
Spatio-temporal neural distance fields for conditional generative modeling of the heart
[ "Sørensen, Kristine", "Diez, Paula", "Margeta, Jan", "El Youssef, Yasmin", "Pham, Michael", "Pedersen, Jonas Jalili", "Kühl, Tobias", "de Backer, Ole", "Kofoed, Klaus", "Camara, Oscar", "Paulsen, Rasmus" ]
Conference
2407.10663
[ "https://github.com/kristineaajuhl/spatio_temporal_generative_cardiac_model.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
659
null
https://papers.miccai.org/miccai-2024/paper/1005_paper.pdf
@InProceedings{ El_AUniversal_MICCAI2024, author = { El Amrani, Nafie and Cao, Dongliang and Bernard, Florian }, title = { { A Universal and Flexible Framework for Unsupervised Statistical Shape Model Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
We introduce a novel unsupervised deep learning framework for constructing statistical shape models (SSMs). Although unsupervised learning-based 3D shape matching methods have made a major leap forward in recent years, the correspondence quality of existing methods does not meet the demanding requirements necessary for the construction of SSMs of complex anatomical structures. We address this shortcoming by proposing a novel deformation coherency loss to effectively enforce smooth and high-quality correspondences during neural network training. We demonstrate that our framework outperforms existing methods in creating high-quality SSMs by conducting extensive experiments on five challenging datasets with varying anatomical complexities. Our proposed method sets the new state of the art in unsupervised SSM learning, offering a universal solution that is both flexible and reliable. Our source code is publicly available at https://github.com/NafieAmrani/FUSS.
A Universal and Flexible Framework for Unsupervised Statistical Shape Model Learning
[ "El Amrani, Nafie", "Cao, Dongliang", "Bernard, Florian" ]
Conference
[ "https://github.com/NafieAmrani/FUSS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
660
null
https://papers.miccai.org/miccai-2024/paper/0338_paper.pdf
@InProceedings{ Par_Towards_MICCAI2024, author = { Park, Jihun and Hong, Jiuk and Yoon, Jihun and Park, Bokyung and Choi, Min-Kook and Jung, Heechul }, title = { { Towards Precise Pose Estimation in Robotic Surgery: Introducing Occlusion-Aware Loss } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Accurate pose estimation of surgical instruments is crucial for analyzing robotic surgery videos using computer vision techniques. However, the scarcity of suitable public datasets poses a challenge in this regard. To address this issue, we have developed a new private dataset extracted from real gastric cancer surgery videos. The primary objective of our research is to develop a more sophisticated pose estimation algorithm for surgical instruments using this private dataset. Additionally, we introduce a novel loss function aimed at enhancing the accuracy of pose estimation, with a specific emphasis on minimizing root mean squared error. Leveraging the YOLOv8 model, our approach significantly outperforms existing methods and state-of-the-art techniques, thanks to the enhanced occlusion-aware loss function. These findings hold promise for improving the precision and safety of robotic-assisted surgeries.
Towards Precise Pose Estimation in Robotic Surgery: Introducing Occlusion-Aware Loss
[ "Park, Jihun", "Hong, Jiuk", "Yoon, Jihun", "Park, Bokyung", "Choi, Min-Kook", "Jung, Heechul" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
661
null
https://papers.miccai.org/miccai-2024/paper/3700_paper.pdf
@InProceedings{ Mat_Ocular_MICCAI2024, author = { Matinfar, Sasan and Dehghani, Shervin and Sommersperger, Michael and Faridpooya, Koorosh and Fairhurst, Merle and Navab, Nassir }, title = { { Ocular Stethoscope: Auditory Support for Retinal Membrane Peeling } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
The peeling of an epiretinal membrane (ERM) is a complex procedure wherein a membrane, only a few micrometers thick, that develops on the retinal surface is delicately removed using microsurgical forceps. Insights regarding small gaps between the ERM and the retinal tissue are valuable for surgical decision-making, particularly in determining a suitable location to initiate the peeling. Depth-resolved imaging of the retina provided by intraoperative Optical Coherence Tomography (iOCT) enables visualization of this gap and supports decision-making. The common presentation of iOCT images during surgery in juxtaposition with the microscope view, however, requires surgeons to move their gaze from the surgical site, affecting proprioception and cognitive load. In this work, we introduce an alternative method utilizing auditory feedback as a sensory channel, designed to intuitively enhance the perception of ERM elevations. Our approach establishes an innovative unsupervised mapping between real-time OCT A-scans and the parameters of an acoustic model. This acoustic model conveys the physical characteristics of tissue structure through distinctive sound textures, at microtemporal resolution. Our experiments show that even subtle ERM elevations can be sonified. Expert clinician feedback confirms the impact of our method, and an initial user study with 15 participants demonstrates the potential to perceive the gap between the ERM and the retinal tissue exclusively through auditory cues.
Ocular Stethoscope: Auditory Support for Retinal Membrane Peeling
[ "Matinfar, Sasan", "Dehghani, Shervin", "Sommersperger, Michael", "Faridpooya, Koorosh", "Fairhurst, Merle", "Navab, Nassir" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
662
null
https://papers.miccai.org/miccai-2024/paper/3212_paper.pdf
@InProceedings{ Al_IMMoCo_MICCAI2024, author = { Al-Haj Hemidi, Ziad and Weihsbach, Christian and Heinrich, Mattias P. }, title = { { IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided Implicit Neural Representations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Motion artifacts in Magnetic Resonance Imaging (MRI) arise due to relatively long acquisition times and can compromise the clinical utility of acquired images. Traditional motion correction methods often fail to address severe motion, leading to distorted and unreliable results. Deep Learning (DL) alleviated such pitfalls through generalization, at the cost of vanishing structures and hallucinations, making it challenging to apply in the medical field where hallucinated structures can tremendously impact the diagnostic outcome. In this work, we present an instance-wise motion correction pipeline that leverages motion-guided Implicit Neural Representations (INRs) to mitigate the impact of motion artifacts while retaining anatomical structure. Our method is evaluated using the NYU fastMRI dataset with different degrees of simulated motion severity. For the correction alone, we can improve over state-of-the-art image reconstruction methods by +5% SSIM, +5 dB PSNR, and +14% HaarPSI. Clinical relevance is demonstrated by a subsequent experiment, where our method improves classification outcomes by at least +1.5 accuracy percentage points compared to motion-corrupted images.
IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided Implicit Neural Representations
[ "Al-Haj Hemidi, Ziad", "Weihsbach, Christian", "Heinrich, Mattias P." ]
Conference
2407.02974
[ "https://github.com/multimodallearning/MICCAI24_IMMoCo.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
663
null
https://papers.miccai.org/miccai-2024/paper/1551_paper.pdf
@InProceedings{ Ola_SpeChrOmics_MICCAI2024, author = { Oladokun, Ajibola S. and Malila, Bessie and Campello, Victor M. and Shey, Muki and Mutsvangwa, Tinashe E. M. }, title = { { SpeChrOmics: A Biomarker Characterization Framework for Medical Hyperspectral Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
We propose SpeChrOmics, a characterization framework for generating potential biomarkers of pathologies from hyperspectral images of body tissue. We test our model using a novel clinical application – hyperspectral imaging for the diagnosis of latent tuberculosis infection (LTBI). This is a neglected disease state predominantly prevalent in sub-Saharan Africa. Our model identified water, deoxyhemoglobin, and pheomelanin as potential chromophore biomarkers for LTBI with a mean cross-validation accuracy of 96%. Our framework can potentially be used for pathology characterization in other medical applications.
SpeChrOmics: A Biomarker Characterization Framework for Medical Hyperspectral Imaging
[ "Oladokun, Ajibola S.", "Malila, Bessie", "Campello, Victor M.", "Shey, Muki", "Mutsvangwa, Tinashe E. M." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
664
null
https://papers.miccai.org/miccai-2024/paper/3085_paper.pdf
@InProceedings{ Mou_Evaluating_MICCAI2024, author = { Mouheb, Kaouther and Elbatel, Marawan and Klein, Stefan and Bron, Esther E. }, title = { { Evaluating the Fairness of Neural Collapse in Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Deep learning has achieved impressive performance across various medical imaging tasks. However, its inherent bias against specific groups hinders its clinical applicability in equitable healthcare systems. A recently discovered phenomenon, Neural Collapse (NC), has shown potential in improving the generalization of state-of-the-art deep learning models. Nonetheless, its implications on bias in medical imaging remain unexplored. Our study investigates deep learning fairness through the lens of NC. We analyze the training dynamics of models as they approach NC when training using biased datasets, and examine the subsequent impact on test performance, specifically focusing on label bias. We find that biased training initially results in different NC configurations across subgroups, before converging to a final NC solution by memorizing all data samples. Through extensive experiments on three medical imaging datasets—PAPILA, HAM10000, and CheXpert—we find that in biased settings, NC can lead to a significant drop in F1 score across all subgroups. Our code is available at https://gitlab.com/radiology/neuro/neural-collapse-fairness.
Evaluating the Fairness of Neural Collapse in Medical Image Classification
[ "Mouheb, Kaouther", "Elbatel, Marawan", "Klein, Stefan", "Bron, Esther E." ]
Conference
2407.05843
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
665
null
https://papers.miccai.org/miccai-2024/paper/1022_paper.pdf
@InProceedings{ Liu_PAMIL_MICCAI2024, author = { Liu, Jiashuai and Mao, Anyu and Niu, Yi and Zhang, Xianli and Gong, Tieliang and Li, Chen and Gao, Zeyu }, title = { { PAMIL: Prototype Attention-based Multiple Instance Learning for Whole Slide Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Digital pathology images are not only crucial for diagnosing cancer but also play a significant role in treatment planning and research into disease mechanisms. The multiple instance learning (MIL) technique provides an effective weakly-supervised methodology for analyzing gigapixel Whole Slide Images (WSIs). Recent advancements in MIL approaches have predominantly focused on predicting a singular diagnostic label for each WSI, simultaneously enhancing interpretability via attention mechanisms. However, given the heterogeneity of tumors, each WSI may contain multiple histotypes. Also, the generated attention maps often fail to offer a comprehensible explanation of the underlying reasoning process. These constraints limit the potential applicability of MIL-based methods in clinical settings. In this paper, we propose a Prototype Attention-based Multiple Instance Learning (PAMIL) method, designed to improve the model’s reasoning interpretability without compromising its classification performance at the WSI level. PAMIL merges prototype learning with attention mechanisms, enabling the model to quantify the similarity between prototypes and instances, thereby providing interpretability at the instance level. Specifically, PAMIL is equipped with two branches that provide prototype- and instance-level attention scores, which are aggregated to derive bag-level predictions. Extensive experiments are conducted on four datasets with two diverse WSI classification tasks, demonstrating the effectiveness and interpretability of our PAMIL.
PAMIL: Prototype Attention-based Multiple Instance Learning for Whole Slide Image Classification
[ "Liu, Jiashuai", "Mao, Anyu", "Niu, Yi", "Zhang, Xianli", "Gong, Tieliang", "Li, Chen", "Gao, Zeyu" ]
Conference
[ "https://github.com/Jiashuai-Liu/PAMIL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
666
null
https://papers.miccai.org/miccai-2024/paper/3403_paper.pdf
@InProceedings{ He_PitVQA_MICCAI2024, author = { He, Runlong and Xu, Mengya and Das, Adrito and Khan, Danyal Z. and Bano, Sophia and Marcus, Hani J. and Stoyanov, Danail and Clarkson, Matthew J. and Islam, Mobarakol }, title = { { PitVQA: Image-grounded Text Embedding LLM for Visual Question Answering in Pituitary Surgery } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Visual Question Answering (VQA) within the surgical domain, utilizing Large Language Models (LLMs), offers a distinct opportunity to improve intra-operative decision-making and facilitate intuitive surgeon-AI interaction. However, the development of LLMs for surgical VQA is hindered by the scarcity of diverse and extensive datasets with complex reasoning tasks. Moreover, contextual fusion of the image and text modalities remains an open research challenge due to the inherent differences between these two types of information and the complexity involved in aligning them. This paper introduces PitVQA, a novel dataset specifically designed for VQA in endonasal pituitary surgery and PitVQA-Net, an adaptation of the GPT2 with a novel image-grounded text embedding for surgical VQA. PitVQA comprises 25 procedural videos and a rich collection of question-answer pairs spanning crucial surgical aspects such as phase and step recognition, context understanding, tool detection and localization, and tool-tissue interactions. PitVQA-Net consists of a novel image-grounded text embedding that projects image and text features into a shared embedding space and GPT2 Backbone with an excitation block classification head to generate contextually relevant answers within the complex domain of endonasal pituitary surgery. Our image-grounded text embedding leverages joint embedding, cross-attention and contextual representation to understand the contextual relationship between questions and surgical images. We demonstrate the effectiveness of PitVQA-Net on both the PitVQA and the publicly available EndoVis18-VQA dataset, achieving improvements in balanced accuracy of 8% and 9% over the most recent baselines, respectively.
PitVQA: Image-grounded Text Embedding LLM for Visual Question Answering in Pituitary Surgery
[ "He, Runlong", "Xu, Mengya", "Das, Adrito", "Khan, Danyal Z.", "Bano, Sophia", "Marcus, Hani J.", "Stoyanov, Danail", "Clarkson, Matthew J.", "Islam, Mobarakol" ]
Conference
2405.13949
[ "https://github.com/mobarakol/PitVQA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
667
null
https://papers.miccai.org/miccai-2024/paper/1549_paper.pdf
@InProceedings{ Wei_Representing_MICCAI2024, author = { Wei, Ziquan and Dan, Tingting and Ding, Jiaqi and Laurienti, Paul and Wu, Guorong }, title = { { Representing Functional Connectivity with Structural Detour: A New Perspective to Decipher Structure-Function Coupling Mechanism } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Modern neuroimaging technologies set the stage for studying structural connectivity (SC) and functional connectivity (FC) in vivo. Due to distinct biological wiring underpinnings in SC and FC, however, it is challenging to understand their coupling mechanism using statistical association approaches. We seek to answer this challenging neuroscience question through the lens of a novel perspective rooted in network topology. Specifically, our assumption is that each FC instance is either locally supported by the direct link of SC or collaboratively sustained by a group of alternative SC pathways which form a topological notion of “detour”. In this regard, we propose a new connectomic representation, coined detour connectivity (DC), to characterize the complex relationship between SC and FC by presenting direct FC with the weighted connectivity strength along indirect SC routes. Furthermore, we present the SC-FC Detour Network (SFDN), a graph neural network that integrates DC embedding through a self-attention mechanism, to optimize the detour to the extent that the coupling of SC and FC is closely aligned with the evolution of cognitive states. We have applied the concept of DC in network community detection, while the clinical value of our SFDN is evaluated in cognitive task recognition and early diagnosis of Alzheimer’s disease. After benchmarking on three public datasets under various brain parcellations, our detour-based computational approach shows significant improvement over current state-of-the-art counterpart methods.
Representing Functional Connectivity with Structural Detour: A New Perspective to Decipher Structure-Function Coupling Mechanism
[ "Wei, Ziquan", "Dan, Tingting", "Ding, Jiaqi", "Laurienti, Paul", "Wu, Guorong" ]
Conference
[ "https://github.com/Chrisa142857/SC-FC-Detour" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
668
null
https://papers.miccai.org/miccai-2024/paper/1085_paper.pdf
@InProceedings{ Zha_Heteroscedastic_MICCAI2024, author = { Zhang, Xiaoran and Pak, Daniel H. and Ahn, Shawn S. and Li, Xiaoxiao and You, Chenyu and Staib, Lawrence H. and Sinusas, Albert J. and Wong, Alex and Duncan, James S. }, title = { { Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Deep learning methods for unsupervised registration often rely on objectives that assume a uniform noise level across the spatial domain (e.g. mean-squared error loss), but noise distributions are often heteroscedastic and input-dependent in real-world medical images. Thus, this assumption often leads to degradation in registration performance, mainly due to the undesired influence of noise-induced outliers. To mitigate this, we propose a framework for heteroscedastic image uncertainty estimation that can adaptively reduce the influence of regions with high uncertainty during unsupervised registration. The framework consists of a collaborative training strategy for the displacement and variance estimators, and a novel image fidelity weighting scheme utilizing signal-to-noise ratios. Our approach prevents the model from being driven away by spurious gradients caused by the simplified homoscedastic assumption, leading to more accurate displacement estimation. To illustrate its versatility and effectiveness, we tested our framework on two representative registration architectures across three medical image datasets. Our method consistently outperforms baselines and produces sensible uncertainty estimates. The code is publicly available at https://voldemort108x.github.io/hetero_uncertainty/.
Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration
[ "Zhang, Xiaoran", "Pak, Daniel H.", "Ahn, Shawn S.", "Li, Xiaoxiao", "You, Chenyu", "Staib, Lawrence H.", "Sinusas, Albert J.", "Wong, Alex", "Duncan, James S." ]
Conference
2312.00836
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
669
null
https://papers.miccai.org/miccai-2024/paper/2715_paper.pdf
@InProceedings{ Sun_ANew_MICCAI2024, author = { Sun, Minghao and Zhou, Tian and Jiang, Chenghui and Lv, Xiaodan and Yu, Han }, title = { { A New Cine-MRI Segmentation Method of Tongue Dorsum for Postoperative Swallowing Function Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Advantages of cine-MRI include high spatial-temporal resolution and the absence of radiation, and the technique has become a new method for analyzing and assessing the swallowing function of patients with head and neck tumors. To reduce the labor of physicians and improve the robustness of labeling the cine-MRI images, we propose a new swallowing analysis method based on a revised cine-MRI segmentation model. This method aims to automate the calculation of tongue dorsum motion parameters in the oral and pharyngeal phases of swallowing, followed by a quantitative analysis. Firstly, based on manually annotated swallowing structures, we propose a method for calculating tongue dorsum motion parameters, which enables the quantitative analysis of swallowing capability. Secondly, a spatial-temporal hybrid model composed of convolution and a temporal transformer is proposed to extract the tongue dorsum mask sequence from a swallowing cycle MRI sequence. Finally, to fully exploit the advantages of cine-MRI, a Multi-head Temporal Self-Attention (MTSA) mechanism is introduced, which establishes connections among frames and enhances the segmentation results of individual frames. A Temporal Relative Positional Encoding (TRPE) is designed to incorporate the temporal information of different swallowing stages into the network, which enhances the network’s understanding of the swallowing process. Experimental results show that the proposed segmentation model achieves a 1.45% improvement in Dice Score compared to the state-of-the-art methods, and the interclass correlation coefficient (ICC) of the displacement data of swallowing feature points obtained respectively from the model mask and physician annotation exceeds 90%. Our code is available at: https://github.com/MinghaoSam/SwallowingFunctionAnalysis.
A New Cine-MRI Segmentation Method of Tongue Dorsum for Postoperative Swallowing Function Analysis
[ "Sun, Minghao", "Zhou, Tian", "Jiang, Chenghui", "Lv, Xiaodan", "Yu, Han" ]
Conference
[ "https://github.com/MinghaoSam/SwallowingFunctionAnalysis" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
670
null
https://papers.miccai.org/miccai-2024/paper/0817_paper.pdf
@InProceedings{ Zha_Highresolution_MICCAI2024, author = { Zhang, Wei and Hui, Tik Ho and Tse, Pui Ying and Hill, Fraser and Lau, Condon and Li, Xinyue }, title = { { High-resolution Medical Image Translation via Patch Alignment-Based Bidirectional Contrastive Learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Pathology image assessment plays a crucial role in disease diagnosis and treatment. In this study, we propose a Patch alignment-based Paired medical image-to-image Translation (PPT) model that takes the Hematoxylin and Eosin (H&E) stained image as input and generates the corresponding Immunohistochemistry (IHC) stained image in seconds, which can bypass the laborious and time-consuming procedures of IHC staining and facilitate timely and accurate pathology assessment. First, our proposed PPT model introduces FocalNCE loss in patch-wise bidirectional contrastive learning to ensure high consistency between input and output images. Second, we propose a novel patch alignment loss to address the commonly observed misalignment issue in paired medical image datasets. Third, we incorporate content and frequency loss to produce IHC stained images with finer details. Extensive experiments show that our method outperforms state-of-the-art methods, demonstrates clinical utility in pathology expert evaluation using our dataset and achieves competitive performance in two public breast cancer datasets. Lastly, we release our H&E to IHC image Translation (HIT) dataset of canine lymphoma with paired H&E-CD3 and H&E-PAX5 images, which is the first paired pathological image dataset with a high resolution of 2048×2048. Our code and dataset are available at https://github.com/coffeeNtv/PPT.
High-resolution Medical Image Translation via Patch Alignment-Based Bidirectional Contrastive Learning
[ "Zhang, Wei", "Hui, Tik Ho", "Tse, Pui Ying", "Hill, Fraser", "Lau, Condon", "Li, Xinyue" ]
Conference
[ "https://github.com/coffeeNtv/PPT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
671
null
https://papers.miccai.org/miccai-2024/paper/3665_paper.pdf
@InProceedings{ Bha_Analyzing_MICCAI2024, author = { Bhattarai, Ashuta and Jin, Jing and Kambhamettu, Chandra }, title = { { Analyzing Adjacent B-Scans to Localize Sickle Cell Retinopathy In OCTs } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Imaging modalities such as optical coherence tomography (OCT) are core components of medical image diagnosis. Deep learning-based object detection and segmentation models have proven efficient and reliable in this field. OCT images have been extensively used in deep learning-based applications, such as retinal layer segmentation and retinal disease detection for conditions such as age-related macular degeneration (AMD) and diabetic macular edema (DME). However, sickle-cell retinopathy (SCR) has yet to receive significant research attention in the deep-learning community, despite its detrimental effects. To address this gap, we present a new detection network called the Cross Scan Attention Transformer (CSAT), which is specifically designed to identify minute irregularities such as SCR in cross-sectional images such as OCTs. Our method employs a contrastive learning framework to pre-train OCT images and a transformer-based detection network that takes advantage of the volumetric nature of OCT scans. Our research demonstrates the effectiveness of the proposed network in detecting SCR from OCT images, with superior results compared to popular object detection networks such as Faster-RCNN and Detection Transformer (DETR). Our code can be found at github.com/VimsLab/CSAT.
Analyzing Adjacent B-Scans to Localize Sickle Cell Retinopathy In OCTs
[ "Bhattarai, Ashuta", "Jin, Jing", "Kambhamettu, Chandra" ]
Conference
[ "https://github.com/VimsLab/CSAT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
672
null
https://papers.miccai.org/miccai-2024/paper/2298_paper.pdf
@InProceedings{ Bon_Gaussian_MICCAI2024, author = { Bonilla, Sierra and Zhang, Shuai and Psychogyios, Dimitrios and Stoyanov, Danail and Vasconcelos, Francisco and Bano, Sophia }, title = { { Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Within colorectal cancer diagnostics, conventional colonoscopy techniques face critical limitations, including a limited field of view and a lack of depth information, which can impede the detection of pre-cancerous lesions. Current methods struggle to provide comprehensive and accurate 3D reconstructions of the colonic surface which can help minimize the missing regions and reinspection for pre-cancerous polyps. Addressing this, we introduce “Gaussian Pancakes”, a method that leverages 3D Gaussian Splatting (3D GS) combined with a Recurrent Neural Network-based Simultaneous Localization and Mapping (RNNSLAM) system. By introducing geometric and depth regularization into the 3D GS framework, our approach ensures more accurate alignment of Gaussians with the colon surface, resulting in smoother 3D reconstructions with novel viewing of detailed textures and structures. Evaluations across three diverse datasets show that Gaussian Pancakes enhances novel view synthesis quality, surpassing current leading methods with an 18% boost in PSNR and a 16% improvement in SSIM. It also delivers over 100× faster rendering and more than 10× shorter training times, making it a practical tool for real-time applications. Hence, this holds promise for achieving clinical translation for better detection and diagnosis of colorectal cancer.
Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction
[ "Bonilla, Sierra", "Zhang, Shuai", "Psychogyios, Dimitrios", "Stoyanov, Danail", "Vasconcelos, Francisco", "Bano, Sophia" ]
Conference
2404.06128
[ "https://github.com/smbonilla/GaussianPancakes" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
673
null
https://papers.miccai.org/miccai-2024/paper/2565_paper.pdf
@InProceedings{ Sun_LOMIAT_MICCAI2024, author = { Sun, Yuchen and Li, Kunwei and Chen, Duanduan and Hu, Yi and Zhang, Shuaitong }, title = { { LOMIA-T: A Transformer-based LOngitudinal Medical Image Analysis framework for predicting treatment response of esophageal cancer } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Deep learning models based on medical images have made significant strides in predicting treatment outcomes. However, previous methods have primarily concentrated on single time-point images, neglecting the temporal dynamics and changes inherent in longitudinal medical images. Thus, we propose a Transformer-based longitudinal image analysis framework (LOMIA-T) to contrast and fuse latent representations from pre- and post-treatment medical images for predicting treatment response. Specifically, we first design a treatment response-based contrastive loss to enhance latent representation by discerning evolutionary processes across various disease stages. Then, we integrate latent representations from pre- and post-treatment CT images using a cross-attention mechanism. Considering the redundancy in the dual-branch output features induced by the cross-attention mechanism, we propose a clinically interpretable feature fusion strategy to predict treatment response. Experimentally, the proposed framework outperforms several state-of-the-art longitudinal image analysis methods on an in-house Esophageal Squamous Cell Carcinoma (ESCC) dataset, encompassing 170 pre- and post-treatment contrast-enhanced CT image pairs from ESCC patients who underwent neoadjuvant chemoradiotherapy. Ablation experiments validate the efficacy of the proposed treatment response-based contrastive loss and feature fusion strategy. The codes will be made available at https://github.com/syc19074115/LOMIA-T.
LOMIA-T: A Transformer-based LOngitudinal Medical Image Analysis framework for predicting treatment response of esophageal cancer
[ "Sun, Yuchen", "Li, Kunwei", "Chen, Duanduan", "Hu, Yi", "Zhang, Shuaitong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
674
null
https://papers.miccai.org/miccai-2024/paper/3105_paper.pdf
@InProceedings{ Li_FairDiff_MICCAI2024, author = { Li, Wenyi and Xu, Haoran and Zhang, Guiyu and Gao, Huan-ang and Gao, Mingju and Wang, Mengyu and Zhao, Hao }, title = { { FairDiff: Fair Segmentation with Point-Image Diffusion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Fairness is an important topic for medical image analysis, driven by the challenge of unbalanced training data among diverse target groups and the societal demand for equitable medical quality. In response to this issue, our research adopts a data-driven strategy: enhancing data balance by integrating synthetic images. However, in terms of generating synthetic images, previous works either lack paired labels or fail to precisely control the boundaries of synthetic images to be aligned with those labels. To address this, we formulate the problem in a joint optimization manner, in which three networks are optimized towards the goal of empirical risk minimization and fairness maximization. On the implementation side, our solution features an innovative Point-Image Diffusion architecture, which leverages 3D point clouds for improved control over mask boundaries through a point-mask-image synthesis pipeline. This method significantly outperforms existing techniques in synthesizing scanning laser ophthalmoscopy (SLO) fundus images. By combining synthetic data with real data during the training phase using a proposed Equal Scale approach, our model achieves superior fairness segmentation performance compared to the state-of-the-art fairness learning models. Code is available at https://github.com/wenyi-li/FairDiff.
FairDiff: Fair Segmentation with Point-Image Diffusion
[ "Li, Wenyi", "Xu, Haoran", "Zhang, Guiyu", "Gao, Huan-ang", "Gao, Mingju", "Wang, Mengyu", "Zhao, Hao" ]
Conference
2407.06250
[ "https://github.com/wenyi-li/FairDiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
675
null
https://papers.miccai.org/miccai-2024/paper/0300_paper.pdf
@InProceedings{ Lei_Epicardium_MICCAI2024, author = { Lei, Long and Zhou, Jun and Pei, Jialun and Zhao, Baoliang and Jin, Yueming and Teoh, Yuen-Chun Jeremy and Qin, Jing and Heng, Pheng-Ann }, title = { { Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Real-time fusion of intraoperative 2D ultrasound images and the preoperative 3D ultrasound volume based on the frame-to-volume registration can provide a comprehensive guidance view for cardiac interventional surgery. However, cardiac ultrasound images are characterized by a low signal-to-noise ratio and small differences between adjacent frames, coupled with significant dimension variations between 2D frames and 3D volumes to be registered, resulting in real-time and accurate cardiac ultrasound frame-to-volume registration being a very challenging task. This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg. Specifically, the proposed model leverages epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of enhanced features, thereby boosting the cross-dimensional matching effectiveness of low-quality ultrasound modalities. We further embed an inter-frame discriminative regularization term within the hybrid supervised learning to increase the distinction between adjacent slices in the same ultrasound volume to ensure registration stability. Experimental results on the reprocessed CAMUS dataset demonstrate that our CU-Reg surpasses existing methods in terms of registration accuracy and efficiency, meeting the guidance requirements of clinical cardiac interventional surgery. Our code is available at https://github.com/LLEIHIT/CU-Reg.
Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration
[ "Lei, Long", "Zhou, Jun", "Pei, Jialun", "Zhao, Baoliang", "Jin, Yueming", "Teoh, Yuen-Chun Jeremy", "Qin, Jing", "Heng, Pheng-Ann" ]
Conference
2406.14534
[ "https://github.com/LLEIHIT/CU-Reg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
676
null
https://papers.miccai.org/miccai-2024/paper/2459_paper.pdf
@InProceedings{ Mej_Enhancing_MICCAI2024, author = { Mejia, Gabriel and Ruiz, Daniela and Cárdenas, Paula and Manrique, Leonardo and Vega, Daniela and Arbeláez, Pablo }, title = { { Enhancing Gene Expression Prediction from Histology Images with Spatial Transcriptomics Completion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Spatial Transcriptomics is a novel technology that aligns histology images with spatially resolved gene expression profiles. Although groundbreaking, it struggles with gene capture, yielding high corruption in the acquired data. Given potential applications, recent efforts have focused on predicting transcriptomic profiles solely from histology images. However, differences in databases, preprocessing techniques, and training hyperparameters hinder a fair comparison between methods. To address these challenges, we present a systematically curated and processed database collected from 26 public sources, representing an 8.6-fold increase compared to previous works. Additionally, we propose a state-of-the-art transformer-based completion technique for inferring missing gene expression, which significantly boosts the performance of transcriptomic profile predictions across all datasets. Altogether, our contributions constitute the most comprehensive benchmark of gene expression prediction from histology images to date and a stepping stone for future research on spatial transcriptomics.
Enhancing Gene Expression Prediction from Histology Images with Spatial Transcriptomics Completion
[ "Mejia, Gabriel", "Ruiz, Daniela", "Cárdenas, Paula", "Manrique, Leonardo", "Vega, Daniela", "Arbeláez, Pablo" ]
Conference
[ "https://github.com/BCV-Uniandes/SpaRED" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
677
null
https://papers.miccai.org/miccai-2024/paper/2248_paper.pdf
@InProceedings{ Bra_BackMix_MICCAI2024, author = { Bransby, Kit M. and Beqiri, Arian and Cho Kim, Woo-Jin and Oliveira, Jorge and Chartsias, Agisilaos and Gomez, Alberto }, title = { { BackMix: Mitigating Shortcut Learning in Echocardiography with Minimal Supervision } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Neural networks can learn spurious correlations that lead to the correct prediction in a validation set, but generalise poorly because the predictions are right for the wrong reason. This undesired learning of naive shortcuts (Clever Hans effect) can happen for example in echocardiogram view classification when background cues (e.g. metadata) are biased towards a class and the model learns to focus on those background features instead of on the image content. We propose a simple, yet effective random background augmentation method called BackMix, which samples random backgrounds from other examples in the training set. By enforcing the background to be uncorrelated with the outcome, the model learns to focus on the data within the ultrasound sector and becomes invariant to the regions outside this. We extend our method in a semi-supervised setting, finding that the positive effects of BackMix are maintained with as few as 5% of segmentation labels. A loss weighting mechanism, wBackMix, is also proposed to increase the contribution of the augmented examples. We validate our method on both in-distribution and out-of-distribution datasets, demonstrating significant improvements in classification accuracy, region focus and generalisability. Our source code is available at: https://github.com/kitbransby/BackMix
BackMix: Mitigating Shortcut Learning in Echocardiography with Minimal Supervision
[ "Bransby, Kit M.", "Beqiri, Arian", "Cho Kim, Woo-Jin", "Oliveira, Jorge", "Chartsias, Agisilaos", "Gomez, Alberto" ]
Conference
2406.19148
[ "https://github.com/kitbransby/BackMix" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
678
null
https://papers.miccai.org/miccai-2024/paper/3335_paper.pdf
@InProceedings{ Hu_Boosting_MICCAI2024, author = { Hu, Yihuang and Peng, Qiong and Du, Zhicheng and Zhang, Guojun and Wu, Huisi and Liu, Jingxin and Chen, Hao and Wang, Liansheng }, title = { { Boosting FFPE-to-HE Virtual Staining with Cell Semantics from Pretrained Segmentation Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Histopathological samples are typically processed by formalin fixation and paraffin embedding (FFPE) for long-term preservation. To visualize the blurry structures of cells and tissue in FFPE slides, hematoxylin and eosin (HE) staining is commonly utilized, a process that involves sophisticated laboratory facilities and complicated procedures. Recently, virtual staining realized by generative models has been widely utilized. The blurry cell structure in FFPE slides poses challenges to well-studied FFPE-to-HE virtual staining. However, most existing research overlooks this issue. In this paper, we propose a framework for boosting FFPE-to-HE virtual staining with cell semantics from pretrained cell segmentation models (PCSM), as a well-trained PCSM has learned effective representations of cell structure, which contain richer cell semantics than those from a generative model. Thus, we learn from PCSM by utilizing the high-level and low-level semantics of real and virtual images. Specifically, we propose to utilize PCSM to extract multiple-scale latent representations from real and virtual images and align them. Moreover, we introduce low-level cell location guidance for generative models, informed by PCSM. We conduct extensive experiments on our collected dataset. The results demonstrate a significant improvement of our method over the existing network qualitatively and quantitatively. Code is available at https://github.com/huyihuang/FFPE-to-HE.
Boosting FFPE-to-HE Virtual Staining with Cell Semantics from Pretrained Segmentation Model
[ "Hu, Yihuang", "Peng, Qiong", "Du, Zhicheng", "Zhang, Guojun", "Wu, Huisi", "Liu, Jingxin", "Chen, Hao", "Wang, Liansheng" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
679
null
https://papers.miccai.org/miccai-2024/paper/3885_paper.pdf
@InProceedings{ Cha_Baikal_MICCAI2024, author = { Chaudhary, Shivesh and Sankarapandian, Sivaramakrishnan and Sooknah, Matt and Pai, Joy and McCue, Caroline and Chen, Zhenghao and Xu, Jun }, title = { { Baikal: Unpaired Denoising of Fluorescence Microscopy Images using Diffusion Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Fluorescence microscopy is an indispensable tool for biological discovery, but image quality is constrained by the desired spatial and temporal resolution, sample sensitivity, and other factors. Computational denoising methods can bypass imaging constraints and improve the signal-to-noise ratio in images. However, current state-of-the-art methods are commonly trained in a supervised manner, requiring paired noisy and clean images, limiting their application across diverse datasets. An alternative class of denoising models can be trained in a self-supervised manner, assuming independent noise across samples, but is unable to generalize from available unpaired clean images. A method that can be trained without paired data and can use information from available unpaired high-quality images would address both weaknesses. Here, we present Baikal, a first attempt to formulate such a framework using Denoising Diffusion Probabilistic Models (DDPM) for fluorescence microscopy images. We first train a DDPM backbone in an unconditional manner to learn generative priors over complex morphologies in microscopy images; we can then apply various conditioning strategies to sample from the trained model, and we propose an optimal strategy to denoise the desired image. Extensive quantitative comparisons demonstrate better performance of Baikal over state-of-the-art self-supervised methods across multiple datasets. We highlight the advantage of generative priors learnt by DDPMs in denoising complex Flywing morphologies where other methods fail. Overall, our DDPM-based denoising framework presents a new class of denoising method for fluorescence microscopy datasets that achieves good performance without the collection of paired high-quality images.
Baikal: Unpaired Denoising of Fluorescence Microscopy Images using Diffusion Models
[ "Chaudhary, Shivesh", "Sankarapandian, Sivaramakrishnan", "Sooknah, Matt", "Pai, Joy", "McCue, Caroline", "Chen, Zhenghao", "Xu, Jun" ]
Conference
[ "https://github.com/scelesticsiva/denoising" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
680
null
https://papers.miccai.org/miccai-2024/paper/0519_paper.pdf
@InProceedings{ Xin_OntheFly_MICCAI2024, author = { Xin, Yuelin and Chen, Yicheng and Ji, Shengxiang and Han, Kun and Xie, Xiaohui }, title = { { On-the-Fly Guidance Training for Medical Image Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
This study introduces a novel On-the-Fly Guidance (OFG) training framework for enhancing existing learning-based image registration models, addressing the limitations of weakly-supervised and unsupervised methods. Weakly-supervised methods struggle due to the scarcity of labeled data, and unsupervised methods directly depend on image similarity metrics for accuracy. Our method trains registration models in a supervised fashion without the need for any labeled data. OFG generates pseudo-ground truth during training by refining deformation predictions with a differentiable optimizer, enabling direct supervised learning. OFG optimizes deformation predictions efficiently, improving the performance of registration models without sacrificing inference speed. Tested across several benchmark datasets and leading models, our method significantly enhances performance, providing a plug-and-play solution for training learning-based registration models. Code available at: https://github.com/cilix-ai/on-the-fly-guidance
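The pseudo-ground-truth step can be pictured as a few steps of instance-wise optimization applied to the network's own prediction. The sketch below is a simplified, hedged rendering of that idea: the `model(fixed, moving)` interface, the MSE similarity term, and the Adam settings are placeholders rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp `moving` (B, C, H, W) with a dense displacement `flow` (B, 2, H, W)
    expressed in normalised [-1, 1] grid units (x-displacement channel first)."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=moving.device),
        torch.linspace(-1, 1, W, device=moving.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    return F.grid_sample(moving, base + flow.permute(0, 2, 3, 1), align_corners=True)

def ofg_pseudo_label(model, fixed, moving, steps=5, lr=0.1):
    """Refine the network's predicted flow with a few steps of instance-wise,
    differentiable optimization and return the result as a detached pseudo label."""
    flow = model(fixed, moving).detach().requires_grad_(True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(warp(moving, flow), fixed).backward()   # image-similarity objective
        opt.step()
    return flow.detach()

# Illustrative training step: supervise the prediction with the refined field.
# pred = model(fixed, moving)
# loss = F.mse_loss(pred, ofg_pseudo_label(model, fixed, moving))
```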
On-the-Fly Guidance Training for Medical Image Registration
[ "Xin, Yuelin", "Chen, Yicheng", "Ji, Shengxiang", "Han, Kun", "Xie, Xiaohui" ]
Conference
2308.15216
[ "https://github.com/cilix-ai/on-the-fly-guidance" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
681
null
https://papers.miccai.org/miccai-2024/paper/0334_paper.pdf
@InProceedings{ Qu_Multimodal_MICCAI2024, author = { Qu, Linhao and Huang, Dan and Zhang, Shaoting and Wang, Xiaosong }, title = { { Multi-modal Data Binding for Survival Analysis Modeling with Incomplete Data and Annotations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Survival analysis stands as a pivotal process in cancer treatment research, crucial for predicting patient survival rates accurately. Recent advancements in data collection techniques have paved the way for enhancing survival predictions by integrating information from multiple modalities. However, real-world scenarios often present challenges with incomplete data, particularly when dealing with censored survival labels. Prior works have addressed missing modalities but have overlooked incomplete labels, which can introduce bias and limit model efficacy. To bridge this gap, we introduce a novel framework that simultaneously handles incomplete data across modalities and censored survival labels. Our approach employs advanced foundation models to encode individual modalities and align them into a universal representation space for seamless fusion. By generating pseudo labels and incorporating uncertainty, we significantly enhance predictive accuracy. The proposed method demonstrates outstanding prediction accuracy in two survival analysis tasks on both employed datasets. This innovative approach overcomes limitations associated with disparate modalities and improves the feasibility of comprehensive survival analysis using multiple large foundation models.
Multi-modal Data Binding for Survival Analysis Modeling with Incomplete Data and Annotations
[ "Qu, Linhao", "Huang, Dan", "Zhang, Shaoting", "Wang, Xiaosong" ]
Conference
2407.17726
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
682
null
https://papers.miccai.org/miccai-2024/paper/4008_paper.pdf
@InProceedings{ Bun_Learning_MICCAI2024, author = { Bunnell, Arianna and Glaser, Yannik and Valdez, Dustin and Wolfgruber, Thomas and Altamirano, Aleen and Zamora González, Carol and Hernandez, Brenda Y. and Sadowski, Peter and Shepherd, John A. }, title = { { Learning a Clinically-Relevant Concept Bottleneck for Lesion Detection in Breast Ultrasound } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Detecting and classifying lesions in breast ultrasound images is a promising application of artificial intelligence (AI) for reducing the burden of cancer in regions with limited access to mammography. Such AI systems are more likely to be useful in a clinical setting if their predictions can be explained. This work proposes an explainable AI model that provides interpretable predictions using a standard lexicon from the American College of Radiology’s Breast Imaging and Reporting Data System (BI-RADS). The model is a deep neural network that predicts BI-RADS features in a concept bottleneck layer for cancer classification. This architecture enables radiologists to interpret the predictions of the AI system from the concepts and potentially fix errors in real time by modifying the concept predictions. In experiments, a model is developed on 8,854 images from 994 women with expert annotations and histological cancer labels. The model outperforms state-of-the-art lesion detection frameworks with 48.9 average precision on the held-out testing set. For cancer classification, concept intervention increases performance from 0.876 to 0.885 area under the receiver operating characteristic curve. Training and evaluation code is available at https://github.com/hawaii-ai/bus-cbm.
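A concept bottleneck of this kind can be pictured as a small head on top of an image encoder: concepts are predicted first, and the cancer prediction depends on the concepts only, so a radiologist can override them. The sketch below is a toy rendering; the layer sizes, the number of concepts, and the sigmoid parameterisation are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Toy concept-bottleneck head: backbone features -> concept scores -> cancer logit."""
    def __init__(self, feat_dim=512, n_concepts=8):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, n_concepts)
        self.classifier = nn.Linear(n_concepts, 1)

    def forward(self, feats, concept_override=None):
        concepts = torch.sigmoid(self.to_concepts(feats))
        if concept_override is not None:
            # Radiologist intervention: replace predicted concepts with edited ones.
            concepts = concept_override
        return self.classifier(concepts), concepts
```

Because the final classifier sees nothing but the concept vector, editing a concept at inference time directly changes the downstream prediction, which is what the reported intervention experiment measures.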
Learning a Clinically-Relevant Concept Bottleneck for Lesion Detection in Breast Ultrasound
[ "Bunnell, Arianna", "Glaser, Yannik", "Valdez, Dustin", "Wolfgruber, Thomas", "Altamirano, Aleen", "Zamora González, Carol", "Hernandez, Brenda Y.", "Sadowski, Peter", "Shepherd, John A." ]
Conference
2407.00267
[ "https://github.com/hawaii-ai/bus-cbm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
683
null
https://papers.miccai.org/miccai-2024/paper/3893_paper.pdf
@InProceedings{ Yi_Hallucinated_MICCAI2024, author = { Yi, Jingjun and Bi, Qi and Zheng, Hao and Zhan, Haolan and Ji, Wei and Huang, Yawen and Li, Shaoxin and Li, Yuexiang and Zheng, Yefeng and Huang, Feiyue }, title = { { Hallucinated Style Distillation for Single Domain Generalization in Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Single domain generalization (single-DG) for medical image segmentation aims to learn a style-invariant representation that can be generalized to a variety of unseen target domains using data from a single source. However, due to the limited sample diversity of the single source domain, the robustness of the generalized features yielded by existing single-DG methods is still unsatisfactory. In this paper, we propose a novel single-DG framework, namely Hallucinated Style Distillation (HSD), to generate a robust style-invariant feature representation. In particular, HSD first expands the style diversity of the single source domain by hallucinating samples with random styles. Then, a hallucinated cross-domain distillation paradigm is proposed to distill the style-invariant knowledge between the original and style-hallucinated medical images. Since hallucinated styles close to the source domain may cause our distillation paradigm to over-fit, we further propose a learning objective to diversify the style-invariant representation, which alleviates the over-fitting issue and smooths the learning process of the generalized features. Extensive experiments on two standard domain-generalized medical image segmentation datasets show the state-of-the-art performance of our HSD. Source code will be publicly available.
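Style hallucination is often implemented by perturbing feature statistics. The sketch below shows one generic way to do this (AdaIN-style replacement of per-channel mean and standard deviation with random values); it is a stand-in for the paper's hallucination step, not its exact formulation.

```python
import torch

def hallucinate_style(feat, eps=1e-6):
    """Replace the per-channel statistics of a feature map with randomly sampled ones,
    producing a style-hallucinated view of the same content.

    feat: (B, C, H, W) intermediate feature map
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (feat - mu) / sigma
    new_mu = torch.randn_like(mu)             # random target style statistics
    new_sigma = torch.rand_like(sigma) + 0.5
    return normalized * new_sigma + new_mu
```

A distillation loss would then encourage the segmentation predictions (or features) of the original and the hallucinated views to agree, which is what makes the learned representation style-invariant.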
Hallucinated Style Distillation for Single Domain Generalization in Medical Image Segmentation
[ "Yi, Jingjun", "Bi, Qi", "Zheng, Hao", "Zhan, Haolan", "Ji, Wei", "Huang, Yawen", "Li, Shaoxin", "Li, Yuexiang", "Zheng, Yefeng", "Huang, Feiyue" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
684
null
https://papers.miccai.org/miccai-2024/paper/0616_paper.pdf
@InProceedings{ Cai_Rethinking_MICCAI2024, author = { Cai, Yu and Chen, Hao and Cheng, Kwang-Ting }, title = { { Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Medical anomaly detection aims to identify abnormal findings using only normal training data, playing a crucial role in health screening and recognizing rare diseases. Reconstruction-based methods, particularly those utilizing autoencoders (AEs), are dominant in this field. They work under the assumption that AEs trained on only normal data cannot reconstruct unseen abnormal regions well, thereby enabling the anomaly detection based on reconstruction errors. However, this assumption does not always hold due to the mismatch between the reconstruction training objective and the anomaly detection task objective, rendering these methods theoretically unsound. This study focuses on providing a theoretical foundation for AE-based reconstruction methods in anomaly detection. By leveraging information theory, we elucidate the principles of these methods and reveal that the key to improving AE in anomaly detection lies in minimizing the information entropy of latent vectors. Experiments on four datasets with two image modalities validate the effectiveness of our theory. To the best of our knowledge, this is the first effort to theoretically clarify the principles and design philosophy of AE for anomaly detection. The code is available at \url{https://github.com/caiyu6666/AE4AD}.
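For concreteness, the reconstruction-error scoring rule that this line of work builds on can be written as follows; `autoencoder` is assumed to map an image batch to reconstructions of the same shape, and mean squared error is one common, but not the only, choice of residual.

```python
import torch

def anomaly_score(autoencoder, images):
    """Score images by how poorly the (normal-data-trained) autoencoder reconstructs them.
    Returns a scalar score per image and a pixel-wise anomaly map."""
    with torch.no_grad():
        recon = autoencoder(images)              # (B, C, H, W) reconstruction
    per_pixel = (images - recon).pow(2)          # pixel-wise anomaly map
    return per_pixel.flatten(1).mean(dim=1), per_pixel
```

The paper's contribution is the analysis of when this rule is sound, arguing that constraining the information entropy of the latent code is what makes the reconstruction gap discriminative for anomalies.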
Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective
[ "Cai, Yu", "Chen, Hao", "Cheng, Kwang-Ting" ]
Conference
2403.09303
[ "https://github.com/caiyu6666/AE4AD" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
685
null
https://papers.miccai.org/miccai-2024/paper/2095_paper.pdf
@InProceedings{ Dan_CINA_MICCAI2024, author = { Dannecker, Maik and Kyriakopoulou, Vanessa and Cordero-Grande, Lucilio and Price, Anthony N. and Hajnal, Joseph V. and Rueckert, Daniel }, title = { { CINA: Conditional Implicit Neural Atlas for Spatio-Temporal Representation of Fetal Brains } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
We introduce a conditional implicit neural atlas (CINA) for spatio-temporal atlas generation from Magnetic Resonance Images (MRI) of the neurotypical and pathological fetal brain that is fully independent of affine or non-rigid registration. During training, CINA learns a general representation of the fetal brain and encodes subject-specific information into a latent code. After training, CINA can construct a faithful atlas with tissue probability maps of the fetal brain for any gestational age (GA) and anatomical variation covered within the training domain. Thus, CINA can represent both neurotypical and pathological brains. Furthermore, a trained CINA model can be fit to brain MRI of unseen subjects via test-time optimization of the latent code. CINA can then produce probabilistic tissue maps tailored to a particular subject. We evaluate our method on a total of 198 T2-weighted MRI of normal and abnormal fetal brains from the dHCP and FeTA datasets. We demonstrate CINA’s capability to represent a fetal brain atlas that can be flexibly conditioned on GA and on anatomical variations like ventricular volume or degree of cortical folding, making it a suitable tool for modeling both neurotypical and pathological brains. We quantify the fidelity of our atlas by means of tissue segmentation and age prediction and compare it to an established baseline. CINA demonstrates superior accuracy for neurotypical brains and pathological brains with ventriculomegaly. Moreover, CINA scores a mean absolute error of 0.23 weeks in fetal brain age prediction, further confirming an accurate representation of fetal brain development.
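Conceptually, the atlas is a coordinate network conditioned on gestational age and a per-subject latent code. The sketch below is a toy version of such a conditional implicit network; the layer sizes, number of tissue classes, and plain concatenation-based conditioning are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalAtlasMLP(nn.Module):
    """Toy conditional implicit atlas: maps a 3D coordinate, a gestational-age value
    and a per-subject latent code to tissue-class logits."""
    def __init__(self, latent_dim=64, n_tissues=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_tissues),
        )

    def forward(self, xyz, ga, z):
        # xyz: (N, 3) coordinates, ga: (N, 1) gestational age, z: (N, latent_dim) latent code
        return self.net(torch.cat([xyz, ga, z], dim=-1))   # tissue logits per coordinate
```

At test time the network weights would stay frozen while the latent code z is optimized against a new subject's scan, which is how subject-specific tissue maps are obtained without any registration step.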
CINA: Conditional Implicit Neural Atlas for Spatio-Temporal Representation of Fetal Brains
[ "Dannecker, Maik", "Kyriakopoulou, Vanessa", "Cordero-Grande, Lucilio", "Price, Anthony N.", "Hajnal, Joseph V.", "Rueckert, Daniel" ]
Conference
2403.08550
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
686
null
https://papers.miccai.org/miccai-2024/paper/3075_paper.pdf
@InProceedings{ Bar_Average_MICCAI2024, author = { Barfoot, Theodore and Garcia Peraza Herrera, Luis C. and Glocker, Ben and Vercauteren, Tom }, title = { { Average Calibration Error: A Differentiable Loss for Improved Reliability in Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Deep neural networks for medical image segmentation often produce overconfident results misaligned with empirical observations. Such miscalibration challenges their clinical translation. We propose to use marginal L1 average calibration error (mL1-ACE) as a novel auxiliary loss function to improve pixel-wise calibration without compromising segmentation quality. We show that this loss, despite using hard binning, is directly differentiable, bypassing the need for approximate but differentiable surrogate or soft binning approaches. Our work also introduces the concept of dataset reliability histograms, which generalise standard reliability diagrams for refined visual assessment of calibration in semantic segmentation aggregated at the dataset level. Using mL1-ACE, we reduce average and maximum calibration error by 45% and 55%, respectively, while maintaining a Dice score of 87% on the BraTS 2021 dataset. We share our code here: https://github.com/cai4cai/ACE-DLIRIS.
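A single-class version of the hard-binned L1 average calibration error can be written directly; the marginal (mL1) variant then averages this quantity over classes. The sketch below is illustrative, assuming flattened per-pixel foreground probabilities and binary targets; the bin count and edge handling are placeholders.

```python
import torch

def l1_average_calibration_error(probs, targets, n_bins=10):
    """Hard-binned L1 average calibration error for one foreground class.
    probs:   (N,) predicted foreground probabilities for N pixels
    targets: (N,) binary ground truth for the same pixels"""
    edges = torch.linspace(0.0, 1.0, n_bins + 1, device=probs.device)
    ace, used = probs.new_zeros(()), 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            conf = probs[in_bin].mean()            # mean predicted probability in the bin
            acc = targets[in_bin].float().mean()   # observed foreground frequency
            ace = ace + (conf - acc).abs()
            used += 1
    return ace / max(used, 1)
```

Bin membership itself is not differentiable, but the per-bin mean confidence is a differentiable function of the predicted probabilities, which is why a hard-binned calibration term can still serve as an auxiliary training loss.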
Average Calibration Error: A Differentiable Loss for Improved Reliability in Image Segmentation
[ "Barfoot, Theodore", "Garcia Peraza Herrera, Luis C.", "Glocker, Ben", "Vercauteren, Tom" ]
Conference
2403.06759
[ "https://github.com/cai4cai/ACE-DLIRIS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
687
null
https://papers.miccai.org/miccai-2024/paper/0306_paper.pdf
@InProceedings{ Kim_Quantitative_MICCAI2024, author = { Kim, Young-Min and Kim, Myeong-Gee and Oh, Seok-Hwan and Jung, Guil and Lee, Hyeon-Jik and Kim, Sang-Yun and Kwon, Hyuk-Sool and Choi, Sang-Il and Bae, Hyeon-Min }, title = { { Quantitative Assessment of Thyroid Nodules through Ultrasound Imaging Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Recent studies have proposed quantitative ultrasound (QUS) to extract the acoustic properties of tissues from pulse-echo data obtained through multiple transmissions. In this paper, we introduce a learning-based approach to identify thyroid nodule malignancy by extracting acoustic attenuation and speed of sound from ultrasound imaging. The proposed method employs a neural model that integrates a convolutional neural network (CNN) for detailed local pulse-echo pattern analysis with a Transformer architecture, enhancing the model’s ability to capture complex correlations among multiple beam receptions. B-mode images are employed as both an input and label to guarantee robust performance regardless of the complex structures present in the human neck, such as the thyroid, blood vessels, and trachea. In order to train the proposed deep neural model, a simulation phantom mimicking the structure of human muscle, fat layers, and the shape of the thyroid gland has been designed. The effectiveness of the proposed method is evaluated through numerical simulations and clinical tests.
Quantitative Assessment of Thyroid Nodules through Ultrasound Imaging Analysis
[ "Kim, Young-Min", "Kim, Myeong-Gee", "Oh, Seok-Hwan", "Jung, Guil", "Lee, Hyeon-Jik", "Kim, Sang-Yun", "Kwon, Hyuk-Sool", "Choi, Sang-Il", "Bae, Hyeon-Min" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
688
null
https://papers.miccai.org/miccai-2024/paper/0233_paper.pdf
@InProceedings{ Elb_An_MICCAI2024, author = { Elbatel, Marawan and Kamnitsas, Konstantinos and Li, Xiaomeng }, title = { { An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Generative modeling seeks to approximate the statistical properties of real data, enabling synthesis of new data that closely resembles the original distribution. Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic Models (DDPMs) represent significant advancements in generative modeling, drawing inspiration from game theory and thermodynamics, respectively. Nevertheless, the exploration of generative modeling through the lens of biological evolution remains largely untapped. In this paper, we introduce a novel family of models termed Generative Cellular Automata (GeCA), inspired by the evolution of an organism from a single cell. GeCAs are evaluated as an effective augmentation tool for retinal disease classification across two imaging modalities: Fundus and Optical Coherence Tomography (OCT). In the context of OCT imaging, where data is scarce and the distribution of classes is inherently skewed, GeCA significantly boosts classification performance across 11 different ophthalmological conditions, achieving a 12% increase in the average F1 score compared to conventional baselines. GeCAs outperform both diffusion methods that incorporate a UNet and state-of-the-art variants with transformer-based denoising models, under similar parameter constraints. Code is available at: https://github.com/xmed-lab/GeCA.
An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis
[ "Elbatel, Marawan", "Kamnitsas, Konstantinos", "Li, Xiaomeng" ]
Conference
2407.03018
[ "https://github.com/xmed-lab/GeCA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
689
null
https://papers.miccai.org/miccai-2024/paper/1433_paper.pdf
@InProceedings{ Sie_PULPo_MICCAI2024, author = { Siegert, Leonard and Fischer, Paul and Heinrich, Mattias P. and Baumgartner, Christian F. }, title = { { PULPo: Probabilistic Unsupervised Laplacian Pyramid Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Deformable image registration is fundamental to many medical imaging applications. Registration is an inherently ambiguous task often admitting many viable solutions. While neural network-based registration techniques enable fast and accurate registration, the majority of existing approaches are not able to estimate uncertainty. Here, we present PULPo, a method for probabilistic deformable registration capable of uncertainty quantification. PULPo probabilistically models the distribution of deformation fields on different hierarchical levels combining them using Laplacian pyramids. This allows our method to model global as well as local aspects of the deformation field. We evaluate our method on two widely used neuroimaging datasets and find that it achieves high registration performance as well as substantially better calibrated uncertainty quantification compared to the current state-of-the-art.
PULPo: Probabilistic Unsupervised Laplacian Pyramid Registration
[ "Siegert, Leonard", "Fischer, Paul", "Heinrich, Mattias P.", "Baumgartner, Christian F." ]
Conference
2407.10567
[ "https://github.com/leonardsiegert/PULPo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
690
null
https://papers.miccai.org/miccai-2024/paper/1176_paper.pdf
@InProceedings{ Sun_FedMLP_MICCAI2024, author = { Sun, Zhaobin and Wu, Nannan and Shi, Junjie and Yu, Li and Cheng, Kwang-Ting and Yan, Zengqiang }, title = { { FedMLP: Federated Multi-Label Medical Image Classification under Task Heterogeneity } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Cross-silo federated learning (FL) enables decentralized organizations to collaboratively train models while preserving data privacy and has made significant progress in medical image classification. One common assumption is task homogeneity, where each client has access to all classes during training. However, in clinical practice, given a multi-label classification task, constrained by the level of medical knowledge and the prevalence of diseases, each institution may diagnose only a subset of categories, resulting in task heterogeneity. How to pursue effective multi-label medical image classification under task heterogeneity is under-explored. In this paper, we first formulate such a realistic label-missing setting in the multi-label FL domain and propose a two-stage method, FedMLP, to combat missing classes from two aspects: pseudo label tagging and global knowledge learning. The former utilizes a warmed-up model to generate class prototypes and selects samples with high confidence to supplement missing labels, while the latter uses a global model as a teacher for consistency regularization to prevent forgetting missing class knowledge. Experiments on two publicly available medical datasets validate the superiority of FedMLP over state-of-the-art federated semi-supervised and noisy-label learning approaches under task heterogeneity. Code is available at https://github.com/szbonaldo/FedMLP.
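The pseudo-label tagging stage can be sketched as prototype matching: labels for classes a client never annotates are filled in when a sample's feature is sufficiently close to the class prototype from the warmed-up model. The code below is a hedged illustration; the cosine-similarity rule and the threshold `tau` are stand-ins for the paper's actual confidence-based selection.

```python
import torch
import torch.nn.functional as F

def tag_missing_labels(feats, partial_labels, missing_mask, prototypes, tau=0.9):
    """Fill in labels for classes a client never annotates, using class prototypes
    from a warmed-up model.

    feats:          (N, D) sample features
    partial_labels: (N, K) observed multi-hot labels (contents ignored where missing)
    missing_mask:   (N, K) 1 where the class label is missing at this client
    prototypes:     (K, D) class prototypes
    """
    sims = F.cosine_similarity(feats.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)  # (N, K)
    confident_pos = (sims > tau).float()
    # Keep observed labels; use confident prototype matches only where labels are missing.
    return partial_labels * (1 - missing_mask) + confident_pos * missing_mask
```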
FedMLP: Federated Multi-Label Medical Image Classification under Task Heterogeneity
[ "Sun, Zhaobin", "Wu, Nannan", "Shi, Junjie", "Yu, Li", "Cheng, Kwang-Ting", "Yan, Zengqiang" ]
Conference
2406.18995
[ "https://github.com/szbonaldo/FedMLP" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
691
null
https://papers.miccai.org/miccai-2024/paper/3070_paper.pdf
@InProceedings{ Che_Ultrasound_MICCAI2024, author = { Chen, Tingxiu and Shi, Yilei and Zheng, Zixuan and Yan, Bingcong and Hu, Jingliang and Zhu, Xiao Xiang and Mou, Lichao }, title = { { Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Ultrasound video classification enables automated diagnosis and has emerged as an important research area. However, publicly available ultrasound video datasets remain scarce, hindering progress in developing effective video classification models. We propose addressing this shortage by synthesizing plausible ultrasound videos from readily available, abundant ultrasound images. To this end, we introduce a latent dynamic diffusion model (LDDM) to efficiently translate static images to dynamic sequences with realistic video characteristics. We demonstrate strong quantitative results and visually appealing synthesized videos on the BUSV benchmark. Notably, training video classification models on combinations of real and LDDM-synthesized videos substantially improves performance over using real data alone, indicating our method successfully emulates dynamics critical for discrimination. Our image-to-video approach provides an effective data augmentation solution to advance ultrasound video analysis. Code is available at https://github.com/MedAITech/U_I2V.
Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models
[ "Chen, Tingxiu", "Shi, Yilei", "Zheng, Zixuan", "Yan, Bingcong", "Hu, Jingliang", "Zhu, Xiao Xiang", "Mou, Lichao" ]
Conference
[ "https://github.com/MedAITech/U_I2V" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
692
null
https://papers.miccai.org/miccai-2024/paper/1676_paper.pdf
@InProceedings{ Liu_Sparsity_MICCAI2024, author = { Liu, Mingyuan and Xu, Lu and Liu, Shengnan and Zhang, Jicong }, title = { { Sparsity- and Hybridity-Inspired Visual Parameter-Efficient Fine-Tuning for Medical Diagnosis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
The success of Large Vision Models (LVMs) is accompanied by vast data volumes, which are prohibitively expensive in medical diagnosis. To address this, recent efforts exploit Parameter-Efficient Fine-Tuning (PEFT), which trains a small number of weights while freezing the rest for knowledge transfer. However, these methods typically assign trainable weights to the same positions in LVMs in a heuristic manner, regardless of task differences, making them suboptimal for professional applications like medical diagnosis. To address this, we statistically reveal the nature of sparsity and hybridity during diagnostic-targeted fine-tuning, i.e., a small portion of key weights significantly impacts performance, and these key weights are hybrid, including both task-specific and task-agnostic parts. Based on this, we propose a novel Sparsity- and Hybridity-inspired Parameter-Efficient Fine-Tuning (SH-PEFT). It selects and trains a small portion of weights based on their importance, which is estimated by hybridizing both task-specific and task-agnostic strategies. Validated on six medical datasets of different modalities, SH-PEFT achieves state-of-the-art accuracy in transferring LVMs to medical diagnosis. By tuning only around 0.01% of the weights, it outperforms full-model fine-tuning. Moreover, SH-PEFT performs comparably to other models deliberately optimized for specific medical tasks. Extensive experiments demonstrate the effectiveness of each design and reveal the great potential of pre-trained LVM transfer for medical diagnosis.
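The selection step amounts to scoring individual weights, keeping a tiny top fraction trainable, and freezing the rest. The sketch below masks gradients of non-selected weights; the importance scores are assumed to be precomputed (e.g. from |gradient x weight| accumulated over a few batches) and to live on the same devices as the parameters, which is a generic stand-in for the paper's hybrid task-specific/task-agnostic estimate.

```python
import torch

def select_trainable_weights(model, importance, keep_ratio=1e-4):
    """Keep only the highest-scoring weights trainable and freeze the rest by
    zeroing their gradients. `importance` maps parameter name -> per-weight scores
    with the same shape as the parameter."""
    scores = torch.cat([s.flatten() for s in importance.values()])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    for name, param in model.named_parameters():
        mask = (importance[name] >= threshold).to(param.dtype)
        param.register_hook(lambda g, m=mask: g * m)   # zero gradients of frozen weights
    return model
```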
Sparsity- and Hybridity-Inspired Visual Parameter-Efficient Fine-Tuning for Medical Diagnosis
[ "Liu, Mingyuan", "Xu, Lu", "Liu, Shengnan", "Zhang, Jicong" ]
Conference
2405.17877
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
693
null
https://papers.miccai.org/miccai-2024/paper/3648_paper.pdf
@InProceedings{ Cui_7T_MICCAI2024, author = { Cui, Qiming and Tosun, Duygu and Mukherjee, Pratik and Abbasi-Asl, Reza }, title = { { 7T MRI Synthesization from 3T Acquisitions } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Supervised deep learning techniques can be used to generate synthetic 7T MRIs from 3T MRI inputs. This image enhancement process leverages the advantages of ultra-high-field MRI to improve the signal-to-noise and contrast-to-noise ratios of 3T acquisitions. In this paper, we introduce multiple novel 7T synthesization algorithms based on custom-designed variants of the V-Net convolutional neural network. We demonstrate that the V-Net based model has superior performance in enhancing both single-site and multi-site MRI datasets compared to the existing benchmark model. When trained on 3T-7T MRI pairs from 8 subjects with mild Traumatic Brain Injury (TBI), our model achieves state-of-the-art 7T synthesization performance. Compared to previous works, synthetic 7T images generated from our pipeline also display superior enhancement of pathological tissue. Additionally, we implement and test a data augmentation scheme for training models that are robust to variations in the input distribution. This allows synthetic 7T models to accommodate intra-scanner and inter-scanner variability in multisite datasets. On a harmonized dataset consisting of 18 3T-7T MRI pairs from two institutions, including both healthy subjects and those with mild TBI, our model maintains its performance and can generalize to 3T MRI inputs with lower resolution. Our findings demonstrate the promise of V-Net based models for MRI enhancement and offer a preliminary probe into improving the generalizability of synthetic 7T models with data augmentation.
7T MRI Synthesization from 3T Acquisitions
[ "Cui, Qiming", "Tosun, Duygu", "Mukherjee, Pratik", "Abbasi-Asl, Reza" ]
Conference
2403.08979
[ "https://github.com/abbasilab/Synthetic_7T_MRI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
694
null
https://papers.miccai.org/miccai-2024/paper/2862_paper.pdf
@InProceedings{ Zha_Towards_MICCAI2024, author = { Zhao, Yisheng and Zhu, Huaiyu and Shu, Qi and Huan, Ruohong and Chen, Shuohui and Pan, Yun }, title = { { Towards a Deeper insight into Face Detection in Neonatal wards } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Neonatal face detection is the prerequisite for face-based intelligent medical applications. Nevertheless, this area has received minimal attention in existing research. Current studies are significantly constrained by the paucity of open-source, large-scale datasets and are further complicated by issues such as large-scale occlusions, class imbalance, and precise localization requirements. This work aims to address these challenges from both data and methodological perspectives. We constructed the first open-source face detection dataset for neonates, comprising images from 1,000 neonates in neonatal wards. Utilizing this dataset and adopting NICUface-RF as the baseline, we introduce two novel modules. The hierarchical contextual classification aims to improve the positive/negative anchor ratios and alleviate large-scale occlusions. Concurrently, the DIoU-aware NMS is designed to preserve bounding boxes of superior localization quality by employing predicted DIoUs as the ranking criterion in NMS procedures. Experimental results illustrate the superiority of our method. The dataset and code are available at https://github.com/neonatal-pain.
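The DIoU-aware NMS module can be approximated with standard tooling by ranking candidates with the network's predicted DIoU instead of the classification score. The sketch below leans on torchvision's NMS kernel; combining the kept boxes' classification scores with their predicted DIoUs at the end is an illustrative choice, not necessarily the paper's final scoring.

```python
import torch
from torchvision.ops import nms

def diou_aware_nms(boxes, cls_scores, pred_dious, iou_thr=0.4):
    """Greedy NMS that ranks candidate boxes by their predicted DIoU (a learned
    localization-quality score) so that better-localised boxes survive when
    duplicates overlap.

    boxes: (N, 4) in xyxy format, cls_scores: (N,), pred_dious: (N,) in [0, 1]
    """
    keep = nms(boxes, pred_dious, iou_thr)          # ranking criterion = predicted DIoU
    return boxes[keep], cls_scores[keep] * pred_dious[keep]
```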
Towards a Deeper insight into Face Detection in Neonatal wards
[ "Zhao, Yisheng", "Zhu, Huaiyu", "Shu, Qi", "Huan, Ruohong", "Chen, Shuohui", "Pan, Yun" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
695
null
https://papers.miccai.org/miccai-2024/paper/1467_paper.pdf
@InProceedings{ Li_DiffusionEnhanced_MICCAI2024, author = { Li, Xiang and Fang, Huihui and Liu, Mingsi and Xu, Yanwu and Duan, Lixin }, title = { { Diffusion-Enhanced Transformation Consistency Learning for Retinal Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Retinal image segmentation plays a critical role in rapid and early disease detection, for example by assisting in the observation of abnormal structures and in structural quantification. However, acquiring semantic segmentation labels is both expensive and time-consuming. To improve label utilization efficiency in semantic segmentation models, we propose Diffusion-Enhanced Transformation Consistency Learning (DiffTCL), a semi-supervised segmentation approach. Initially, the model undergoes self-supervised diffusion pre-training, establishing a reasonable initial model that improves the accuracy of early pseudo-labels in the subsequent consistency training, thereby preventing error accumulation. Furthermore, we develop a Transformation Consistency Learning (TCL) method for retinal images, effectively utilizing unlabeled data. In TCL, the prediction for affine-transformed images acts as supervision for both elastic and pixel-level transformations. We carry out evaluations on the REFUGE2 and MS datasets, covering two tasks and modalities: optic disc/cup segmentation in color fundus photography and layer segmentation in optical coherence tomography. The results for both tasks demonstrate that DiffTCL achieves relative improvements of 5.0% and 2.3%, respectively, over other state-of-the-art semi-supervised methods. The code is available at: https://github.com/lixiang007666/DiffTCL.
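One simplified reading of the TCL term is a teacher-student consistency between differently transformed views of the same unlabeled image. The sketch below follows that reading; the transform callables and the MSE consistency measure are assumptions, and `elastic_tf` is assumed to reuse the same sampled deformation in both calls so the two branches stay spatially aligned.

```python
import torch
import torch.nn.functional as F

def tcl_consistency(model, image, affine_tf, elastic_tf, pixel_tf):
    """One unlabeled-image consistency term in the spirit of TCL. All transform
    callables act on (B, C, H, W) tensors; pixel_tf changes intensities only."""
    affine_view = affine_tf(image)
    with torch.no_grad():
        # Teacher: prediction on the affine view, warped into the student's geometry.
        teacher = elastic_tf(model(affine_view).softmax(dim=1))
    # Student: prediction on the additionally pixel- and elastic-transformed view.
    student = model(elastic_tf(pixel_tf(affine_view))).softmax(dim=1)
    return F.mse_loss(student, teacher)
```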
Diffusion-Enhanced Transformation Consistency Learning for Retinal Image Segmentation
[ "Li, Xiang", "Fang, Huihui", "Liu, Mingsi", "Xu, Yanwu", "Duan, Lixin" ]
Conference
[ "https://github.com/lixiang007666/DiffTCL" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
696
null
https://papers.miccai.org/miccai-2024/paper/3555_paper.pdf
@InProceedings{ Men_PoseGuideNet_MICCAI2024, author = { Men, Qianhui and Guo, Xiaoqing and Papageorghiou, Aris T. and Noble, J. Alison }, title = { { Pose-GuideNet: Automatic Scanning Guidance for Fetal Head Ultrasound from Pose Estimation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
3D pose estimation from a 2D cross-sectional view enables healthcare professionals to navigate through the 3D space, and such techniques initiate automatic guidance in many image-guided radiology applications. In this work, we investigate how estimating 3D fetal pose from freehand 2D ultrasound scanning can guide a sonographer to locate a head standard plane. Fetal head pose is estimated by the proposed Pose-GuideNet, a novel 2D/3D registration approach to align freehand 2D ultrasound to a 3D anatomical atlas without the acquisition of 3D ultrasound. To facilitate the 2D to 3D cross-dimensional projection, we exploit the prior knowledge in the atlas to align the standard plane frame in a freehand scan. A semantic-aware contrastive-based approach is further proposed to align the frames that are off standard planes based on their anatomical similarity. In the experiment, we enhance the existing assessment of freehand image localization by comparing the transformation of its estimated pose towards standard plane with the corresponding probe motion, which reflects the actual view change in 3D anatomy. Extensive results on two clinical head biometry tasks show that Pose-GuideNet not only accurately predicts pose but also successfully predicts the direction of the fetal head. Evaluations with probe motions further demonstrate the feasibility of adopting Pose-GuideNet for freehand ultrasound-assisted navigation in a sensor-free environment.
Pose-GuideNet: Automatic Scanning Guidance for Fetal Head Ultrasound from Pose Estimation
[ "Men, Qianhui", "Guo, Xiaoqing", "Papageorghiou, Aris T.", "Noble, J. Alison" ]
Conference
2408.09931
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
697
null
https://papers.miccai.org/miccai-2024/paper/2315_paper.pdf
@InProceedings{ Jia_Explanationdriven_MICCAI2024, author = { Jiang, Ning and Huang, Zhengyong and Sui, Yao }, title = { { Explanation-driven Cyclic Learning for High-Quality Brain MRI Reconstruction from Unknown Degradation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Spatial resolution, signal-to-noise ratio (SNR), and motion artifacts are critical in any Magnetic Resonance Imaging (MRI) practice. Unfortunately, it is difficult to achieve a trade-off between these factors. Scans with an increased spatial resolution require prolonged scan times and suffer from drastically reduced SNR. Increased scan time in turn increases the potential for subject motion. Recently, end-to-end deep learning techniques have emerged as a post-acquisition method to deal with the above issues by reconstructing high-quality MRI images from various sources of degradation such as motion, noise, and low resolution. However, those methods focus on a single known source of degradation, while multiple unknown sources of degradation commonly occur in a single scan. We aimed to develop a new methodology that enables high-quality MRI reconstruction from scans corrupted by a mixture of multiple unknown sources of degradation. We proposed a unified reconstruction framework based on explanation-driven cyclic learning. We designed an interpretation strategy for the neural networks, the Cross-Attention-Gradient (CAG), which generates pixel-level explanations from degraded images to enhance reconstruction with degradation-specific knowledge. We developed a cyclic learning scheme that comprises a front-end classification task and a back-end image reconstruction task, circularly sharing knowledge between the two tasks and benefiting from multi-task learning. We assessed our method on three public datasets, including real, clean MRI scans from 140 subjects with simulated degradation and real, motion-degraded MRI scans from 10 subjects. We identified 5 sources of degradation for the simulated data. Experimental results demonstrated that our approach achieved superior reconstructions in motion correction, SNR improvement, and resolution enhancement, as compared to state-of-the-art methods.
Explanation-driven Cyclic Learning for High-Quality Brain MRI Reconstruction from Unknown Degradation
[ "Jiang, Ning", "Huang, Zhengyong", "Sui, Yao" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
698
null
https://papers.miccai.org/miccai-2024/paper/3796_paper.pdf
@InProceedings{ Liu_kt_MICCAI2024, author = { Liu, Ye and Cui, Zhuo-Xu and Sun, Kaicong and Zhao, Ting and Cheng, Jing and Zhu, Yuliang and Shen, Dinggang and Liang, Dong }, title = { { k-t Self-Consistency Diffusion: A Physics-Informed Model for Dynamic MR Imaging } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Diffusion models exhibit promising prospects in magnetic resonance (MR) image reconstruction due to their robust image generation and generalization capabilities. However, current diffusion models are predominantly customized for 2D image reconstruction tasks. When addressing dynamic MR imaging (dMRI), the challenge lies in accurately generating 2D images while simultaneously adhering to the temporal direction and matching the motion patterns of the scanned regions. In dynamic parallel imaging, motion patterns can be characterized through the self-consistency of k-t data. Motivated by this observation, we propose to design a diffusion model that aligns with k-t self-consistency. Specifically, following a discrete iterative algorithm to optimize k-t self-consistency, we extend it to a continuous formulation, thereby designing a stochastic diffusion equation in line with k-t self-consistency. Finally, by incorporating the score-matching method to estimate prior terms, we construct a diffusion model for dMRI. Experimental results on a cardiac dMRI dataset showcase the superiority of our method over current state-of-the-art techniques. Our approach exhibits remarkable reconstruction potential even at extremely high acceleration factors, reaching up to 24X, and demonstrates robust generalization for dynamic data with temporally shuffled frames.
k-t Self-Consistency Diffusion: A Physics-Informed Model for Dynamic MR Imaging
[ "Liu, Ye", "Cui, Zhuo-Xu", "Sun, Kaicong", "Zhao, Ting", "Cheng, Jing", "Zhu, Yuliang", "Shen, Dinggang", "Liang, Dong" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
699