WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis
This is the official model repository for the paper "WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis" by Paul Friedrich, Julia Wolleb, Florentin Bieder, Alicia Durrer, and Philippe C. Cattin.
WDM is a wavelet-based medical image synthesis framework that generates high-resolution 3D medical images such as CT or MR scans. For more information on our method, please refer to our project page or the paper.
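The sketch below illustrates the kind of 3D wavelet decomposition such a framework operates on, using PyWavelets (`pywt`) with a Haar wavelet on a toy volume. It is only an illustration of the transform itself, not the exact pipeline from the paper.

```python
import numpy as np
import pywt

# Toy 3D volume standing in for a 128 x 128 x 128 CT/MR scan.
volume = np.random.rand(128, 128, 128).astype(np.float32)

# Single-level 3D discrete wavelet transform: one low-frequency band ('aaa')
# and seven high-frequency detail bands, each of size 64 x 64 x 64.
coeffs = pywt.dwtn(volume, wavelet="haar")
print(sorted(coeffs.keys()))   # ['aaa', 'aad', 'ada', 'add', 'daa', 'dad', 'dda', 'ddd']
print(coeffs["aaa"].shape)     # (64, 64, 64)

# A wavelet diffusion model works on these half-resolution coefficients;
# the full-resolution volume is recovered with the inverse transform.
reconstructed = pywt.idwtn(coeffs, wavelet="haar")
print(np.allclose(reconstructed, volume, atol=1e-5))  # True up to numerical error
```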
Original GitHub repository
If you want to use the pre-trained models provided in this repository, download the model weights and follow the instructions in the official GitHub repository.
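As an illustration only, weights hosted on the Hugging Face Hub can be fetched with `huggingface_hub`; the `repo_id` and `filename` below are placeholders, not the actual identifiers, and should be replaced with the values shown on this page.

```python
from huggingface_hub import hf_hub_download

# Placeholder identifiers -- replace with the repository name and
# checkpoint filename listed for the model you want to use.
checkpoint_path = hf_hub_download(
    repo_id="<user>/<wdm-model-repo>",
    filename="<checkpoint>.pt",
)
print(checkpoint_path)  # local path to the downloaded weights
```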
Pre-trained models
BraTS 2023 (T1-weighted brain MR image generation)
- Download model (Resolution: 128 x 128 x 128, Backbone: U-Net, Trained: 1.2M iterations)
LIDC-IDRI (Lung CT image generation)
- Download model (Resolution: 128 x 128 x 128, Backbone: U-Net, Trained: 1.2M iterations)
Hardware requirements
To sample images from the provided models, you need a GPU with at least:
- 3 GB VRAM for 128 x 128 x 128 volumes (the model uses ~2.55 GB)
- 8 GB VRAM for 256 x 256 x 256 volumes (the model uses ~7.27 GB)
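A quick way to check whether your GPU meets these requirements is to query its total memory with PyTorch (a minimal sketch, assuming a CUDA-capable setup):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB VRAM")
    # ~3 GB suffices for 128^3 sampling, ~8 GB for 256^3.
else:
    print("No CUDA-capable GPU detected.")
```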
The models were trained on a system with an AMD Epyc 7742 CPU and an NVIDIA A100 (40GB) GPU.
Citation
If you find this work useful, please cite:
@article{friedrich2024wdm,
  title={WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis},
  author={Paul Friedrich and Julia Wolleb and Florentin Bieder and Alicia Durrer and Philippe C. Cattin},
  journal={arXiv preprint arXiv:2402.19043},
  year={2024}
}