---
task_categories:
- image-segmentation
language:
- en
tags:
- medical
- blood-vessel
- octa
pretty_name: Simulation-Based Segmentation of Blood Vessels in Cerebral 3D OCTA Images
size_categories:
- 1K<n<10K
---
# syn-cerebral-octa-seg

## Introduction
Accurately segmenting blood vessels in cerebral 3D Optical Coherence Tomography Angiography (OCTA) images requires a vast amount of voxel-level annotations. However, curating manual annotations is a cumbersome and time-consuming task. To alleviate the need for manual annotation, we provide realistic synthetic data generated via our proposed synthesis pipeline.
Our proposed synthesis pipeline is described in detail in our manuscript (Simulation-Based Segmentation of Blood Vessels in Cerebral 3D OCTA Images). Corresponding code and additional information can be found on GitHub.
TL;DR:
1. We selectively extract patches from vessel graphs that match the FOV and morphological characteristics of the vasculature contained in cerebral OCTA images, and transform them into voxelized volumes.
2. We transform the voxelized volumes into synthetic cerebral 3D OCTA images by simulating the most dominant image acquisition artifacts.
3. We use our synthetic cerebral 3D OCTA images, paired with their matching ground truth labels, to train a blood vessel segmentation network.
## Dataset Summary
The voxel size of all provided images is isotropic and corresponds to 2 μm.
```
synthetic_cerebral_octa/
├── axxxx_0/
│   ├── sim/
│   │   ├── sim_data_xx.npy   # synthetic cerebral 3D OCTA image
│   │   └── sim_seg_xx.npy    # ground truth
│   ├── ang.npy               # metadata (angle)
│   ├── occ.npy               # metadata (occupancy below)
│   ├── rad.npy               # metadata (radius)
│   └── seg.npy               # voxelized volume
├── axxxx_1/
└── ...
manual_annotations/
├── mx_0.nii                  # real cerebral 3D OCTA image
├── mx_0_label.nii            # ground truth (manual annotations)
└── ...
```
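The synthetic volumes are plain NumPy arrays, so an image/label pair can be read directly with `np.load`. The sketch below shows a minimal, hypothetical loader following the directory layout above; the `load_sample` helper and the demo volume shapes are our own illustration, not part of the dataset tooling (the real `.nii` files under `manual_annotations/` would instead be read with a NIfTI library such as nibabel).

```python
import os
import tempfile

import numpy as np


def load_sample(sample_dir, idx):
    """Load one synthetic image/label pair from a sample directory
    laid out as in the tree above (sim/sim_data_xx.npy, sim/sim_seg_xx.npy).
    The two-digit index format is an assumption for illustration."""
    img = np.load(os.path.join(sample_dir, "sim", f"sim_data_{idx:02d}.npy"))
    seg = np.load(os.path.join(sample_dir, "sim", f"sim_seg_{idx:02d}.npy"))
    # Image and label volumes must align voxel-wise for segmentation training.
    assert img.shape == seg.shape
    return img, seg


# Demo on a dummy sample so the snippet is self-contained;
# real volumes live under synthetic_cerebral_octa/axxxx_*/.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "sim"))
np.save(os.path.join(tmp, "sim", "sim_data_00.npy"),
        np.random.rand(64, 64, 64).astype(np.float32))
np.save(os.path.join(tmp, "sim", "sim_seg_00.npy"),
        np.zeros((64, 64, 64), dtype=np.uint8))

img, seg = load_sample(tmp, 0)
print(img.shape, seg.dtype)
```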
## Citation
If you find our data useful for your research, please consider citing:
```bibtex
@misc{wittmann2024simulationbased,
      title={Simulation-Based Segmentation of Blood Vessels in Cerebral 3D OCTA Images},
      author={Bastian Wittmann and Lukas Glandorf and Johannes C. Paetzold and Tamaz Amiranashvili and Thomas Wälchli and Daniel Razansky and Bjoern Menze},
      year={2024},
      eprint={2403.07116},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}
```