---
license: mit
---

# Ascites Segmentation with nnUNet

## Method 1: Run Inference using nnunet_predict.py

1. Install the latest version of nnUNet and PyTorch:

        user@machine:~/ascites_segmentation$ pip install torch torchvision torchaudio nnunet matplotlib

2. Run inference with:

        user@machine:~/ascites_segmentation$ python nnunet_predict.py -i file_list.txt -t TMP_DIR -o OUTPUT_FOLDER -m /path/to/nnunet/model_weights

        usage: nnunet_predict.py [-h] [-i INPUT_LIST] -t TMP_FOLDER -o OUTPUT_FOLDER -m MODEL [-v]

        Inference using nnU-Net predict_from_folder Python API

        optional arguments:
          -h, --help            show this help message and exit
          -i INPUT_LIST, --input_list INPUT_LIST
                                Input image file_list.txt
          -t TMP_FOLDER, --tmp_folder TMP_FOLDER
                                Temporary folder
          -o OUTPUT_FOLDER, --output_folder OUTPUT_FOLDER
                                Output Segmentation folder
          -m MODEL, --model MODEL
                                Trained Model
          -v, --verbose         Verbose Output

N.B.

- The model_weights folder should contain fold0, fold1, etc.
- WARNING: the program will try to create file links first, but will fall back to copying the files if that fails.
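file_list.txt is simply a list of the input scans. As a minimal sketch, assuming nnunet_predict.py expects one path to a .nii.gz volume per line (the folder /data/ct below is a placeholder), the list can be generated with:

    # Hypothetical example: collect every .nii.gz scan under /data/ct into file_list.txt,
    # one path per line (assumed input format for nnunet_predict.py)
    find /data/ct -name '*.nii.gz' | sort > file_list.txt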

## Method 2: Run Inference using nnUNet_predict from shell

1. Install the latest version of nnUNet and PyTorch:

        user@machine:~/ascites_segmentation$ pip install torch torchvision torchaudio nnunet matplotlib

2. Place the checkpoints in the following directory tree:

        user@machine:~/ascites_segmentation$ tree .
        .
        ├── nnUNet_preprocessed
        ├── nnUNet_raw_data_base
        └── nnUNet_trained_models
            └── nnUNet
                └── 3d_fullres
                    └── Task505_TCGA-OV
                        └── nnUNetTrainerV2__nnUNetPlansv2.1
                            ├── fold_0
                            │   ├── debug.json
                            │   ├── model_final_checkpoint.model
                            │   ├── model_final_checkpoint.model.pkl
                            │   └── progress.png
                            ├── fold_1
                            │   ├── debug.json
                            │   ├── model_final_checkpoint.model
                            │   ├── model_final_checkpoint.model.pkl
                            │   └── progress.png
                            ├── fold_2
                            │   ├── model_final_checkpoint.model
                            │   ├── model_final_checkpoint.model.pkl
                            │   └── progress.png
                            ├── fold_3
                            │   ├── model_final_checkpoint.model
                            │   ├── model_final_checkpoint.model.pkl
                            │   └── progress.png
                            ├── fold_4
                            │   ├── model_final_checkpoint.model
                            │   ├── model_final_checkpoint.model.pkl
                            │   └── progress.png
                            └── plans.pkl
3. Set up environment variables so that nnU-Net knows where to find the trained models:

        user@machine:~/ascites_segmentation$ export nnUNet_raw_data_base="/absolute/path/to/nnUNet_raw_data_base"
        user@machine:~/ascites_segmentation$ export nnUNet_preprocessed="/absolute/path/to/nnUNet_preprocessed"
        user@machine:~/ascites_segmentation$ export RESULTS_FOLDER="/absolute/path/to/nnUNet_trained_models"
4. Run inference with:

        user@machine:~/ascites_segmentation$ nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t 505 -m 3d_fullres -f N --save_npz

where:

- -i: input folder of .nii.gz scans to predict. N.B. filenames need to end with _0000.nii.gz to tell nnU-Net that there is only one modality (a renaming sketch follows this list).
- -o: output folder for the predicted segmentations; created automatically if it does not exist.
- -t 505: (do not change) task ID of the pretrained ascites model.
- -m 3d_fullres: (do not change) configuration of the pretrained ascites model.
- -f N: fold of the pretrained ascites model; N can be 0, 1, 2, 3 or 4.
- --save_npz: save softmax scores, required for ensembling multiple folds.
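If the scans in INPUT_FOLDER do not yet carry the _0000 suffix, they can be renamed in place. This is a minimal sketch, assuming single-modality CT volumes with plain .nii.gz names (back up the data before bulk renaming):

    # Append the _0000 modality suffix that nnU-Net expects to every scan in INPUT_FOLDER
    for f in INPUT_FOLDER/*.nii.gz; do
        case "$f" in
            *_0000.nii.gz) ;;                         # already suffixed, leave untouched
            *) mv "$f" "${f%.nii.gz}_0000.nii.gz" ;;  # scan.nii.gz -> scan_0000.nii.gz
        esac
    done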

### Optional [Additional] Inference Steps

a. Use nnUNet_find_best_configuration to automatically get the inference commands needed to run the trained model on data.

b. Ensemble predictions using nnUNet_ensemble by running:

    user@machine:~/ascites_segmentation$ nnUNet_ensemble -f FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER -pp POSTPROCESSING_FILE

where FOLDER1 and FOLDER2 are folders of predictions produced by nnUNet_predict (this requires --save_npz when running nnUNet_predict).
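As a concrete example of (b), the sketch below predicts with each of the five folds into separate folders and then ensembles them. Folder names are placeholders, and -pp is omitted; a postprocessing file obtained from nnUNet_find_best_configuration can be supplied with -pp as shown above.

    # Predict with all 5 folds separately, saving softmax scores for ensembling
    for N in 0 1 2 3 4; do
        nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER_fold$N -t 505 -m 3d_fullres -f $N --save_npz
    done

    # Merge the per-fold predictions into a single segmentation per scan
    nnUNet_ensemble -f OUTPUT_FOLDER_fold0 OUTPUT_FOLDER_fold1 OUTPUT_FOLDER_fold2 OUTPUT_FOLDER_fold3 OUTPUT_FOLDER_fold4 -o OUTPUT_FOLDER_ensemble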

## Method 3: Docker Inference

Requires nvidia-docker to be installed on the system (Installation Guide). The nnunet_docker image predicts ascites with all 5 trained folds and ensembles the outputs into a single prediction.

1. Build the nnunet_docker image from the Dockerfile:

        user@machine:~/ascites_segmentation$ sudo docker build -t nnunet_docker .

2. Run the docker image on test volumes:

        user@machine:~/ascites_segmentation$ sudo docker run \
            --gpus 0 \
            --volume /absolute/path/to/INPUT_FOLDER:/tmp/INPUT_FOLDER \
            --volume /absolute/path/to/OUTPUT_FOLDER:/tmp/OUTPUT_FOLDER \
            nnunet_docker /bin/sh inference.sh
- --gpus parameter:
  - 0, 1, 2, ..., n for an integer number of GPUs
  - all for all available GPUs on the system
  - '"device=2,3"' to select specific GPUs by ID
- --volume parameter:
  - /absolute/path/to/INPUT_FOLDER and /absolute/path/to/OUTPUT_FOLDER are folders on the host system and need to be specified
  - INPUT_FOLDER contains all .nii.gz volumes to be predicted
  - predicted results will be written to OUTPUT_FOLDER
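For example, only the --gpus argument of the run command above changes when selecting different GPUs (paths remain placeholders as before):

    # Use every GPU available on the host
    sudo docker run --gpus all \
        --volume /absolute/path/to/INPUT_FOLDER:/tmp/INPUT_FOLDER \
        --volume /absolute/path/to/OUTPUT_FOLDER:/tmp/OUTPUT_FOLDER \
        nnunet_docker /bin/sh inference.sh

    # Use only the GPUs with IDs 2 and 3
    sudo docker run --gpus '"device=2,3"' \
        --volume /absolute/path/to/INPUT_FOLDER:/tmp/INPUT_FOLDER \
        --volume /absolute/path/to/OUTPUT_FOLDER:/tmp/OUTPUT_FOLDER \
        nnunet_docker /bin/sh inference.sh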