---
license: mit
---

# Ascites Segmentation with nnUNet

## Method 1: Run Inference using `nnunet_predict.py`

1. Install the latest version of [nnUNet](https://github.com/MIC-DKFZ/nnUNet#installation) and [PyTorch](https://pytorch.org/get-started/locally/).

```shell
user@machine:~/ascites_segmentation$ pip install torch torchvision torchaudio nnunet matplotlib
```

2. Run inference with the following command (a hypothetical example of `file_list.txt` is shown after the usage output below):

```shell
user@machine:~/ascites_segmentation$ python nnunet_predict.py -i file_list.txt -t TMP_DIR -o OUTPUT_FOLDER -m /path/to/nnunet/model_weights
```

```shell
usage: nnunet_predict.py [-h] [-i INPUT_LIST] -t TMP_FOLDER -o OUTPUT_FOLDER -m MODEL [-v]

Inference using nnU-Net predict_from_folder Python API

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_LIST, --input_list INPUT_LIST
                        Input image file_list.txt
  -t TMP_FOLDER, --tmp_folder TMP_FOLDER
                        Temporary folder
  -o OUTPUT_FOLDER, --output_folder OUTPUT_FOLDER
                        Output Segmentation folder
  -m MODEL, --model MODEL
                        Trained Model
  -v, --verbose         Verbose Output
```
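
The exact format of `file_list.txt` is not documented above; presumably it is a plain-text list of the CT volumes to segment, one path per line. The paths below are hypothetical:

```shell
# hypothetical example: one CT volume path per line
user@machine:~/ascites_segmentation$ cat file_list.txt
/data/ct/case_001.nii.gz
/data/ct/case_002.nii.gz
/data/ct/case_003.nii.gz
```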



N.B.
- the `model_weights` folder should contain the fold subfolders (`fold_0`, `fold_1`, etc., as in the directory tree shown in Method 2)
- WARNING: the program tries to create file links for the inputs first, and falls back to copying the files if linking fails (see the illustrative sketch below)
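
Illustratively, the staging behaviour described in the warning is roughly equivalent to the shell fallback below; the paths are hypothetical and the actual staging happens inside `nnunet_predict.py`. Links can fail, for example, when `TMP_DIR` lives on a different filesystem than the inputs, which is why a copy fallback is needed:

```shell
# hypothetical paths; try a hard link into the temporary folder first, copy if linking fails
# (the `_0000` suffix marks the single CT modality, as nnU-Net expects; the exact naming used
#  by the script is an assumption)
user@machine:~/ascites_segmentation$ ln /data/ct/case_001.nii.gz TMP_DIR/case_001_0000.nii.gz \
    || cp /data/ct/case_001.nii.gz TMP_DIR/case_001_0000.nii.gz
```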


## Method 2: Run Inference using `nnUNet_predict` from shell

1. Install the latest version of [nnUNet](https://github.com/MIC-DKFZ/nnUNet#installation) and [PyTorch](https://pytorch.org/get-started/locally/).

```shell
user@machine:~/ascites_segmentation$ pip install torch torchvision torchaudio nnunet matplotlib
```

2. Place the trained checkpoints in the following directory tree:

```shell
user@machine:~/ascites_segmentation$ tree .
.
β”œβ”€β”€ nnUNet_preprocessed
β”œβ”€β”€ nnUNet_raw_data_base
└── nnUNet_trained_models
    └── nnUNet
        └── 3d_fullres
            └── Task505_TCGA-OV
                └── nnUNetTrainerV2__nnUNetPlansv2.1
                    β”œβ”€β”€ fold_0
                    β”‚   β”œβ”€β”€ debug.json
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model.pkl
                    β”‚   └── progress.png
                    β”œβ”€β”€ fold_1
                    β”‚   β”œβ”€β”€ debug.json
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model.pkl
                    β”‚   └── progress.png
                    β”œβ”€β”€ fold_2
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model.pkl
                    β”‚   └── progress.png
                    β”œβ”€β”€ fold_3
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model.pkl
                    β”‚   └── progress.png
                    β”œβ”€β”€ fold_4
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model
                    β”‚   β”œβ”€β”€ model_final_checkpoint.model.pkl
                    β”‚   └── progress.png
                    └── plans.pkl
```

3. Set up environment variables so that nnU-Net knows where to find the trained models:

```shell
user@machine:~/ascites_segmentation$ export nnUNet_raw_data_base="/absolute/path/to/nnUNet_raw_data_base"
user@machine:~/ascites_segmentation$ export nnUNet_preprocessed="/absolute/path/to/nnUNet_preprocessed"
user@machine:~/ascites_segmentation$ export RESULTS_FOLDER="/absolute/path/to/nnUNet_trained_models"
```

4. Run inference with command:

```shell
user@machine:~/ascites_segmentation$ nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t 505 -m 3d_fullres -f N --save_npz 
```

where:
- `-i`: input folder of `.nii.gz` scans to predict. N.B. each filename must end with `_0000.nii.gz` to tell nnU-Net that there is only one imaging modality
- `-o`: output folder for the predicted segmentations, created automatically if it does not exist
- `-t 505`: (do not change) task ID of the pretrained ascites model
- `-m 3d_fullres`: (do not change) configuration of the pretrained ascites model
- `-f N`: fold of the pretrained ascites model; `N` can be `0`, `1`, `2`, `3` or `4` (see the example after this list for running all folds)
- `--save_npz`: save the softmax scores; required for ensembling multiple folds
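
For example, to run all five folds separately so that their outputs can be ensembled later (the per-fold output folder names `OUTPUT_FOLDER_FOLD*` are illustrative placeholders):

```shell
# run every trained fold with --save_npz so that the softmax outputs can be ensembled afterwards
user@machine:~/ascites_segmentation$ for N in 0 1 2 3 4; do
    nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER_FOLD$N -t 505 -m 3d_fullres -f $N --save_npz
done
```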

### Optional Additional Inference Steps

a. Use `nnUNet_find_best_configuration` to automatically obtain the inference commands needed to run the trained model on your data.

b. Ensemble predictions using `nnUNet_ensemble` by running:

```shell
user@machine:~/ascites_segmentation$ nnUNet_ensemble -f FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER -pp POSTPROCESSING_FILE
```

where `FOLDER1`, `FOLDER2`, etc. are folders of predictions produced by `nnUNet_predict` (this requires passing `--save_npz` when running `nnUNet_predict`).
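
For example, to ensemble the five per-fold predictions generated above (the folder names are the same illustrative placeholders; `POSTPROCESSING_FILE` is kept as a placeholder for the postprocessing file supplied with the trained model):

```shell
# ensemble the per-fold predictions (requires the .npz files saved by --save_npz)
user@machine:~/ascites_segmentation$ nnUNet_ensemble \
    -f OUTPUT_FOLDER_FOLD0 OUTPUT_FOLDER_FOLD1 OUTPUT_FOLDER_FOLD2 OUTPUT_FOLDER_FOLD3 OUTPUT_FOLDER_FOLD4 \
    -o OUTPUT_FOLDER_ENSEMBLE -pp POSTPROCESSING_FILE
```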

## Method 3: Docker Inference

Requires `nvidia-docker` to be installed on the system ([Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)). The `nnunet_docker` image predicts ascites with all 5 trained folds and ensembles the outputs into a single prediction.

1. Build the `nnunet_docker` image from `Dockerfile`:

```shell
user@machine:~/ascites_segmentation$ sudo docker build -t nnunet_docker .
```

2. Run the Docker image on the test volumes:

```shell
user@machine:~/ascites_segmentation$ sudo docker run \
--gpus 0 \
--volume /absolute/path/to/INPUT_FOLDER:/tmp/INPUT_FOLDER \
--volume /absolute/path/to/OUTPUT_FOLDER:/tmp/OUTPUT_FOLDER \
nnunet_docker /bin/sh inference.sh
```



- `--gpus` parameter:
  - `0, 1, 2, ..., n`: an integer number of GPUs
  - `all`: all available GPUs on the system
  - `'"device=2,3"'`: specific GPUs selected by ID

- `--volume` parameter:
  - `/absolute/path/to/INPUT_FOLDER` and `/absolute/path/to/OUTPUT_FOLDER` must be absolute paths to folders on the host system
  - `INPUT_FOLDER` contains all `.nii.gz` volumes to be predicted
  - predicted results are written to `OUTPUT_FOLDER` (a concrete example follows below)
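
For example, a run pinned to GPU 0 with concrete host paths could look like the following (the host paths are placeholders):

```shell
# illustrative run on a single specific GPU; host paths are placeholders
user@machine:~/ascites_segmentation$ sudo docker run \
--gpus '"device=0"' \
--volume /home/user/ct_scans:/tmp/INPUT_FOLDER \
--volume /home/user/ascites_predictions:/tmp/OUTPUT_FOLDER \
nnunet_docker /bin/sh inference.sh
```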