taesiri commited on
Commit
37ba27f
1 Parent(s): 7c0927e

Upload papers/2312/2312.04727.tex with huggingface_hub

Browse files
Files changed (1)
  1. papers/2312/2312.04727.tex +827 -0
papers/2312/2312.04727.tex ADDED
@@ -0,0 +1,827 @@
1
+
2
+
3
+ \documentclass[nohyperref]{article}
4
+
5
+ \usepackage{microtype}
6
+ \usepackage{graphicx}
7
+ \usepackage{subfigure}
8
+ \usepackage{booktabs} \usepackage{tablefootnote}
9
+
10
+
11
+ \usepackage{hyperref}
12
+
13
+
14
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
15
+
16
+
17
+
18
+ \usepackage[accepted]{icml2023}
19
+
20
+ \usepackage{amsmath}
21
+ \usepackage{amssymb}
22
+ \usepackage{mathtools}
23
+ \usepackage{amsthm}
24
+ \usepackage{threeparttable}
25
+
26
+ \usepackage[capitalize,noabbrev]{cleveref}
27
+
28
+
29
+
30
+ \newtheorem{assumption}{Assumption}
31
+ \newtheorem{theorem}{Theorem}
32
+ \newtheorem{lemma}{Lemma}
33
+ \newtheorem{definition}{Definition}
34
+ \newtheorem{remark}{Remark}
35
+
36
+ \usepackage{algorithm, algorithmic}
37
+ \usepackage{graphicx}
38
+ \usepackage{amsmath}
39
+ \usepackage{amssymb}
40
+ \usepackage{booktabs}
41
+ \usepackage{bbding}
42
+ \usepackage{multirow}
43
+ \usepackage{footnote}
44
+ \usepackage{amsthm,amsmath,amssymb}
45
+ \usepackage{ftnxtra}
46
+ \usepackage{fnpos}
47
+
48
+
49
+ \usepackage[textsize=tiny]{todonotes}
50
+
51
+
52
+ \icmltitlerunning{E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient
53
+ 3D Medical Image Segmentation}
54
+
55
+ \begin{document}
56
+ \twocolumn[
57
+ \icmltitle{E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient \\ 3D Medical Image Segmentation}
58
+
59
+
60
+
61
+
62
+
63
+ \icmlsetsymbol{equal}{*}
64
+
65
+ \begin{icmlauthorlist}
66
+ \icmlauthor{Boqian Wu}{equal,aaa,ccc}
67
+ \icmlauthor{Qiao Xiao}{equal,bbb}
68
+ \icmlauthor{Shiwei Liu}{bbb,ddd}
69
+ \icmlauthor{Lu Yin}{bbb}
70
+ \icmlauthor{Mykola Pechenizkiy}{bbb}
71
+ \icmlauthor{Decebal Constantin Mocanu}{ccc,bbb,aaa}
72
+ \icmlauthor{Maurice Van Keulen}{aaa}
73
+ \icmlauthor{Elena Mocanu}{aaa}
74
+ \end{icmlauthorlist}
75
+
76
+ \icmlaffiliation{aaa}{University of Twente;}
77
+ \icmlaffiliation{bbb}{Eindhoven University of Technology;}
78
+ \icmlaffiliation{ccc}{University of Luxembourg;}
79
+ \icmlaffiliation{ddd}{University of Oxford}
80
+
81
+ \icmlcorrespondingauthor{Boqian Wu}{b.wu@utwente.nl}
82
+
83
+ \icmlkeywords{Machine Learning, ICML}
84
+
85
+ \vskip 0.3in
86
+ ]
87
+
88
+
89
+
90
+
91
+
92
+ \printAffiliationsAndNotice{\icmlEqualContribution.}
93
+
94
+ \begin{abstract}
95
+ Deep neural networks have evolved into the leading approach for 3D medical image segmentation due to their outstanding performance. However, the ever-increasing model size and computational cost of deep neural networks have become the primary barrier to deploying them on real-world, resource-limited hardware. In pursuit of both performance and efficiency, we propose a 3D medical image segmentation model, named Efficient to Efficient Network (E2ENet), incorporating two parametrically and computationally efficient designs. i. Dynamic sparse feature fusion (DSFF) mechanism: it adaptively learns to fuse informative multi-scale features while reducing redundancy. ii. Restricted depth-shift in 3D convolution: it leverages 3D spatial information while keeping the model and computational complexity on par with 2D-based methods. We conduct extensive experiments on BTCV, AMOS-CT and the Brain Tumor Segmentation Challenge, demonstrating that E2ENet consistently achieves a better accuracy-efficiency trade-off than prior art across various resource constraints. In particular, E2ENet achieves comparable accuracy on the large-scale AMOS-CT challenge while saving over $68\%$ of the parameter count and $29\%$ of the FLOPs in the inference phase, compared with the previous best-performing method. {Our code has been made available at: \url{https://github.com/boqian333/E2ENet-Medical}}.
96
+ \end{abstract}
97
+
98
+ \section{Introduction}
99
+ \label{sec:intro}
100
+
101
+ \begin{figure*}[!htb]
102
+ \centering
103
+ \includegraphics[width=\textwidth]{network-comparison.pdf}
104
+ \caption{A comparison of feature fusion schemes. The purple nodes depict features extracted from the backbone, while the green nodes depict the fused features. In particular, in DiNTS (d), red lines indicate information flow paths determined through neural architecture search techniques. In E2ENet (e), the red lines with different widths represent sparse information flows determined by the DSFF mechanism, allowing for efficient feature fusion. E2ENet dynamically learns how much of the fused features is drawn from the backbone.}
105
+ \label{fig:comparison}
106
+ \end{figure*}
107
+
108
+
109
+ 3D medical image segmentation plays an essential role in numerous clinical applications, including computer-aided diagnosis \cite{yu2020c2fnas} and image-guided surgery systems \citep{ronneberger2015u}. Over the past decade, the rapid development of deep neural networks has achieved tremendous breakthroughs and significantly boosted progress in this area \citep{zhou2018unet++, huang2020unet, isensee2021nnu}. However, model sizes and computational costs are also exploding, which deters deployment in many real-world applications, especially for 3D models whose resource consumption scales cubically \citep{hu2021pseudo, valanarasu2022unext}. This naturally raises a research question: \textit{Can we design a 3D medical image segmentation method that trades off accuracy and efficiency better under different levels of resource availability?}
110
+
111
+
112
+
113
+ Accurately segmenting organs is a challenging task in 3D medical image segmentation due to the variability in size and shape even among the same type of organ, caused by factors such as patient anatomy and disease stage. This diversity amplifies the difficulty of accurately identifying and distinguishing the boundaries of different organs, leading to potential segmentation errors.
114
+ One of the main routes to accurate medical image segmentation is to effectively leverage the multi-scale features extracted by the backbone network, but doing so remains a long-standing problem.
115
+ The pioneering UNet \citep{ronneberger2015u} utilizes skip connections to propagate detailed information to high-level features.
116
+ More recently, UNet++ \citep{zhou2018unet++}, CoTr \citep{xie2021cotr} and DiNTS \citep{he2021dints} have developed more complex neural network architectures (e.g. densely nested skip connections and attention mechanisms) or optimization techniques (e.g. neural architecture search (NAS) \citep{elsken2019neural}) for cross-scale feature fusion. {NAS first searches for the network topology and operators (e.g., $3\times3$ convolution and max-pooling) and subsequently optimizes the model weights. However, NAS typically demands a significant amount of computational resources to explore the network topology and assess numerous candidate architectures, making it computationally expensive and time-consuming. For instance, C2FNAS \citep{C2FNAS} requires nearly one GPU year to discover a single 3D segmentation architecture; DiNTS \citep{he2021dints} improves the search efficiency but still requires 5.8 GPU days to find a single architecture.} {Compared to NAS approaches, our method does not incur costly architecture search time and directly optimizes/searches the sparse topology within a pre-defined architecture.}
117
+
118
+ In this paper, we propose \textbf{Efficient to Efficient Network (E2ENet)}, a model that can efficiently incorporate both bottom-up and top-down information flows from the backbone network in a dynamic sparse pattern, achieving a much better accuracy-efficiency trade-off. As shown in Figure \ref{fig:comparison} (e), E2ENet incorporates multi-scale features from the backbone into the final output by gradually fusing the adjacent features, allowing the network to fully utilize information from various scales. To prevent unnecessary information aggregation, a \textbf{dynamic sparse feature fusion (DSFF)} mechanism is proposed and embedded in each fusion node. The DSFF mechanism adaptively integrates relevant multi-scale features and filters out unnecessary ones during the course of the training process, significantly reducing the computational overhead without sacrificing performance. Additionally, to further improve efficiency, our E2ENet employs a \textbf{restricted depth-shift} strategy in 3D convolutions, which is derived from temporal shift \citep{lin2019tsm} and 3D-shift \citep{fan2020rubiksnet} used in efficient video action recognition. This allows the 3D convolution operation with kernel size $(1,3,3)$ to capture 3D spatial relationships while maintaining the computation and parameter complexity of 2D convolutions.
119
+
120
+ To evaluate the performance of our proposed E2ENet, we conducted extensive experiments on three medical image segmentation challenges: BTCV, AMOS-CT and the Brain Tumor Segmentation
121
+ Challenge. E2ENet trades off segmentation accuracy and model efficiency more effectively than both convolution-based and transformer-based architectures. In particular, as shown in Figure \ref{fig:comparison1}, on the AMOS-CT challenge, E2ENet achieves competitive accuracy with an mDice of 90.1\%, while being 69\% smaller and using 29\% fewer FLOPs in the inference phase. Furthermore, experiments demonstrate that even when the DSFF mechanism filters out 90\% of the connections during feature fusion, E2ENet maintains comparable mDice performance with only a minor impact.
122
+
123
+ \begin{figure}[!htb]
124
+ \centering \includegraphics[width=0.45\textwidth]{E2ENet-Page-intro.pdf}
125
+ \vskip -0.15in
126
+ \caption{Performance comparison of E2ENet with three different levels of sparsity $0.9$, $0.8$ and $0.7$ and other models on the AMOS-CT challenge in terms of mDice (\%), Params (M), and inference FLOPs (G). The feature sparsity level, denoted by $S$, is explained in Section \ref{sec:Dynamic Sparse Feature Fusion}.} \vskip -0.1in
127
+ \label{fig:comparison1}
128
+ \end{figure}
129
+
130
+
131
+ \section{Related Work}
132
+ \label{sec:related work}
133
+ \subsection{3D Medical Image Segmentation}
134
+ Convolutional neural networks (CNNs) have become the dominant architecture for 3D medical image segmentation in recent years (e.g. 3D UNet \citep{cciccek20163d}, UNet++ \citep{zhou2018unet++}, UNet3+ \citep{huang2020unet}, PaNN \citep{zhou2019prior} and nnUNet \citep{isensee2021nnu}), due to their ability to capture local and weight-sharing dependencies \citep{d2021convit, dai2021coatnet}. However, some recent methods have attempted to incorporate transformer modules into CNNs (e.g. CoTr \citep{xie2021cotr}, TransBTS \citep{wang2021transbts}), or use pure transformer architectures (e.g. ConvIt \citep{karimi2021convolution}, nnFormer \citep{zhou2021nnformer}, Swin UNet \citep{cao2021swin}), in order to capture long-range dependencies. These transformer-based approaches often require large amounts of training data, longer training times, or specialized training techniques, and can also be computationally expensive. In this paper, we propose an alternative method for efficiently incorporating 3D contextual information using a restricted depth-shift strategy in 3D convolutions, and further improving performance through adaptive multi-scale feature fusion.
135
+
136
+ \subsection{Feature Fusion in Medical Image Segmentation}
137
+ Multi-scale feature fusion is a crucial technique in medical image segmentation that allows a model to detect objects across a range of scales, while also recovering spatial information that is lost during pooling \citep{wang2022uctransnet, xie2021cotr}. However, effectively representing and processing multi-scale hierarchical features can be challenging, and simply summing them up without distinction can lead to semantic gaps and degraded performance \citep{wang2022uctransnet, tan2020efficientdet}. To address this issue, various approaches have been proposed, including learnable operations that reduce the gap, such as residual paths \citep{ibtehaz2020multiresunet} and attention blocks \citep{oktay2018attention}. More recently, UNet++ \citep{zhou2018unet++} and its variants \citep{li2020attention, huang2020unet, jha2019resunet++} have adapted the gating signal to dense nesting levels, taking as many feature levels as possible into account. NAS-UNet \citep{weng2019unet} tries to automatically search for a better feature fusion topology. While these methods achieve better performance, they can also incur significant computational and information redundancy. {Dynamic convolution \citep{Dynamiczhuo, YinpengDynamic} utilizes coefficient prediction or attention modules to dynamically aggregate convolution kernels, thereby reducing computation costs. In our paper, we propose an intuitive approach to optimizing multi-scale feature fusion, which enables selective leveraging of \textbf{sparse} feature representations from fine-grained to semantic levels through the proposed dynamic sparse feature fusion mechanism}.
138
+
139
+ \subsection{Sparse Training}
140
+ Recently, sparse training techniques have shown the possibility of training an efficient network with sparse connections that match (or even outperform) the performance of dense counterparts with lower computational cost \citep{mocanu2018scalable, liu2021onemillion}. Beginning with \citep{MocanuMNGL16}, it has been demonstrated that initializing a static sparse network without optimizing its topology during training can also yield comparable performance in certain situations \citep{lee2018snip, tanaka2020pruning, wang2019picking}. However, Dynamic Sparse Training (DST), also known as sparse training with dynamic sparsity \citep{mocanu2018scalable}, offers a different approach by jointly optimizing the sparse topology and weights during the training process starting from a sparse network \citep{liu2021sparse, liu2022more, evci2020rigging, jayakumar2020top, mostafa2019parameter, yuan2021mest}. This allows the model's sparse connections to gradually evolve in a prune-and-grow scheme, leading to improved performance compared to naively training a static sparse network \citep{LiuYMP21, xiao2022dynamic}. In contrast to prior methods that aim to find sparse networks that can match the performance of corresponding dense networks, we aim to leverage DST to adaptively fuse multi-scale features in a computationally efficient manner for 3D medical image segmentation.
141
+
142
+ \section{Methodology}
143
+ \label{sec:method}
144
+
145
+ \begin{figure*}[!t]
146
+ \centering
147
+ \includegraphics[width=0.8\textwidth]{E2ENet-framework2.pdf}
148
+ \caption{The overall architecture of the proposed E2ENet model consists of a CNN backbone that extracts multiple levels of features. These features are then gradually aggregated through several stages, during which the multi-scale features are fused using a fusion operation. The model has three main flow pathways: Downward flow (yellow) provides details from a lower to a higher level by down-sampling the features; Forward flow (red) applies multi-scale feature fusion while maintaining resolution; Upward flow (blue) provides semantic information from a higher level to lower levels.}
149
+ \label{fig:framework}
150
+ \end{figure*}
151
+
152
+ In this section, we first present the overall architecture of E2ENet, which allows for the fusion of multi-scale features from three directions. Next, we describe the proposed DSFF mechanism, which adaptively selects informative features and filters out unnecessary ones during training. Lastly, to further increase efficiency, we introduce the use of restricted depth-shift in 3D convolution.
153
+
154
+ \subsection{The Architecture}
155
+ \label{sec:the Architecture}
156
+ Figure \ref{fig:framework} provides an overview of the proposed architecture. Given the input 3D image $I_{in} \in \mathbb{R}^{D \times H \times W}$, the CNN backbone extracts feature maps at multiple scales, represented by ${\mathbf{x}}^{0}=\left(\mathbf{x}^{0,1}, \mathbf{x}^{0,2}, \ldots, \mathbf{x}^{0,L} \right)$, where $L$ is the total number of feature scales. The feature at level $i$, $\mathbf{x}^{0, i}$, is a tensor with dimensions $d_{i} \times h_{i} \times w_{i} \times c_{i}$, where $d_{i}, h_{i}, w_{i}$ and $c_{i}$ represent the depth, height, width, and number of channels of the feature maps at level $i$, respectively. It is worth mentioning that the spatial resolution of the feature maps decreases as the level increases, while the number of channels increases to a maximum of $320$.
157
+ {The backbone generates a total of $L = 6$ multi-scale feature levels with channel numbers $[c_1, c_2, c_3, c_4, c_5, c_6] = [48, 96, 192, 320, 320, 320]$. At each level, the features are generated by two convolution layers with a kernel size of $(1, 3, 3)$, each followed by instance normalization and a leaky ReLU activation. The down-sampling ratios for each level are as follows: $(1, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 2)$.}
158
+
159
+
160
+
161
+
162
+ To fully exploit the hierarchical features extracted by the CNN backbone, they are aggregated through multiple stages, as depicted in Figure 3 (green part). At stage $j$ ($0<j \leq L-1$) of this process, the features at level $i$ in the current stage are the result of fusing the adjacent features from the previous stage along three directions:
163
+ 1. “\textit{Downward flow}” (in yellow): The high-resolution feature $\mathbf{x}^{j-1,i-1}$, which provides richer visual details, is passed downward;
164
+ 2. “\textit{Upward flow}” (in blue): The low-resolution feature $\mathbf{x}^{j-1,i+1}$, which captures more global context, is passed upward;
165
+ 3. “\textit{Forward flow}” (in red): The features $\mathbf{x}^{j-1,i}$, which maintain their spatial resolution, are passed forward for further information integration. The fused cross-scale feature map at the $i$-th level of the $j$-th stage can be formulated as:
166
+
167
+ \begin{equation}
168
+ \mathbf{x}^{j, i}=\left\{\begin{array}{ll}
169
+ \mathcal{F}^{j,i}([\mathbf{x}^{j-1,1},\mathcal{U}(\mathbf{x}^{j-1,2})]), & i=1; \\
170
+ \mathcal{F}^{j,i}([\mathcal{D}(\mathbf{x}^{j-1, i-1}),\mathbf{x}^{j-1,i}, \mathcal{U}(\mathbf{x}^{j-1,i+1})]), & others,
171
+ \end{array}\right.
172
+ \end{equation}
173
+
174
+ where $\mathcal{F}^{j,i}(.)$ is a fusion operation, consisting of a 3D convolution operation followed by Instance Normalization (IN) \citep{ulyanov2016instance} and a Leaky Rectified Linear Unit (LeakyReLU) \citep{maas2013rectifier}. $\mathcal{U}(.)$ and $\mathcal{D}(.)$ denote the up-sampling (transposed convolution with stride 2) and down-sampling (maxpooling), respectively. $[.]$ denotes the concatenation operation. In the 3D convolution operation, the input feature maps, which have a channel number of $C^{j,i}_{in}$ $(C^{j,i}_{in}=c_{i-1}+c_i+c_{i+1}$ or $c_i+c_{i+1})$, are fused and processed to produce output feature maps with a channel number of $C^{j,i}_{out}$ $(C^{j,i}_{out}= c_i)$.
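+
+ For concreteness, a minimal PyTorch-style sketch of a single fusion node is given below. It is an illustrative re-implementation under our assumptions (all class and variable names here are ours, not from the released code), it omits the DSFF masking and depth-shift introduced later, and it assumes the adjacent-scale inputs have already been resampled by $\mathcal{U}(.)$ and $\mathcal{D}(.)$.
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+
+ class FusionNode(nn.Module):
+     """Sketch of a fusion op: concat -> Conv3d(1x3x3) -> IN -> LeakyReLU."""
+     def __init__(self, c_in, c_out):
+         super().__init__()
+         self.conv = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3),
+                               padding=(0, 1, 1))
+         self.norm = nn.InstanceNorm3d(c_out)
+         self.act = nn.LeakyReLU(inplace=True)
+
+     def forward(self, feats):
+         # feats: already-resampled features, e.g.
+         # [D(x^{j-1,i-1}), x^{j-1,i}, U(x^{j-1,i+1})], same spatial size
+         return self.act(self.norm(self.conv(torch.cat(feats, dim=1))))
+
+ # e.g. fusing at level 2, where C_in = c_1 + c_2 + c_3 and C_out = c_2
+ fuse = FusionNode(c_in=48 + 96 + 192, c_out=96)
+ \end{verbatim}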
175
+
176
+ We have integrated a deep supervision strategy to improve the training process, as in \citep{isensee2021nnu}. This strategy involves using downsampled versions of the ground truth segmentation and the corresponding predictions at each scale $i$ to compute the loss $l_i$. As shown in Figure \ref{fig:framework}, multi-scaled feature maps are processed through a $1 \times 1 \times 1$ convolutional layer in the output module, to generate multi-scaled predictions. The overall training objective is the sum of the losses at all scales, $l = w_1 \times l_1 + w_2 \times l_2 + \cdots$, where the loss weights $w_i$ are normalized to sum to $1$ and decrease geometrically with the resolution.
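+
+ As a worked example (a sketch assuming three supervised scales, matching the weights $4/7$, $2/7$ and $1/7$ used in Algorithm \ref{alg:DSN}), the geometrically decreasing, normalized weights can be computed as:
+ \begin{verbatim}
+ def deep_supervision_weights(num_scales):
+     """Halve the weight at each coarser scale, then normalize to sum to 1."""
+     raw = [0.5 ** k for k in range(num_scales)]   # 1, 1/2, 1/4, ...
+     total = sum(raw)
+     return [r / total for r in raw]
+
+ print(deep_supervision_weights(3))   # -> approximately [4/7, 2/7, 1/7]
+ \end{verbatim}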
177
+
178
+
179
+
180
+
181
+
182
+ Unlike the UNet++ model, which only considers bottom-up information flows for image segmentation, our proposed E2ENet architecture incorporates both bottom-up and top-down information flows. This allows E2ENet to make use of both high-level contextual information and fine-grained details in order to produce more accurate segmentation maps (experimental results can be found in Table~\ref{tab:braint_results}).
183
+
184
+ \subsection{Dynamic Sparse Feature Fusion}
185
+
186
+ \label{sec:Dynamic Sparse Feature Fusion}
187
+
188
+ Such multi-stage cross-level feature propagation provides a more comprehensive understanding of the images, but it can also introduce redundant information, which necessitates careful treatment in feature fusion to ensure efficient interaction between scales or stages.
189
+ Our proposed DSFF mechanism addresses the issue of multi-scale feature fusion in an intuitive and effective way. It optimizes the process by allowing selective and adaptive use of features from different levels during training. This results in a more efficient feature fusion process with lower computational and memory overhead.
190
+
191
+ The DSFF mechanism is applied in each fusion operation, allowing the fusion operation $\mathcal{F}^{j,i}(\cdot)$ to select the informative feature maps from the input fused features. The feature map selection process is controlled by a binary mask $\mathbf{M}^{j,i} \in \{0,1\}^{C^{j,i}_{in} \times C^{j,i}_{out}}$, which is trained to filter out a fraction $S$ of unnecessary feature map connections by zeroing out all the values of the corresponding kernels in $\mathcal{F}^{j,i}(\cdot)$. With the DSFF mechanism, the output of the fusion operation is then computed as:
192
+ \begin{equation}
193
+ \mathbf{x}_{c_{out}, :, :, :}^{j,i} ={\sigma}(\mathcal{I}(\sum_{c=1}^{C^{j,i}_{in}} (\tilde{\mathbf{x}}^{j,i}_{c, :, :, :} * (\mathbf{M}^{j,i}_{c,c_{out}} {\cdot} \mathbf{{\theta}}^{j,i}_{c, c_{out}, :, :, :})))),
194
+ \end{equation}
195
+ where $\tilde{\mathbf{x}}_{c,:,:,:}^{j,i}$ is the input fused feature map at the $c$-th channel, $\mathbf{{\theta}}^{j,i}_{c, c_{out}, :, :, :}$ is the kernel (feature map connection, as in Figure \ref{fig:selection}) that connects the $c$-th input feature map to the $c_{out}$-th output feature map, $*$ is a convolution operation, and {$\cdot$ is the product of a scalar (i.e. the mask entry $\mathbf{M}^{j,i}_{c,c_{out}}$) and a tensor (i.e. the kernel $\mathbf{{\theta}}^{j,i}_{c, c_{out}, :, :, :}$)} \footnote{
196
+ {The computation cost of the masking operation is negligible. For instance, consider the feature map $\mathbf{x}_{c,:,:,:}^{j,i}$ with size $ D\times H \times W$, the kernel $\mathbf{{\theta}}^{j,i}_{c, c_{out}, :, :, :}$ with size of $3\times 3 \times 3$, and the number of kernels as $C_{in} \times C_{out}$. The computation cost of the masking operation alone is $C_{in} \times C_{out}$, whereas the combined cost of the masking operation and convolution operation is $D \times H \times W \times 3 \times 3 \times 3 \times C_{in} \times C_{out}.$}}.
197
+
198
+ $\mathbf{M}^{j,i}_{c,c_{out}}$ denotes the existence or absence of a connection between the $c$-th input feature map and the $c_{out}$-th output feature map. $\mathcal{I}(.)$ and $\sigma(.)$ denote the Instance Normalization and LeakyReLU, respectively.
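+
+ The masked fusion convolution above can be sketched as follows (an illustrative PyTorch-style snippet under our assumptions; the names are ours, and the mask here is sampled with an approximate, rather than exact, number of active connections):
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class SparseFusionConv3d(nn.Module):
+     """Sketch of a DSFF-masked fusion convolution: a binary mask over the
+     (C_out, C_in) kernel connections zeroes out pruned connections."""
+     def __init__(self, c_in, c_out, sparsity):
+         super().__init__()
+         self.weight = nn.Parameter(torch.empty(c_out, c_in, 1, 3, 3))
+         nn.init.kaiming_uniform_(self.weight, a=0.01)
+         # keep roughly a (1 - S) fraction of the C_in x C_out connections
+         self.register_buffer(
+             "mask", (torch.rand(c_out, c_in) > sparsity).float())
+         self.norm = nn.InstanceNorm3d(c_out)
+         self.act = nn.LeakyReLU(inplace=True)
+
+     def forward(self, x):
+         w = self.weight * self.mask[:, :, None, None, None]
+         return self.act(self.norm(F.conv3d(x, w, padding=(0, 1, 1))))
+ \end{verbatim}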
199
+
200
+ \begin{algorithm*}[!t]
201
+ \caption{The Training Process of Dynamic Sparse Feature Fusion (DSFF)}
202
+ \label{alg:DSN}
203
+ \begin{algorithmic}[1]
204
+ \REQUIRE Dataset $\mathcal{X}$ with label $\mathcal{Y}$; feature sparsity $S$; backbone $f_{\Theta}(.)$; Output Module: $f_{out}$;
205
+ Total training epochs: $T$; \\
206
+ evolution period: $\Delta T$; connection updating fraction: $f_{decay}\left(t ; \alpha, T\right)=\frac{\alpha}{2}\left(1+\cos \left(\frac{t \pi}{T}\right)\right)$, {where $\alpha$ is the fraction of connections updated at the initial topology update, set to $1/2$}; Loss function: $\mathcal{L}(.)$; fusion operation: $\mathcal{F}^{j, i}(\cdot)$ with convolution kernels $\theta^{j, i}$, where the numbers of input and output channels are $C^{j,i}_{in}$, $C^{j,i}_{out}$. \\
207
+ \STATE $\mathbf{M}^{j,i} \leftarrow$ random initialize masks for all levels and stages, satisfying that $\|\mathbf{M}^{j,i}\|_0$ equals $(1-S) \times C^{j,i}_{in} \times C^{j,i}_{out}$
208
+ \FOR{$t=1$ {\bfseries to} $T$}
209
+ \STATE Sample a batch $I_{t}, Y_{t} \sim \mathcal{X}, \mathcal{Y}$ \\
210
+ \STATE Generate multi-scaled features: $\left(\mathbf{x}^{0,1}, \mathbf{x}^{0,2}, \ldots, \mathbf{x}^{0,L} \right)=f_{\Theta}(I_{t})$
211
+ \FOR{each stage $j=1$ {\bfseries to} $L-1$}
212
+ \FOR{each level $i=1$ {\bfseries to} $L-j$}
213
+
214
+ \IF{$i = 1$}
215
+ \STATE $\mathbf{x}^{j, i}= \mathcal{F}^{j,i}([\mathbf{x}^{j-1,1},\mathcal{U}(\mathbf{x}^{j-1,2})])$
216
+ \ELSE
217
+ \STATE $\mathbf{x}^{j, i}= \mathcal{F}^{j,i}([\mathcal{D}(\mathbf{x}^{j-1, i-1}),\mathbf{x}^{j-1,i}, \mathcal{U}(\mathbf{x}^{j-1,i+1})])$
218
+ \ENDIF
219
+ \ENDFOR
220
+ \ENDFOR
221
+ \STATE $l_{t}= 4/7\mathcal{L}(f_{out}\left(\mathbf{x}^{L-1, 1}\right), Y_{t})+2/7\mathcal{L}(f_{out}\left(\mathbf{x}^{L-2, 2}\right), \mathcal{D}(Y_{t}))+1/7\mathcal{L}(f_{out}\left(\mathbf{x}^{L-3, 3}\right), \mathcal{D}(\mathcal{D}(Y_{t})))$
222
+ \IF{$(t \bmod \Delta T)==0$}
223
+ \FOR{each stage $j=1$ {\bfseries to} $L-1$}
224
+ \FOR{each level $i=1$ {\bfseries to} $L-j$}
225
+ \STATE $u= (C^{j,i}_{in} \times C^{j,i}_{out})f_{{decay}}\left(t ; \alpha, T\right)(1-S)$
226
+ \STATE $IS \leftarrow$ importance score ($L_1$-norm of the corresponding kernel) for each activated feature connection
227
+ \STATE $\mathbb{I}_{{activate}}={RandomK} (\mathbb{I}_{{inactivate }}, u)$
228
+ \STATE $\mathbb{I}_{{inactivate }}={ArgTopK}\left(-IS, u\right)$
229
+ \STATE $\mathbf{M}^{j, i} \leftarrow$ Update $\mathbf{M}^{j, i}$ using $\mathbb{I}_{{inactivate}}$ and $\mathbb{I}_{{activate}}$
230
+ \ENDFOR
231
+ \ENDFOR
232
+ \ELSE
233
+ \STATE Train E2ENet using the SGD optimizer
234
+ \ENDIF
235
+ \ENDFOR
236
+ \end{algorithmic}
237
+ \end{algorithm*}
238
+
239
+ \begin{figure}[!htb]
240
+ \centering
241
+ \includegraphics[width=0.42\textwidth]{E2ENet-selection.pdf}
242
+ \vskip -0.15in
243
+ \caption{Illustration of our Dynamic Sparse Feature Fusion (DSFF) mechanism. The fusion operation starts from sparse feature connections and allows the connectivity to be evolved after training for $\Delta T$ epochs. During each evolution stage, a fraction of kernels with smaller $L_1$-norms will be zeroed out (red dotted line), while the same fraction of other inactivated connections will be reactivated randomly, keeping the feature sparsity $S$ constant during training (blue solid line).}
244
+ \label{fig:selection}
245
+ \end{figure}
246
+
247
+
248
+
249
+
250
+
251
+
252
+
253
+
254
+
255
+
256
+
257
+
258
+ Thus, the core of the DSFF mechanism is the learning of binary masks during the training of E2ENet. Each binary mask is initialized randomly, with the number of non-zero entries $\|\mathbf{M}^{j,i}\|_0$ equal to $(1-S) \times C^{j,i}_{in} \times C^{j,i}_{out}$. Here, $S$ ($0<S<1$) is a hyperparameter called the feature sparsity level, which determines the percentage of feature map connections that are randomly inactivated. The set of activated connections can be updated throughout the course of training, while the feature sparsity level $S$ remains constant. This yields both training and testing efficiency, since the set of exploited connections stays sparse throughout training.
259
+
260
+
261
+ Every $\Delta T$ training epochs, the activated connections with lower importance are removed, while the same number of deactivated connections are randomly reactivated, as shown in Figure \ref{fig:selection}. The importance of an activated connection is determined by the $L_1$-norm of the corresponding kernel. That is, for $\mathbf{M}^{j,i}$, the importance score of the connection between the $c_{in}$-th input feature and the $c_{out}$-th output feature is $\|\mathbf{\theta}^{j,i}_{c_{in}, c_{out}, :, :, :} \|_1 $. Our intuition is simple: if a feature map connection is more important {(has a larger effect on the output accuracy)} than others, the $L_1$-norm of the corresponding kernel should be larger. However, this importance score is suboptimal during training, as the kernels are only gradually optimized. Randomly reactivating previously “obsolete” connections with re-initialization avoids unrecoverable feature map abandonment and thoroughly explores the representative feature maps that contribute to the final performance.
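+
+ The periodic topology update can be sketched as follows (illustrative only; the function operates on the mask and kernels of one fusion operation, assumes the update budget $u$ has already been computed as in Algorithm \ref{alg:DSN}, and uses zero re-initialization of regrown kernels as one possible choice):
+ \begin{verbatim}
+ import torch
+
+ @torch.no_grad()
+ def evolve_mask(mask, weight, u):
+     """Deactivate the u active connections with the smallest kernel L1-norm
+     and randomly reactivate u inactive ones (sparsity level stays fixed).
+     mask: (C_out, C_in) binary tensor; weight: (C_out, C_in, kd, kh, kw)."""
+     scores = weight.abs().sum(dim=(2, 3, 4))     # importance per connection
+     flat = mask.view(-1)
+     inactive = (flat == 0).nonzero(as_tuple=True)[0]
+     scores = scores.view(-1).masked_fill(flat == 0, float("inf"))
+     grow = inactive[torch.randperm(inactive.numel())[:u]]  # random regrowth
+     drop = torch.topk(scores, u, largest=False).indices    # least important
+     flat[grow] = 1.0
+     flat[drop] = 0.0
+     rows, cols = grow // mask.shape[1], grow % mask.shape[1]
+     weight[rows, cols] = 0.0                     # re-initialize regrown kernels
+     return mask
+ \end{verbatim}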
262
+
263
+
264
+
265
+
266
+
267
+ The DSFF mechanism allows for the exploration and exploitation of multi-scaled sparse features through a novel sparse-to-sparse training method. {Our method selects features at multiple scales within each layer, as opposed to relying solely on feature selection at the input layer \citep{sokar2022where, atashgahi2020quick}.} Additionally, it is a plastic approach to feature fusion that can adapt to changing conditions during training, as opposed to static methods that rely on one-shot feature selection, {as shown in the comparison of dynamic and static sparsity in Table 2 of \citep{mostafa2019parameter}}. More details of the training process are elaborated in Algorithm \ref{alg:DSN}.
268
+
269
+ \subsection{Restricted Depth-Shift in 3D Convolution}
270
+ In 3D medical image segmentation, 2D-based methods (such as the 2D nnUNet \citep{isensee2021nnu}), which apply 2D convolutions to each slice of the 3D image, are computationally efficient but cannot fully capture the relationships between slices. To overcome this limitation, we take inspiration from the temporal-shift \citep{lin2019tsm} and 3D-shift \citep{fan2020rubiksnet} in efficient video action recognition, and the axial-shift \citep{lian2021mlp} in efficient MLP architectures for vision. Our proposed E2ENet incorporates a depth-shift strategy in its 3D convolution operations, which facilitates inter-slice information exchange and captures 3D spatial relationships while retaining the simplicity and computational efficiency of 2D convolutions. {Temporal-shift \citep{lin2019tsm} requires selecting a shift proportion (the proportion of channels on which the temporal shift is conducted), while axial-shift \citep{lian2021mlp} and 3D-shift optimize learnable 3D spatiotemporal shifts. We refine the channel-shifting technique by shifting along the depth dimension and constraining the shift size, a design tailored to the needs of the sparse models employed in medical image segmentation.}
271
+
272
+ {In our method, we employ a simple depth-shift technique that shifts all channels while constraining the shift size to be either $+1$, $0$ or $-1$, as shown in Figure \ref{fig:shift3D}. This choice is motivated by the use of dynamic sparse feature fusion, where the feature maps carry sparse information. If only a portion of the channels is shifted, or if the shift magnitude is too large, the channels may be insufficiently represented or the depth information over-represented, which can negatively affect the effectiveness of the shift operation (experimental
273
+ results can be found in Table \ref{tab:ablation study amos}).}
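+
+ A minimal PyTorch-style sketch of the restricted depth-shift is shown below (illustrative; the names are ours). The channels are split into three groups that are shifted by $-1$, $0$ and $+1$ slices along the depth axis with zero padding, after which the $(1,3,3)$ convolution mixes in-plane and, via the shift, inter-slice context.
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+
+ def restricted_depth_shift(x):
+     """Shift three channel groups by -1 / 0 / +1 along depth (zero-padded).
+     x: (B, C, D, H, W) feature map."""
+     g1, g2, g3 = torch.chunk(x, 3, dim=1)
+     out = torch.zeros_like(x)
+     c1, c2 = g1.shape[1], g1.shape[1] + g2.shape[1]
+     out[:, :c1, :-1] = g1[:, :, 1:]      # shift by -1
+     out[:, c1:c2] = g2                   # no shift
+     out[:, c2:, 1:] = g3[:, :, :-1]      # shift by +1
+     return out
+
+ conv = nn.Conv3d(48, 48, kernel_size=(1, 3, 3), padding=(0, 1, 1))
+ y = conv(restricted_depth_shift(torch.randn(1, 48, 16, 64, 64)))
+ \end{verbatim}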
274
+
275
+
276
+ \begin{figure}[!htb]
277
+ \centering
278
+ \includegraphics[width=0.45\textwidth]{shift_conv.pdf}
279
+ \caption{Illustration of restricted depth-shift in the 3D convolution of our E2ENet. The input features (left) are first split into 3 parts along the channel dimension and then shifted by $\{-1, 0, 1\}$ units along the depth dimension, respectively (middle). After that, a 3D convolution with kernel size 1$\times$3$\times$3 is applied to the shifted feature maps (middle) to generate the output features (right).}
280
+ \label{fig:shift3D}
281
+ \end{figure}
282
+
283
+ \section{Experiments}
284
+ \label{Exp}
285
+ In this section, we compare the performance of our E2ENet model to baseline methods on three datasets and report results in terms of both segmentation quality and efficiency. In addition, we will perform ablation studies to investigate the behavior of each component in the E2ENet model. To further analyze the performance of our model, we will present qualitative results by visualizing the predicted segmentations on sample images. We will also visualize the feature fusion ratios to gain insights into the importance of each feature in the segmentation process.
286
+ Furthermore, we will discuss the relationship between organ volume and organ segmentation accuracy to understand the effect of organ size on segmentation quality.
287
+
288
+ \subsection{Datasets}
289
+
290
+ \textbf{AMOS-CT:} The Abdominal Multi-Organ Segmentation Challenge (AMOS) \citep{ji2022amos} task 1 consists of 500 computerized tomography (CT) cases, including 200 scans for training, 100 for validation, and 200 for testing. These cases have been collected from a diverse patient population and include annotations of 15 organs. The scans are from multiple centers, vendors, modalities, phases, and diseases.
291
+
292
+ \textbf{BTCV:} The Beyond the Cranial Vault (BTCV) abdomen challenge dataset \footnote{https://www.synapse.org/\#!Synapse:syn3193805/wiki/89480} consists of 30 CT scan images for training and 20 for testing. These images have been annotated by interpreters under the supervision of radiologists, and include labels for 13 organs.
293
+
294
+
295
+ \textbf{BraTS:} The Brain Tumor Segmentation Challenge in the Medical Segmentation Decathlon (MSD) \citep{antonelli2022medical, simpson2019large} consists of 484 MRI images from 19 different institutions. These images contain three different tumor regions of interest (ROIs): edema (ED), non-enhancing tumor (NET) and enhancing tumor (ET). The goal of the challenge is to segment these ROIs in the images accurately.
296
+
297
+ \subsection{Implementation Details}
298
+
299
+ In our work, we utilized the PyTorch toolkit \citep{paszke2019pytorch} on an NVIDIA A100 GPU for all our experimental evaluations. We also used the nnUNet codebase \citep{isensee2021nnu} to pre-process data before training our proposed E2ENet model. For the AMOS dataset, we used the nnUNet codebase as the benchmark implementation.
300
+
301
+ For training, we use the stochastic gradient descent (SGD) optimizer with an initial learning rate of 0.01, which is gradually decreased through a “poly” decay schedule. The optimizer is configured with a momentum of 0.99 and a weight decay of $3 \times 10^{-5}$. The maximum number of training epochs is 1000, with 250 iterations per epoch.
302
+ For the loss function, we combine both cross-entropy loss and Dice loss as in \citep{isensee2021nnu}. To improve performance, various data augmentation techniques such as random rotation, scaling, flipping, adding Gaussian noise, blurring, adjusting brightness and contrast, simulating low resolution, and Gamma transformation are used before training.
303
+
304
+ We employ a 5-fold cross-validation strategy on the training set for all experiments, selecting the final model from each fold and simply averaging their outputs for the final segmentation predictions. In the testing stage, we employ the sliding window strategy, where the window sizes are equal to the size of the training patches. Additionally, post-processing methods outlined in \citep{extending2022nnu} are applied for the AMOS-CT dataset during the testing phase.
305
+
306
+ \subsection{Evaluation Metrics}
307
+ \label{evaluate_metrics}
308
+ \subsubsection{Mean Dice Similarity Coefficient}
309
+ To assess the quality of the segmentation results, we use the mean Dice similarity coefficient (mDice), which is a widely used metric in medical image segmentation. The mDice is calculated as follows:
310
+ \begin{equation}
311
+ mDice = \frac{1}{N} \sum_{j=1}^N \frac{2|\mathbf{y}_{j} \cdot \hat{\mathbf{y}}_{j}| }{\left(|\mathbf{y}_{j}|+|\hat{\mathbf{y}}_{j}|\right)},
312
+ \end{equation}
313
+ where $N$ is the number of classes, $\cdot$ is the pointwise multiplication, $\mathbf{y}_{j}$ and $\hat{\mathbf{y}}_{j}$ represent the ground truth and predicted masks of the $j$-th class, respectively, which are encoded in one-hot format. $\frac{2|\mathbf{y}_{j} \cdot \hat{\mathbf{y}}_{j}| }{\left(|\mathbf{y}_{j}|+|\hat{\mathbf{y}}_{j}|\right)}$ is
314
+ the Dice of the $j$-th class, which measures the overlap between the predicted and ground-truth segmentation masks for that class.
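+
+ For reference, a small NumPy sketch of this metric over one-hot masks follows (illustrative; a smoothing constant is added here to avoid division by zero for classes absent from both masks):
+ \begin{verbatim}
+ import numpy as np
+
+ def mean_dice(y_true, y_pred, eps=1e-8):
+     """y_true, y_pred: one-hot arrays of shape (N_classes, D, H, W)."""
+     dices = []
+     for yt, yp in zip(y_true, y_pred):
+         inter = np.sum(yt * yp)          # |y_j . y_hat_j|
+         dices.append(2.0 * inter / (yt.sum() + yp.sum() + eps))
+     return float(np.mean(dices))
+ \end{verbatim}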
315
+
316
+ \subsubsection{Number of Parameters}
317
+ The size of the network can be estimated by summing the number of non-zero parameters (Params), which includes the parameters of activated sparse feature connections (kernels) and parameters of the backbone. The calculation is given by the following equation:
318
+ \begin{equation}
319
+ Params = \| \Theta \|_0 +\sum_{j=1}^{L-1}\sum_{i=1}^{L-j}\sum_{c_{in}=1}^{C^{j, i}_{in}} \sum_{c_{out}=1}^{C^{j, i}_{out}} \mathbf{M}^{j, i}_ {c_{in}, c_{out}}\|\theta^{j, i}_ {c_{in}, c_{out}}\|_0.
320
+ \end{equation}
321
+ Here, $\Theta$ denotes the parameters of the backbone, $L$ is the total number of feature levels, $\mathbf{M}^{j, i}$ is a matrix of size $C^{j, i}_{in} \times C^{j, i}_{out}$, and $\mathbf{M}^{j, i}_{c_{in}, c_{out}}$ indicates whether the kernel $\theta^{j, i}_ {c_{in}, c_{out}}$ connecting the $c_{in}$-th input and $c_{out}$-th output feature map exists or not. The $L_0$ norm $\|\theta^{j, i}_ {c_{in}, c_{out}}\|_0$ gives the number of non-zero entries of $\theta^{j, i}_ {c_{in}, c_{out}}$.
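+
+ This count can be reproduced with a short sketch (illustrative only; it assumes the pruned kernels have been zeroed out by the masks, so counting non-zero entries matches the formula above):
+ \begin{verbatim}
+ import torch
+
+ def count_params(backbone_params, fusion_weights, masks):
+     """Non-zero parameters: backbone plus kernels of activated connections.
+     fusion_weights[k]: (C_out, C_in, kd, kh, kw); masks[k]: (C_out, C_in)."""
+     total = sum(int((p != 0).sum()) for p in backbone_params)
+     for w, m in zip(fusion_weights, masks):
+         total += int((w * m[:, :, None, None, None] != 0).sum())
+     return total
+ \end{verbatim}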
322
+
323
+ \subsubsection{Floating Point Operations}
324
+ Floating point operations (FLOPs) are a commonly used metric to compare the computational cost of a sparse model to that of a dense counterpart \citep{hoefler2021sparsity} \footnote{{This is because current sparse training methods often use masks on dense weights to simulate sparsity, since most deep learning hardware is optimized for dense matrix operations. As a result, such prototypes do not accurately reflect the true memory and speed benefits of a truly sparse network \citep{hoefler2021sparsity}.}}. In our comparison, FLOPs are calculated by counting the number of multiplications and additions performed in a single forward pass of the inference process, without considering postprocessing. The inference FLOPs are estimated layer by layer and depend on the sparsity level of the network. For each convolution or transposed convolution layer, the inference FLOPs are calculated as follows:
325
+ \begin{equation}
326
+ FLOPs_{conv} = (2 K_dK_h K_wC_{in}(1-S) +1) \times C_{out} H W D,
327
+ \end{equation}
328
+ where $K_d$, $K_h$ and $K_w$ are the kernel sizes in depth, height and width; $S$ is the feature sparsity level ($S=0$ is used for layers that are not part of the DSFF mechanism); $C_{in}$ and $C_{out}$ are the numbers of input and output feature channels; and $H$, $W$ and $D$ are the height, width and depth of the output features. For each fully connected layer, the inference FLOPs are calculated as follows:
329
+ \begin{equation}
330
+ FLOPs_{fc} = (2 C_{in}(1-S) +1) \times C_{out}.
331
+ \end{equation}
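+
+ The two formulas above can be written as small helper functions (an illustrative sketch; the function names are ours):
+ \begin{verbatim}
+ def conv_flops(kd, kh, kw, c_in, c_out, d, h, w, s=0.0):
+     """Inference FLOPs of a (transposed) convolution layer; s is the
+     feature sparsity level (s = 0 outside the DSFF mechanism)."""
+     return (2 * kd * kh * kw * c_in * (1 - s) + 1) * c_out * d * h * w
+
+ def fc_flops(c_in, c_out, s=0.0):
+     """Inference FLOPs of a fully connected layer."""
+     return (2 * c_in * (1 - s) + 1) * c_out
+ \end{verbatim}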
332
+
333
+ \subsubsection{Performance Trade-Off Score}
334
+ The accuracy-efficiency trade-offs of 3D image segmentation methods can be analyzed further, from comparing resource requirements to describing holistic behaviour (covering mDice, Params and inference FLOPs). To quantify these trade-offs, we introduce the Performance Trade-Off (PT) score, which is defined as follows:
335
+ \begin{equation}
336
+ PT = \alpha_1 \frac{mDice}{mDice_{max}}+\alpha_2(\frac{Params_{min}}{ Params}+\frac{FLOPs_{min}}{FLOPs}),
337
+ \label{tp}
338
+ \end{equation}
339
+ where $\alpha_1$ and $\alpha_2$ are weighting factors, which control the trade-off between accuracy performance and resource requirements, and $mDice_{max}$, $Params_{min}$, and $FLOPs_{min}$ denote the highest mDice score, the smallest number of parameters, and the lowest inference FLOPs among the compared methods for a specific dataset, respectively. The term $\frac{mDice}{mDice_{max}}$ measures the segmentation accuracy, while $\frac{Params_{min}}{ Params}+\frac{FLOPs_{min}}{FLOPs}$ measures the resource cost.
340
+
341
+ In most cases, we consider both segmentation accuracy and resource cost to be equally important, thus we set $\alpha_1=1$ and $\alpha_2=1/2$ in the following experiments. However, we also explore the impact of different choices of $\alpha_1$ and $\alpha_2$, as detailed in Section \ref{Weighting Factors}. The PT score serves as a valuable metric for evaluating the trade-offs between segmentation accuracy and efficiency.
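+
+ The PT score can be computed directly from the reported table entries (an illustrative sketch; the printed value for E2ENet with $S=0.9$ on the AMOS-CT validation set in Table \ref{tab:AMOS_c_Dice} matches the reported score up to rounding):
+ \begin{verbatim}
+ def pt_score(mdice, params, flops, mdice_max, params_min, flops_min,
+              alpha1=1.0, alpha2=0.5):
+     """Performance Trade-Off score (higher is better)."""
+     return (alpha1 * mdice / mdice_max
+             + alpha2 * (params_min / params + flops_min / flops))
+
+ # E2ENet (s=0.9) on the AMOS-CT validation set, using the table values
+ print(pt_score(89.6, 7.64, 492.29, 90.5, 7.64, 391.03))   # ~1.89
+ \end{verbatim}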
342
+
343
+ \begin{table*}[!htb]
344
+ \caption{Quantitative comparisons (class-wise Dice (\%) $\uparrow$, mDice(\%)$\uparrow$, Params(M)$\downarrow$, inference FLOPs(G)$\downarrow$, PT score$\uparrow$ and {mNSD(\%)$\uparrow$}) of segmentation performance on the validation set of AMOS-CT dataset. \textbf{Bold} indicates the best and
345
+ \underline{underline} indicates the second best. Note: Spl: spleen, RKid: right kidney, LKid: left kidney, Gall: gallbladder, Eso: esophagus, Liv: liver, Sto: stomach, Aor: aorta, IVC: inferior vena cava, Pan: pancreas, RAG: right adrenal gland, LAG: left adrenal gland, Duo: duodenum, Bla: bladder, Pro/Uth: prostate/uterus. The class-wise Dice, mDice {and mNSD} results of the baselines, except for nnUNet, are collected from \citep{ji2022amos}. $^{\dag}$ indicates the results without postprocessing that are collected from the \href{http://www.amos.sribd.cn/about.html}{AMOS website}. $^{\ddag}$ denotes the results with postprocessing that are reproduced by us. $^{*}$ indicates the results with postprocessing.} \centering
346
+ \resizebox{\linewidth}{!}{
347
+ \begin{threeparttable}
348
+ \begin{tabular}{c|ccccccccccccccc|ccc|c|c}
349
+ \toprule[1pt]
350
+ Methods &Spl &RKid &LKid &Gall &Eso &Liv &Sto &Aor &IVC &Pan & RAG & LAG & Duo & Bla & Pro/Uth &mDice &Params & FLOPs \tnote{3} &PT score & {mNSD} \\
351
+ \hline
352
+ CoTr & 91.1& 87.2& 86.4& 60.5& 80.9& 91.6& 80.1& 93.7& 87.7& 76.3& 73.7& 71.7& 68.0& 67.4& 40.8 & 77.1 &41.87 &1510.53 &1.07 & {64.2}\\
353
+ nnFormer & 95.9& 93.5& 94.8& 78.5& 81.1& 95.9& 89.4& 94.2& 88.2& 85.0& 75.0& 75.9& 78.5& 83.9& 74.6& 85.6 &150.14 &{1343.65} &1.12 & {74.2}\\
354
+ \hline
355
+ UNETR & 92.7& 88.5& 90.6& 66.5& 73.3& 94.1& 78.7& 91.4& 84.0& 74.5& 68.2& 65.3& 62.4& 77.4& 67.5 & 78.3 &93.02 & \textbf{391.03} &1.41 & {61.5}\\
356
+ Swin UNETR & 95.5& 93.8& 94.5& 77.3& 83.0& 96.0& 88.9& 94.7& 89.6& 84.9& 77.2& 78.3& 78.6& 85.8& 77.4 & 86.4 &62.83 &1562.99 &1.14 & {75.3} \\
357
+ \hline
358
+ VNet &94.2& 91.9& 92.7& 70.2& 79.0& 94.7& 84.8& 93.0& 87.4& 80.5& 72.6& 73.2& 71.7& 77.0& 66.6 & 82.0 & 45.65 &1737.57 &1.10 & {67.9}\\
359
+ nnUNet$^{\dag}$ & \textbf{97.1} & 96.4 & 96.2 &83.2 & 87.5 & 97.6 & 92.2 & \textbf{96.0} & \underline{92.5} & 88.6 & 81.2 & 81.7 & \textbf{85.0} & \underline{90.5} & \underline{85.0} & 90.0 & 30.76 & 1067.89 & 1.30 & {82.1}\\
360
+ nnUNet$^{\ddag}$ & \textbf{97.1} & \textbf{97.0} & \textbf{97.1} & \textbf{86.6} & \textbf{87.7} & \textbf{97.9} & \textbf{92.4} & \textbf{96.0} & \textbf{92.7} & \underline{88.8} & \textbf{81.6} & {82.1} & \textbf{85.0} & \textbf{90.6} & \textbf{85.2} &\textbf{90.5} & 30.76 & 1067.89 &1.31 & \textbf{{83.0}} \\
361
+ \hline
362
+ E2ENet$^*$ (s=0.7) & \textbf{97.1} & \underline{96.9} & \textbf{97.1} & \underline{86.0} & \underline{87.6} & \textbf{97.9} & \underline{92.3} & {95.7} & {92.3} & \textbf{89.0} & \underline{81.5} & \textbf{82.4} & \underline{84.9} & {90.3} & 83.8 & \underline{90.3} &11.23 & 969.32 &1.54 & \underline{{82.7}}\\ E2ENet$^*$ (s=0.8) & \textbf{97.1} & \underline{96.9} & \underline{97.0} & 85.2 & 87.5 & \textbf{97.9} & \underline{92.3} & {95.7} & {92.3} & \textbf{89.0} & 81.3 & {82.1} & 84.6 & 90.1 & {84.8} & \underline{90.3} &\underline{9.44} & 778.74 &{1.65} &{82.5} \\
363
+ E2ENet$^*$ (s=0.9) & 96.7 & \underline{96.9} & \underline{97.0} & 84.2 & 87.0 & \underline{97.7} & 92.2 & 95.6 & 92.0 & 88.6 & 81.0 & 81.8 & 84.0 & 89.9 & 83.8 & 89.9 & \textbf{7.64} & \underline{492.29} &\textbf{1.89} &{81.8}\\
364
+ \hline
365
+ E2ENet (s=0.7) & \textbf{97.1} & 96.6 & 96.5 & 83.4 & \underline{87.6} & 97.5 & \underline{92.3} & \underline{95.8} & 92.3 & \textbf{89.0} & 81.4 & \underline{82.3} & \underline{84.9} & 90.3 & 83.8 & 90.1 &11.23 & 969.32 & 1.54 &{82.3}\\ E2ENet (s=0.8) & \textbf{97.1} & 96.6 & 96.5 &83.4 &87.5 &97.5 &\underline{92.3} &\underline{95.8} &92.3 &\textbf{89.0} &81.3 &82.0 &84.5 &90.1 &84.8 &90.0 &\underline{9.44} & 778.74 & 1.65 & {82.3}\\
366
+ E2ENet (s=0.9) & 96.7 &95.4 &96.4 &82.6 &86.9 &97.4 &92.2 &95.6 &92.0 & 88.6 & 80.9 & 81.7 & 84.0 &89.9 &83.8 &89.6 & \textbf{7.64} & \underline{492.29} & \underline{1.88} &{81.4} \\
367
+ {E2ENet(static, s=0.9)} & {96.6} &
368
+ {95.5} & {96.3} &82.6 &86.9 &97.4 &92.2 &95.6 &92.0 & 88.6 & 80.9 & 81.7 & 84.0 &89.9 &83.8 &89.6 & \textbf{7.64} & \underline{492.29} & \underline{1.88} &{81.4} \\
369
+ \bottomrule[1pt]
370
+ \end{tabular}
371
+ \begin{tablenotes}
372
+ \item[3] The inference FLOPs are calculated based on a patch size of $1\times128 \times 128 \times 128$, without considering the postprocessing cost.
373
+ \end{tablenotes}
374
+ \end{threeparttable}
375
+ }
376
+ \label{tab:AMOS_c_Dice}
377
+ \end{table*}
378
+
379
+ \begin{table*}[!htb]
380
+ \caption{Quantitative comparisons of segmentation performance on the BTCV test set. Note: Spl: spleen, RKid: right kidney, LKid: left kidney, Gall: gallbladder, Eso: esophagus, Liv: liver, Sto: stomach, Aor: aorta, IVC: inferior vena cava, Veins: portal and splenic veins, Pan: pancreas, AG: adrenal gland. The results (class-wise Dice and mDice) for these baselines are from \citep{hatamizadeh2022unetr}. $^+$ denotes that UNETR$^+$ is trained without using any extra data outside the challenge. {The results of nnUNet$^{\ddag}$, E2ENet and the Hausdorff Distance (HD)$\downarrow$ of UNETR are from the \href{{https://www.synapse.org/\#!Synapse:syn3193805/wiki/217785}}{standard leaderboard} of the BTCV challenge, while the results of nnUNet are from the \href{https://www.synapse.org/\#!Synapse:syn3193805/wiki/217785}{free leaderboard}.}}
381
+
382
+ \centering
383
+ \resizebox{\linewidth}{!}{
384
+ \begin{threeparttable}
385
+ \begin{tabular}{c|cccccccccccc|ccc|c|c}
386
+ \toprule[1pt]
387
+ Methods &Spl &RKid &LKid &Gall &Eso &Liv &Sto &Aor &IVC &Veins &Pan &AG &mDice &Params &FLOPs \tnote{1} & PT score & {HD}\\
388
+ \hline
389
+ CoTr &95.8 &92.1 &93.6 &70.0 &76.4 &96.3 &85.4 &92.0 &83.8 &78.7 &77.5 &69.4 &84.4 & 41.87 &{636.94} &1.22 & {/} \\
390
+ RandomPatch &96.3 &91.2 &92.1 &74.9 &76.0 &96.2 &87.0 &88.9 &84.6 &78.6 &76.2 &71.2 &84.4 & / & / & / & {/}\\
391
+ PaNN &96.6 & \textbf{92.7} &95.2 &73.2 &79.1 &97.3 &89.1 &91.4 &85.0 &80.5 &80.2 &65.2 &85.4 & / & / & / & {/}\\
392
+ UNETR$^+$ \tnote{} &\underline{96.8} & \underline{92.4} & 94.1 &75.0 &76.6 & 97.1& 91.3 &89.0 &84.7& 78.8 & 76.7 & 74.1 & 85.6 & 92.79 & \textbf{164.91} &\underline{1.53} & {23.4}\\
393
+ nnUNet \tnote{} &\textbf{97.2} &91.8 &\textbf{95.8} &75.3 &\underline{84.1} & \textbf{97.7} &\textbf{92.2} & \textbf{92.9} & \textbf{88.1} &\underline{83.2} & \textbf{85.2} &\underline{77.8} & \textbf{88.4} &\underline{31.18} &\underline{416.73} &1.38 & {15.6}\\
394
+ {nnUNet\tnote{$^{\ddag}$}} & {96.5} & {91.7} & {\textbf{95.8}} & {\textbf{78.5}} & {84.2}& {97.4} & {\underline{91.5}} & {\underline{92.3}} & {\underline{86.9}} & {\underline{83.1}} & {\underline{84.9}} &{{77.5}} & {{88.0}} & {\underline{31.18}} & {\underline{416.73}} & {{1.38}} & {{16.9}} \\
395
+
396
+ \hline
397
+
398
+ E2ENet ($s=0.7$) \tnote{} & 96.5 & 91.3 & \underline{95.7} & \underline{78.1} & \textbf{84.5} & \underline{97.5} & \underline{91.5} & \underline{92.2} & {86.7} & \textbf{83.4} & {84.8} & \textbf{77.9} & \underline{88.3} & \textbf{11.25} & 449.00 &\textbf{1.68} & {16.1}\\
399
+ \bottomrule[1pt]
400
+ \end{tabular}
401
+ \begin{tablenotes}
402
+ \item[1] {\footnotesize The inference FLOPs are calculated based on a patch size of $1 \times 96 \times 96 \times 96$. The code for RandomPatch and PaNN is not publicly available, so we cannot determine their model sizes and inference FLOPs.}
403
+ \end{tablenotes}
404
+ \end{threeparttable}
405
+ }
406
+ \label{tab:BTCV_class_wise}
407
+ \end{table*}
408
+
409
+ \subsection{Performance Evaluation of E2ENet}
410
+ In this section, we present the performance evaluation of our proposed E2ENet model in comparison to several state-of-the-art baseline models on three different challenges: AMOS-CT challenge, BTCV challenge, and BraTS challenge in MSD. Specifically, we compare the models in terms of Dice/mDice, Params, FLOPs, and PT score. The results demonstrate the effectiveness of E2ENet, which achieves comparable performance with fewer parameters and lower computational cost than many other models.
411
+
412
+ \subsubsection{AMOS-CT Challenge}
413
+ To fairly and comprehensively validate our method, we compare it to several state-of-the-art CNN-based models (e.g. nnUNet \citep{isensee2021nnu} and VNet \citep{milletari2016vnet}) and transformer-based models (e.g. CoTr \citep{xie2021cotr}, nnFormer \citep{zhou2021nnformer}, UNETR \citep{hatamizadeh2022unetr}, and Swin UNETR \citep{tang2022self}). We recorded the class-wise Dice, mDice, Params, inference FLOPs, and PT score on the validation set\footnote{The validation set includes 100 images and was the previous test set for the first stage of the challenge. In the training of E2ENet, we did not use the validation set and treated it as the test set.} of the AMOS-CT challenge in Table \ref{tab:AMOS_c_Dice}. E2ENet with sparsity level $0.8$ achieves comparable performance with an mDice of $90.3\%$, while being over three times smaller and requiring less computational cost in the testing phase, compared to the top-performing and lightweight model nnUNet. As the feature sparsity ($S$) of E2ENet increases, the number of model parameters and the inference FLOPs can be decreased even further without significantly compromising segmentation performance. This indicates that there is potential to trade off performance and efficiency by adjusting the sparsity of the model. {We report the normalized surface dice (NSD), the official segmentation metric of the AMOS challenge, to provide supplementary information on boundary segmentation quality.} We also present the results on the test set in Table~\ref{Tab:AMOS_v_t}.
414
+
415
+
416
+
417
+
418
+
419
+ \subsubsection{BTCV Challenge}
420
+
421
+ We compare the performance of our E2ENet model to several baselines (CoTr \citep{xie2021cotr}, RandomPatch \citep{tang2021high}, PaNN \citep{zhou2019prior}, UNETR \citep{hatamizadeh2022unetr}, and nnUNet \citep{isensee2021nnu}) on the test set of the BTCV challenge, and report class-wise Dice, mDice, Params and inference FLOPs in Table \ref{tab:BTCV_class_wise}. It is worth noting that nnUNet is a strong performer that uses an automatic model configuration strategy to select and ensemble the two best of multiple U-Net models (2D, 3D and 3D cascade) based on cross-validation results. In contrast, E2ENet is designed to be computationally and
422
+ memory efficient, using a consistent 3D network configuration. Swin UNETR \citep{tang2022self} is among the best on the leaderboard for this challenge. However, we do not include it in our comparison because it employs self-supervised learning with extra data. This falls outside of our goal of trading off training efficiency and accuracy without using extra data.
423
+
424
+ Our proposed E2ENet, a single 3D architecture without cascade, has achieved comparable performance to nnUNet, with mDice of 88.3\%. Additionally, it has a significantly smaller number of parameters, 11.25 M, compared to other methods such as nnUNet (30.76 M), CoTr (41.87 M), and UNETR (92.78 M).
425
+
426
+
427
+
428
+
429
+
430
+
431
+ \subsubsection{BraTS Challenge in MSD}
432
+ On the 5-fold cross-validation of the training set, our E2ENet model demonstrates superior performance in terms of mDice compared to other state-of-the-art methods (UNETR \citep{hatamizadeh2022unetr}, DiNTS \citep{he2021dints}, nnUNet \citep{isensee2021nnu} and UNet++ \citep{zhou2018unet++}). Additionally, it is a computationally efficient network that is competitive among the baselines, as evidenced by its small model size and low inference FLOPs. Specifically, the E2ENet model with a feature sparsity level of $90\%$ has only $7.63$ M parameters, significantly fewer than the other models, yet it still outperforms them in terms of mDice. Overall, the E2ENet model provides a favorable trade-off between performance and efficiency.
433
+
434
+
435
+
436
+ \subsection{Ablation Studies}
437
+ In this section, we investigate the impact of two factors on the performance of the E2ENet model: the DSFF mechanism and restricted depth-shift in 3D convolution. Specifically, we consider the following scenarios: (\#1) w/ DSFF: the DSFF mechanism is used to dynamically activate feature maps; otherwise, all feature maps are activated during training; (\#2) w/ shift: restricted depth/height/width shifting is applied to the feature maps before the convolution operation; otherwise, the convolution is performed directly as a standard 3D convolution without a shifting operation. We conduct ablation studies on the BraTS challenge and the AMOS-CT challenge. We opted not to include the BTCV challenge in our study due to its similarity to the AMOS-CT challenge, as both involve multi-organ segmentation in CT images. Furthermore, the AMOS-CT challenge is a larger and more comprehensive evaluation platform, allowing us to thoroughly assess the effectiveness of the two factors. The BraTS challenge, in contrast, targets brain MRI segmentation, which differs from the other two challenges in focus and image modality.
438
+
439
+ \begin{table}[!htb]
440
+ \vskip -0.1in
441
+ \caption{5-fold cross-validation of segmentation performance on the training set of BraTS Challenge in the MSD. Note: ED, ET and NET denote edema, enhancing tumor and non-enhancing tumor, respectively.} \centering
442
+ \resizebox{\linewidth}{!}{
443
+ \begin{threeparttable}
444
+ \begin{tabular}{c|ccc|ccc|c}
445
+ \toprule[1pt]
446
+ Methods & ED & ET & NET & mDice & Params & FLOPs \tnote{1} & PT score\\
447
+ \toprule[0.5pt] \midrule[0.5pt]
448
+ DiNTS \tnote{} & 80.2 & 61.1 & 77.6 & 73.0 &/ &/ &/ \\
449
+ {UNet++} &80.5 &\underline{62.5} &79.2 & 74.1 & 58.38 & 3938.25 &1.12\\
450
+ nnUNet &\underline{81.0} &62.0 &79.3 &74.1 & 31.20 & 1076.62 &1.35 \\
451
+ \cmidrule(lr){1-8}
452
+ {E2ENet ($S=0.7$)} & \textbf{81.2} &\textbf{62.7} &\textbf{79.5} &\textbf{74.5} &11.24 &1067.06 &1.57\\
453
+ {E2ENet ($S=0.8$)} &\underline{81.0} &\underline{62.5} &79.0 &74.2 &\underline{9.44} &\underline{780.97} &\underline{1.72} \\
454
+ {E2ENet ($S=0.9$)} &80.9 &\underline{62.5} &\underline{79.4} &\underline{74.3} & \textbf{7.63}& \textbf{494.52} &\textbf{2.00}\\
455
+ \bottomrule[1pt]
456
+ \end{tabular}
457
+ \begin{tablenotes}
458
+ \item[1] {\footnotesize The inference FLOPs are calculated based on a patch size of $4 \times 128 \times 128 \times 128$. The number of parameters and inference FLOPs for DiNTS are not reported, since its architecture is not readily available for this dataset and must first be obtained via a time-consuming neural architecture search (NAS).}
459
+ \end{tablenotes}
460
+ \end{threeparttable}
461
+ }
462
+ \label{tab:braint_results}
463
+ \end{table}
464
+
465
+
466
+
467
+ \begin{table}[!htb]
468
+ \vskip -0.1in
469
+ \caption{The ablation study of the effect of DSFF mechanism and restricted depth-shift in 3D convolution, evaluated through 5-fold cross-validation of segmentation performance on the training set of the BraTS Challenge in the MSD.} \centering
470
+ \resizebox{\linewidth}{!}{
471
+ \begin{tabular}{c|cc|ccc|c|c|c}
472
+ \toprule[1pt]
473
+ w/ DSFF & w/ shift & kernel size & ED & ET & NET & mDice &Params &FLOPs\\
474
+ \hline
475
+ \XSolidBrush & \XSolidBrush & $(1\times3\times3)$ &80.4 &62.4 &79.0 &73.9 &23.89 &3071.78\\
476
+ \XSolidBrush &\CheckmarkBold & $(1\times3\times3)$ & 81.0 &62.3 &79.0 &74.1 &23.89 &3071.78\\
477
+ \hline
478
+
479
+ \CheckmarkBold &\XSolidBrush & $(1\times3\times3)$ &80.3 &\underline{62.5} &79.0 &73.9 &11.24 &1067.06 \\
480
+ \CheckmarkBold & \CheckmarkBold & $(1\times3\times3)$ &\textbf{81.2} & \textbf{62.7} & \textbf{79.5} &\textbf{74.5} &11.24 &1067.06 \\
481
+ \hline
482
+ \hline
483
+ \CheckmarkBold & \XSolidBrush & $(3\times1\times3)$ &80.3 &61.7 &78.7 &73.6 &11.24 &1067.06 \\
484
+ \CheckmarkBold & \XSolidBrush & $(3\times3\times1)$ &80.5 &61.9 &78.7 &73.7 &11.24 &1067.06\\
485
+ \hline
486
+ \CheckmarkBold & \CheckmarkBold & $(3\times1\times3)$ &\underline{81.1} &62.4 &78.9 & 74.1 &11.24 &1067.06 \\
487
+ \CheckmarkBold & \CheckmarkBold & $(3\times3\times1)$ &81.0 &62.3 &\underline{79.4} & \underline{74.2} &11.24 &1067.06 \\
488
+
489
+ \bottomrule[1pt]
490
+ \end{tabular}
491
+ }
492
+ \label{tab:ablation study brast}
493
+ \end{table}
494
+
495
+
496
+ \subsubsection{Effect of DSFF Mechanism}
497
+ Table \ref{tab:ablation study brast} shows that removing the DSFF mechanism (2nd row) causes the mDice on the BraTS challenge to drop from $74.5\%$ to $74.1\%$, while roughly doubling the number of parameters and nearly tripling the inference FLOPs. On the AMOS-CT challenge, Table \ref{tab:ablation study amos} shows that the E2ENet model with the DSFF mechanism achieves comparable performance with about half the parameters and one third of the inference FLOPs. These ablation results highlight the effectiveness of the dynamic sparse feature fusion (DSFF) mechanism.
498
+
499
+ Moreover, as shown in Table \ref{tab:braint_results}, the E2ENet model with the DSFF mechanism outperforms other feature fusion architectures such as UNet++ and DiNTS in the BraTS challenge. Overall, the results of the BraTS and AMOS-CT challenges demonstrate that the DSFF mechanism is an effective and efficient feature fusion process with lower computational and memory overhead.
500
+
501
+
502
+
503
+ \begin{table}[!htb]
504
+ \caption{The ablation study of the effect of DSFF mechanism and restricted depth-shift in 3D convolution on the validation set of AMOS-CT Challenge.} \centering
505
+ \resizebox{\linewidth}{!}{
506
+ \begin{threeparttable}
507
+ \begin{tabular}{cc|cc|c|c|c|c|c}
508
+ \toprule[1pt]
509
+ w/ DSFF & {$\Delta T$ (\# iters)} & w/ shift & shift size & kernel size & mDice &Params &FLOPs \tnote{1} & {mNSD}\\
510
+ \hline
511
+ \XSolidBrush & {/} &\CheckmarkBold & $(-1, 0, 1)$ & $(1\times3\times3)$ &\textbf{90.2} &23.90 &3069.55 & {82.6}\\
512
+
513
+ \XSolidBrush & {/} &\XSolidBrush & $(-1, 0, 1)$ & $(1\times3\times3)$ &88.6 &23.90 &3069.55 & {78.6}\\
514
+ \hline
515
+ \CheckmarkBold & {1200} &\XSolidBrush & $(-1, 0, 1)$ & $(1\times3\times3)$ &88.2 &11.23 & 969.32 & {79.4}\\
516
+ \CheckmarkBold & {1200} &\CheckmarkBold & $(-1, 0, 1)$ & $(1\times3\times3)$ &\underline{90.1} &11.23 & 969.32 & {82.3}\\
517
+ \hline
518
+ \hline
519
+ \CheckmarkBold & {1200} &\CheckmarkBold & $(-2, 0, 2)$ & $(1\times3\times3)$ &{89.8} &11.23 & 969.32 & {82.0}\\
520
+ \CheckmarkBold & {1200} &\CheckmarkBold & $(-3, 0, 3)$ & $(1\times3\times3)$ &89.7 &11.23 & 969.32 & {81.6}\\
521
+ \hline
522
+ \hline
523
+ {\CheckmarkBold} & {1200} & {\XSolidBrush} & {$/$} & {$(3\times3\times3)$} &{90.1} &{52.54} &{4511.57} &{82.1} \\
524
+ {\XSolidBrush} & {1200} & {\XSolidBrush} & {$/$} & {$(3\times3\times3)$} &{90.2} &{27.97} &{1778.55} &{82.5}\\
525
+ \hline
526
+ {\CheckmarkBold} & {600} & {\CheckmarkBold} & {$(-1, 0, 1)$} & {$(1\times3\times3)$} &{90.0} &{11.23} &{969.32} &{82.2}\\
527
+ {\CheckmarkBold} & {1800} & {\CheckmarkBold} & {$(-1, 0, 1)$} & {$(1\times3\times3)$} &{89.9} &{11.23} &{969.32} &{82.0}\\
528
+ \bottomrule[1pt]
529
+ \end{tabular}
530
+ \begin{tablenotes}
531
+ \item[1] {\footnotesize Inference FLOPs are calculated without including shift operations.}
532
+ \end{tablenotes}
533
+ \end{threeparttable}
534
+ }
535
+ \label{tab:ablation study amos}
536
+ \end{table}
537
+
538
+
539
+
540
+ {To investigate the impact of the DSFF module, we compared E2ENet with other multi-scale medical image segmentation methods on the AMOS-CT validation dataset, including DeepLabv3 \citep{ChenPSA17}, CoTr \citep{xie2021cotr}, and MedFormer \citep{gao2022data} (see Table \ref{tab: ablation study for DSFF}). DeepLabv3 uses atrous convolution to capture multi-scale context; CoTr integrates multi-scale features using attention; MedFormer employs all-to-all attention for comprehensive multi-scale fusion that is both semantically and spatially aware. We observe that, with a sparsity ratio of $0.9$, E2ENet matches the performance of DeepLabv3 while reducing computational and memory costs by more than 2x and 8x, respectively. With a sparsity ratio of $0.8$, E2ENet matches MedFormer's performance while reducing computational and memory costs by more than 2x and 4x, respectively. This demonstrates the benefit of multi-scale feature aggregation for medical image segmentation, with E2ENet's DSFF module being much more efficient than the other multi-scale methods. Finally, we also studied the impact of the topology update frequency ($\Delta T$) in the DSFF module and observed that our algorithm is not sensitive to this hyperparameter.}
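+ Purely for illustration, the sketch below outlines a DSFF-style topology update scheduled every $\Delta T$ iterations; the drop fraction, the magnitude-based deactivation and the random reactivation are assumptions of the sketch rather than the exact criteria used by the DSFF module:
+
+ \begin{verbatim}
+ import torch
+
+ def update_topology(weight, mask, drop_frac=0.2):
+     # deactivate the smallest-magnitude active connections and
+     # reactivate the same number of inactive ones (illustrative rule)
+     flat_w, flat_m = weight.view(-1), mask.view(-1)
+     active = flat_m.bool()
+     k = int(drop_frac * active.sum().item())
+     if k == 0:
+         return mask
+     inactive_idx = (~active).nonzero().flatten()
+     w_mag = flat_w.abs().masked_fill(~active, float("inf"))
+     drop_idx = torch.topk(w_mag, k, largest=False).indices
+     flat_m[drop_idx] = 0.0                   # deactivate
+     grow = inactive_idx[torch.randperm(len(inactive_idx))[:k]]
+     flat_m[grow] = 1.0                       # reactivate
+     return mask
+
+ # toy fusion weights with feature sparsity S = 0.8
+ w = torch.randn(64, 64)
+ m = (torch.rand_like(w) > 0.8).float()
+ for it in range(1, 3601):
+     # ... forward/backward pass using (w * m) ...
+     if it % 1200 == 0:       # topology update every Delta T iterations
+         m = update_topology(w, m)
+ \end{verbatim}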
541
+
542
+ \begin{table}[!htb]
543
+ \caption{{The comparison with other multi-scale medical image segmentation methods on the AMOS-CT validation dataset. The mDice of MedFormer is collected from \citep{gao2022data}.}} \centering
544
+ \resizebox{0.8\linewidth}{!}{
545
+ \begin{threeparttable}
546
+ \begin{tabular}{c|c|c|c}
547
+ \toprule[1pt]
548
+ {method} & {mDice} & {FLOPs} & {Params} \\
549
+ \hline
550
+
551
+ {DeepLabv3} & {89.0} & {1546.25} & {74.68}\\
552
+ {CoTr} & {77.1} & {1510.53} & {41.87} \\
553
+ {MedFormer} & {\textbf{90.1}} & {2332.75} & {39.59} \\
554
+ {E2ENet (S=0.8)} & {\underline{90.0}} & {1041.42} & {9.43} \\
555
+ {E2ENet (S=0.9)} & {{89.0}} & {\textbf{680.25}} & {\textbf{7.63}} \\
556
+ \bottomrule[1pt]
557
+ \end{tabular}
558
+ \end{threeparttable}
559
+ }
560
+ \label{tab: ablation study for DSFF}
561
+ \end{table}
562
+
563
+ \begin{figure}[!htb]
564
+ \centering
565
+ \includegraphics[width=0.48\textwidth]{cd.drawio.pdf}
566
+ \caption{{The critical distance diagram on the AMOS-CT validation dataset, with the evaluation metric being mDice.}
567
+ }
568
+ \label{fig:cd}
569
+ \end{figure}
570
+
571
+
572
+ \subsubsection{Effect of Restricted Depth-Shift in 3D Convolution}
573
+
574
+
575
+
576
+ We evaluated the effectiveness of our restricted shift strategy by comparing E2ENet with its variants. Without restricted depth-shift, the mDice decreased from 74.5\% to 73.9\% on the BraTS challenge {(as shown in the 3rd row of Table \ref{tab:ablation study brast}) and from 90.1\% to 88.2\% on AMOS-CT (as shown in the 3rd row of Table \ref{tab:ablation study amos}). In Table \ref{tab:ablation study amos}, comparing E2ENet with a $3\times3\times3$ kernel to E2ENet with a $1\times3\times3$ kernel and restricted depth-shift shows that the segmentation accuracy remains the same, both with and without DSFF. This demonstrates that a $1\times3\times3$ kernel with restricted depth-shift is functionally equivalent to a $3\times3\times3$ kernel in terms of segmentation accuracy, while saving computational and memory resources.}
577
+
578
+ Rows 7 and 8 of Table \ref{tab:ablation study brast} demonstrate that when restricted shift is applied to the height or width dimension, the model's performance decreases compared to E2ENet with restricted shift on the depth dimension (4th row), referred to as restricted depth-shift. We also observed that the performance of 3D convolutions with kernel sizes of $1 \times 3 \times 3$ (row 3), $3 \times 1 \times 3$ (row 5), and $3 \times 3 \times 1$ (row 6) decreased significantly without restricted shift compared to their counterparts with restricted shift on the corresponding dimensions (rows 4, 7, and 8, respectively). Furthermore, Table \ref{tab:ablation study amos} shows that in the AMOS-CT challenge, models without restricted depth-shift (2nd and 3rd rows) exhibit a noticeable decline in performance compared to their counterparts with restricted depth-shift. These results demonstrate the effectiveness of restricted shift, especially restricted depth-shift, for medical image segmentation.
579
+
580
+
581
+
582
+ Furthermore, we evaluated the impact of different shift sizes on the model's performance. As shown in Table \ref{tab:ablation study amos}, when the shift size is increased from $(-1, 0, 1)$ to $(-2, 0, 2)$ and $(-3, 0, 3)$, the mDice decreases from $90.1\%$ to $89.8\%$ and $89.7\%$, respectively. This suggests that excessive shifting over-represents depth information in the sparse feature maps and harms the continuity of the 3D image information. We therefore use a shift size of $(-1, 0, 1)$ in our restricted depth-shift strategy as the default setting in our experiments.
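+ Purely for illustration, a minimal PyTorch sketch of restricted depth-shift followed by a $1\times3\times3$ convolution is given below; the one-third channel partition and the circular (wrap-around) shift are simplifying assumptions of the sketch rather than the exact implementation:
+
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+
+ class DepthShiftConv(nn.Module):
+     # shift 1/3 of the channels by -1 and +1 voxel along depth,
+     # then apply a 1x3x3 convolution
+     def __init__(self, channels):
+         super().__init__()
+         self.conv = nn.Conv3d(channels, channels,
+                               kernel_size=(1, 3, 3),
+                               padding=(0, 1, 1))
+
+     def forward(self, x):                 # x: (N, C, D, H, W)
+         c = x.shape[1] // 3
+         x = torch.cat([torch.roll(x[:, :c], -1, dims=2),
+                        x[:, c:2 * c],
+                        torch.roll(x[:, 2 * c:], 1, dims=2)],
+                       dim=1)
+         return self.conv(x)
+
+ y = DepthShiftConv(48)(torch.randn(1, 48, 32, 64, 64))
+ print(y.shape)    # torch.Size([1, 48, 32, 64, 64])
+ \end{verbatim}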
583
+
584
+ {Finally, to demonstrate the advantages of the individual modules, we plot a critical distance diagram using the Nemenyi post-hoc test with a p-value of 0.05 to establish the statistical significance of our modules. In Figure \ref{fig:cd}, the top line represents the axis along which the methods' average ranks are plotted, and a lower value indicates better performance. Methods joined by a thick horizontal black line are not statistically different. From the diagram, we clearly observe that E2ENet with depth-shift significantly outperforms E2ENet without depth-shift. Additionally, incorporating dynamic sparse feature fusion into E2ENet substantially reduces both the number of parameters (from 23.90 M to 11.23 M) and the FLOPs (from 3069.55 G to 969.32 G) while maintaining comparable performance, without any significant degradation.}
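+ For reference, the sketch below shows how average ranks and the Nemenyi critical distance can be computed; the fold scores are toy numbers and the critical value $q_{0.05}$ is taken from the standard Nemenyi table for four methods, so both are illustrative assumptions:
+
+ \begin{verbatim}
+ import numpy as np
+ from scipy.stats import rankdata
+
+ # mDice of k = 4 method variants on N = 5 folds (toy numbers)
+ scores = np.array([[90.1, 88.2, 90.2, 88.6],
+                    [89.9, 88.0, 90.0, 88.4],
+                    [90.3, 88.5, 90.1, 88.7],
+                    [90.0, 88.1, 90.2, 88.5],
+                    [90.2, 88.3, 90.3, 88.8]])
+ N, k = scores.shape
+ ranks = np.vstack([rankdata(-row) for row in scores])  # rank 1 = best
+ avg_ranks = ranks.mean(axis=0)
+
+ q_005 = 2.569                    # Nemenyi critical value for k = 4
+ cd = q_005 * np.sqrt(k * (k + 1) / (6.0 * N))
+ print(avg_ranks, cd)  # rank gaps smaller than cd: not sig. different
+ \end{verbatim}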
585
+
586
+ \subsection{The Impact of Weighting Factors $\alpha$ on PT Score}
587
+ \label{Weighting Factors}
588
+
589
+ In this section, we investigate the impact of the weighting factors $\alpha_1$ and $\alpha_2$ on the Performance Trade-off Score for the AMOS-CT challenge, as defined in Equation \ref{tp}. These factors balance the trade-off between accuracy and resource cost: a higher value of $\alpha_1$ prioritizes accuracy, while a lower value emphasizes resource cost. Figure \ref{fig:convegence2} shows that as $\alpha_1$ decreases, the gap in the Performance Trade-off Score between E2ENet and the other methods becomes larger, indicating that our method is more advantageous when resource cost is prioritized. Even when $\alpha_1$ is set 20 times greater than $\alpha_2$, i.e., when accuracy is prioritized over resource cost, the Performance Trade-off Score of E2ENet remains superior to that of the other baselines. This indicates that the proposed E2ENet architecture is highly efficient in terms of computational cost and memory usage while achieving excellent segmentation performance on the AMOS-CT challenge.
590
+
591
+ \subsection{Convergence Analysis}
592
+ \label{Convergence Analysis}
593
+
594
+ {In this section, we analyze the convergence behavior of E2ENet by examining the loss changes during topology updating (kernel activation/deactivation epochs), comparing it with the best-performing baseline nnUNet, and studying the impact of topology update frequency. From Figure \ref{fig:convegence_single}, we observed that the activation/deactivation of weights initially led to an increase in training loss. However, over the long term, the training converged. Additionally, we compared the learning curve of E2ENet with that of nnUNet and found that E2ENet converged even faster than nnUNet, as shown in the subplot in Figure \ref{fig:convegence_multi} (a).
595
+ To account for the effect of the number of parameters, we scaled down nnUNet to have a similar number of parameters as E2ENet and observed that it converged even more slowly than the original nnUNet.
596
+ We also studied the impact of topology update frequency.
597
+ As shown in Figure \ref{fig:convegence_multi} (b), when the topology updating frequency is increased, the convergence speed may decrease slightly, but the impact is not significant.}
598
+
599
+ \begin{figure}[!htb]
600
+ \centering
601
+ \includegraphics[width=0.48\textwidth]{convergence.drawio_1.pdf}
602
+ \caption{{The learning curve of E2ENet on AMOS-CT, with green dotted vertical lines indicating the epochs of weight activation and deactivation. The blue line represents the ratio of weight deactivation/reactivation throughout the training process.}
603
+ }
604
+ \label{fig:convegence_single}
605
+ \end{figure}
606
+
607
+ \begin{figure}[!htb]
608
+ \centering
609
+ \includegraphics[width=0.48\textwidth]{learning_curve.drawio.pdf}
610
+ \caption{{(a) Comparing the learning curve of E2ENet with that of nnUNet and scaled-down nnUNet (referred to as nnUNet (-)); (b) Comparing the learning curve of E2ENet with different topology update frequencies.}
611
+ }
612
+ \label{fig:convegence_multi}
613
+ \end{figure}
614
+
615
+ \subsection{Generalizability Analysis}
616
+ \label{Generalizability Analysis}
617
+ {To evaluate the generalizability of the resulting architectures, we compared E2ENet, pre-trained on AMOS-CT and fine-tuned on AMOS-MRI for 250 epochs, with nnUNet trained under the same transfer protocol. Note that the topology (weight connections) of E2ENet is determined during AMOS-CT pre-training and remains fixed during the fine-tuning phase. From Table \ref{tab: ablation study for DSFF1}, we find that the E2ENet architecture, initially derived for AMOS-CT, transfers effectively to AMOS-MRI, yielding better performance than nnUNet and the other baselines, which were either transferred from CT, trained solely on MRI data, or trained jointly on CT and MRI data.}
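+ Purely for illustration, the sketch below shows fine-tuning with a frozen sparse topology; the toy layer, loss and optimizer are assumptions of the sketch, and only the idea of keeping the binary masks fixed while updating the surviving weights reflects the procedure described above:
+
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+
+ # toy sparse layer: mask = topology learned during CT pre-training
+ w = torch.nn.Parameter(torch.randn(64, 64))
+ mask = (torch.rand(64, 64) > 0.8).float()   # fixed during fine-tuning
+ opt = torch.optim.SGD([w], lr=0.01)
+
+ for step in range(250):                     # fine-tune on the new domain
+     x, target = torch.randn(8, 64), torch.randn(8, 64)
+     loss = F.mse_loss(x @ (w * mask).t(), target)
+     opt.zero_grad()
+     loss.backward()
+     opt.step()
+     with torch.no_grad():
+         w.mul_(mask)                        # inactive weights stay at zero
+ \end{verbatim}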
618
+
619
+ \begin{table}[!htb]
620
+ \caption{{Evaluating the generalizability of E2ENet on AMOS-MRI dataset, CT $\rightarrow$ MRI: using CT dataset for pretraining and MRI dataset for fine-tuning; MRI: using only MRI dataset for training; CT+MRI: using both CT and MRI dataset for training. The results, except for E2ENet and nnUNet (CT $\rightarrow$ MRI), are collected from the \href{http://www.amos.sribd.cn/about.html}{AMOS website}}.}
621
+ \centering
622
+ \resizebox{\linewidth}{!}{
623
+ \begin{threeparttable}
624
+ \begin{tabular}{c|ccc|ccc}
625
+ \toprule[1pt]
626
+ {method} & \multicolumn{3}{c|}{{mDice}} & \multicolumn{3}{c}{{mNSD}} \\
627
+ & {CT $\rightarrow$ MRI} & {MRI} & {CT + MRI} & {CT $\rightarrow$ MRI} & {MRI} & {CT + MRI} \\
628
+ \toprule[0.5pt] \bottomrule[0.5pt]
629
+ \hline
630
+ {E2ENet} & {\textbf{87.8}} & {/} & {/} & {\textbf{83.9}} & {/} & {/} \\
631
+ {nnUNet} & {87.4} & {87.1} & {87.7} & {83.3} & {83.1} & {82.7} \\
632
+ {VNet} & {/} & {83.9} & {75.4} & {/} & {65.8} & {65.6} \\
633
+ {nnFormer} & {/} & {80.6} & {75.48} & {/} & {74.0} & {66.63} \\
634
+ {CoTr} & {/} & {77.5} & {73.46} & {/} & {78.0} & {65.35} \\
635
+ {Swin UNETR} & {/} & {75.7} & {77.52} & {/} & {65.3} & {69.10} \\
636
+ {UNETR} & {/} & {75.3} & {/} & {/} & {70.1} & {/} \\
637
+ \hline
638
+ \end{tabular}
639
+ \end{threeparttable}}
640
+ \label{tab: ablation study for DSFF1}
641
+ \end{table}
642
+
643
+ {In the AMOS-CT dataset, there is a domain shift between the training and test datasets due to variations in the image acquisition scanners \citep{ji2022amos}. Thus, we also assess the generalizability of our approach by evaluating its performance on the AMOS-CT test dataset. As demonstrated in Table \ref{Tab:AMOS_v_t}, E2ENet exhibits comparable performance even in the presence of domain shift when compared to nnUNet. This is achieved while maintaining lower computational and memory costs.}
644
+
645
+ \begin{table}[!htb]
646
+ \centering
647
+ \caption{Quantitative comparisons of segmentation performance on AMOS-CT test dataset. The mDice {and mNSD} of these baselines, except for nnUNet, are collected from the AMOS challenge \citep{ji2022amos}. $^{\dag}$ indicates the results without postprocessing that are reproduced by us. $^{\ddag}$ denotes the results with postprocessing that are reproduced by us. $^{*}$ indicates the results with postprocessing.}\resizebox{0.9\linewidth}{!}{
648
+ \begin{threeparttable}
649
+ \begin{tabular}{l|c|cc|c|c}
650
+ \toprule[1pt]
651
+ {Method} & mDice & {Params} & {FLOPs} \tnote{1} & {PT score-Test} & {mNSD}\\
652
+ \toprule[0.5pt] \bottomrule[0.5pt]
653
+ CoTr \tnote{} & $80.9$ &41.87 &{1510.53} &1.11 &{66.3}\\
654
+ nnFormer \tnote{} & $85.6$ &150.14 &{1343.65} &1.11 & {72.5}\\
655
+ \hline
656
+ UNETR \tnote{} & $79.4$ &93.02 & \textbf{391.03} & 1.41 & {60.8}\\
657
+ Swin-UNETR \tnote{} & $86.3$ &62.83 &1562.99 &1.13 & {73.8}\\
658
+ \hline
659
+ VNet \tnote{} & $82.9$ & 45.65 &1737.57 &1.11 & {67.6}\\
660
+ \hline
661
+ nnUNet$^{\dag}$ & 90.6 & 30.76 & 1067.89 & 1.31 & {82.0}\\
662
+ nnUNet$^{\ddag}$ & \textbf{91.0} & 30.76 & 1067.89 & 1.31 & {82.6}\\ \hline
663
+ E2ENet$^*$ (s=0.7) \tnote{} & \underline{90.7} &11.23 & 969.32 &1.54 & {82.2}\\
664
+ E2ENet$^*$ (s=0.8) \tnote{} & \underline{90.7} &\underline{9.44} & 778.74 &\underline{1.65} & {82.1}\\
665
+ E2ENet$^*$ (s=0.9) \tnote{} & 90.4 & \textbf{7.64} & \underline{492.29} &\textbf{1.89} & {81.4}\\
666
+ \hline
667
+ E2ENet (s=0.7) \tnote{} & {90.6} &11.23 & 969.32 &1.54 & {82.0}\\
668
+ E2ENet (s=0.8) \tnote{} & {90.4} &\underline{9.44} & 778.74 &\underline{1.65} & {81.8}\\
669
+ E2ENet (s=0.9) \tnote{} &90.1 & \textbf{7.64} & \underline{492.29} &\textbf{1.89} & {80.7}\\
670
+ \bottomrule[1pt]
671
+ \end{tabular}
672
+ \begin{tablenotes}
673
+ \item[1] The inference FLOPs are calculated based on a patch size of $1\times128 \times 128 \times 128$, without considering the postprocessing cost.
674
+ \end{tablenotes}
675
+ \end{threeparttable}
676
+ }
677
+ \label{Tab:AMOS_v_t}
678
+ \end{table}
679
+
680
+
681
+
682
+ \subsection{{Model Capacity Analysis}}
683
+ {To ensure a fair comparison, we adjusted the configuration of nnUNet by reducing its width from 32 to 27 and its depth from 5 to 4, yielding a modified version referred to as nnUNet (-). We also enlarged E2ENet by increasing its width from 48 to 58, yielding a modified version referred to as E2ENet (+). The results are reported in Table \ref{Tab:Model Capacity}. Scaling down nnUNet decreased its mDice and mNSD, whereas scaling up E2ENet increased its mDice while keeping a comparable mNSD. This indicates that the competitive performance of the memory- and computation-efficient E2ENet is not simply a consequence of the dataset requiring only a small number of parameters and computations.
684
+ }
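+ For reference, parameter and FLOP counts of the kind reported in these comparisons can be obtained with standard tooling; the sketch below uses fvcore with a placeholder network and the patch size from the table footnote (fvcore reports multiply-add counts):
+
+ \begin{verbatim}
+ import torch
+ from fvcore.nn import FlopCountAnalysis
+
+ # placeholder 3D network; replace with the model under study
+ model = torch.nn.Conv3d(1, 32, kernel_size=(1, 3, 3),
+                         padding=(0, 1, 1))
+ x = torch.randn(1, 1, 128, 128, 128)    # patch 1 x 128 x 128 x 128
+
+ n_params = sum(p.numel() for p in model.parameters())
+ flops = FlopCountAnalysis(model, x).total()
+ print(f"{n_params / 1e6:.2f} M params, {flops / 1e9:.2f} G FLOPs")
+ \end{verbatim}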
685
+
686
+ \begin{table}[!htb]
687
+ \centering
688
+ \caption{{A comparison under a similar FLOPs budget on the AMOS-CT dataset in both cases: when the nnUNet is scaled down, or E2ENet is scaled up.}}\resizebox{0.8\linewidth}{!}{
689
+ \begin{threeparttable}
690
+ \begin{tabular}{l|c|cc|c}
691
+ \toprule[1pt]
692
+ {Method} & {mDice} & {Params} & {FLOPs} \tnote{1} & {mNSD}\\
693
+ \toprule[0.5pt] \bottomrule[0.5pt]
694
+ {E2ENet (S=0.8)} \tnote{} & {90.0} & {\textbf{9.43}}& {778.74} & {\textbf{82.3}} \\
695
+ {nnUNet} \tnote{} & {90.0} & {31.20} & {1067.89} & {82.1} \\
696
+ \hline
697
+ {nnUNet (-)}\tnote{} & {89.7} & {12.96} & {\textbf{755.79}} & {81.9}\\
698
+ \hline
699
+ {E2ENet (+)} \tnote{} & {\textbf{90.4}} & {10.37} & {1119.17} & {82.2}\\
700
+ \bottomrule[1pt]
701
+ \end{tabular}
702
+ \begin{tablenotes}
703
+ \item[1] {The inference FLOPs are calculated based on a patch size of $1\times128 \times 128 \times 128$, without considering the postprocessing cost.}
704
+ \end{tablenotes}
705
+ \end{threeparttable}
706
+ }
707
+ \label{Tab:Model Capacity}
708
+ \end{table}
709
+
710
+ \begin{figure}[!htb]
711
+ \centering
712
+ \includegraphics[width=0.48\textwidth]{alpha.drawio.pdf}
713
+ \caption{{Comparison of Performance Trade-Off score between E2ENet and other models on AMOS-CT challenge with varying $\alpha_1$ and $\alpha_2$ values. E2ENet outperforms other baselines in achieving a better trade-off between accuracy and efficiency across different preferences.}}
714
+ \label{fig:convegence2}
715
+ \end{figure}
716
+
717
+ \begin{figure*}[!htb]
718
+ \centering
719
+ \includegraphics[width=\textwidth]{E2ENet-Page-7-amos-btcv.pdf}
720
+ \caption{Qualitative comparison of the proposed E2ENet and nnUNet on AMOS-CT and BTCV challenges.
721
+ }
722
+ \label{fig:amos}
723
+ \end{figure*}
724
+
725
+ \begin{figure}[!htb]
726
+ \centering
727
+ \includegraphics[width=0.45\textwidth]{E2ENet-Page-7-brast.pdf}
728
+ \caption{Qualitative comparison of the proposed E2ENet and nnUNet on BraTS Challenge in MSD.
729
+ }
730
+ \label{fig:brast}
731
+ \end{figure}
732
+
733
+
734
+
735
+ \subsection{Qualitative Results}
736
+ In this section, we compare the proposed E2ENet and nnUNet qualitatively on three challenges. To make the comparison easier, we highlight detailed segmentations with red dashed boxes.
737
+
738
+
739
+ \subsubsection{AMOS-CT Challenge}
740
+ Figure \ref{fig:amos} (a) presents a qualitative comparison of our proposed E2ENet with nnUNet on the AMOS-CT challenge. As a widely used baseline model, nnUNet has been evaluated on multiple medical image segmentation challenges. Our results demonstrate that, on certain samples, E2ENet can improve segmentation quality and overcome some of the challenges faced in the AMOS-CT challenge. For example, as shown in the first column, E2ENet accurately distinguishes between the stomach and esophagus, which can be a difficult task due to their close proximity. In the second column, E2ENet better differentiates the duodenum from the background, while in the third column, E2ENet accurately identifies the precise boundaries of the liver, a structure that is prone to over-segmentation. These examples demonstrate the potential of our proposed method to improve the accuracy of medical image segmentation to some extent, especially in challenging cases such as distinguishing between closely located organs or accurately segmenting complex shapes.
741
+
742
+ \subsubsection{BTCV Challenge}
743
+ In Figure \ref{fig:amos} (b), we present a qualitative comparison of our proposed E2ENet method with nnUNet as a baseline model on the BTCV challenge. Our results demonstrate the effectiveness of our proposed method in addressing some of the challenges of medical image segmentation. For example, as shown in the first and third columns, our E2ENet method accurately distinguishes the stomach from the background without over- or under-segmentation, which can be difficult due to the low contrast in the image. In the second column, E2ENet performs well in differentiating the stomach from the spleen. These examples suggest that our DSFF module can effectively encode feature information for improved performance in medical image segmentation.
744
+
745
+
746
+ \subsubsection{BraTS Challenge in MSD}
747
+ Figure \ref{fig:brast} presents a qualitative comparison of our proposed E2ENet method with the nnUNet on the BraTS challenge with highly variable shapes of the segmentation targets. Based on the results of the baseline model, nnUNet, we observed that accurately distinguishing the edema (ED) from the background is difficult, as the edema tends to have less smooth boundaries. Our results suggest that E2ENet may have some potential to improve the distinguishability of the edema boundaries, as evidenced by the relatively better segmentation results in the first, second, and fourth columns. Moreover, E2ENet accurately differentiates the enhanced tumor (ET) from the edema, as shown in the third column, which is a challenging task due to the similarity in appearance between these two regions, and the dispersive distribution of ET. These findings suggest that E2ENet is a promising method for accurately segmenting brain tumors in challenging scenarios.
748
+
749
+
750
+
751
+ \subsection{Feature Fusion Visualization}
752
+
753
+
754
+
755
+ \begin{figure*}[!htb]
756
+ \centering
757
+ \includegraphics[width=\textwidth]{topology_dataset.drawio_1.pdf}
758
+ \caption{The proportions of feature map connections before and after training with the DSFF mechanism at a sparsity level of 0.8 on the AMOS-CT, BTCV, and BraTS challenges. The proportions before training are shown in (a), and the proportions after training on the three challenges are shown in (b), (c), and (d), respectively.}
759
+ \label{fig:feature_fusion}
760
+ \end{figure*}
761
+
762
+ \begin{figure*}[!htb]
763
+ \centering
764
+ \includegraphics[width=\textwidth]{topology-during.drawio.pdf}
765
+ \caption{{The proportions of feature map connections during training with the DSFF mechanism at a sparsity level of 0.8 on AMOS-CT.}}
766
+ \label{fig:feature_fusion1}
767
+ \end{figure*}
768
+
769
+
770
+ In this section, we explore how the DSFF mechanism fuses features from three directions (upward, forward, and downward) during training. To shed light on this mechanism, we analyze the {proportions} of feature map connections from the different directions to a specific fused feature node, at a feature sparsity level of 0.8. Figure \ref{fig:feature_fusion} visualizes these {proportions} after training on the AMOS, BTCV, and BraTS challenges, {and Figure \ref{fig:feature_fusion1} visualizes these proportions during training on the AMOS challenge.} We calculate these {proportions} by dividing the number of non-zero connections in a given direction by the total number of non-zero connections within the fused feature node.
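+ A minimal Python sketch of this bookkeeping is given below; the mask shapes and the random masks are illustrative placeholders for the learned binary connection masks:
+
+ \begin{verbatim}
+ import numpy as np
+
+ def direction_proportions(masks_by_direction):
+     # fraction of non-zero connections arriving from each direction
+     counts = {d: int(np.count_nonzero(m))
+               for d, m in masks_by_direction.items()}
+     total = max(sum(counts.values()), 1)
+     return {d: c / total for d, c in counts.items()}
+
+ masks = {"upward":   np.random.rand(48, 48) > 0.8,
+          "forward":  np.random.rand(48, 48) > 0.8,
+          "downward": np.random.rand(48, 48) > 0.8}
+ print(direction_proportions(masks))
+ \end{verbatim}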
771
+
772
+ After training, we observe in Figure \ref{fig:feature_fusion} (b)-(d) that the DSFF module learns to assign greater importance to features from the ``forward'' direction for most fused feature nodes. For example, at the first feature level, the flow and processing of features resemble those of a fully convolutional network (FCN), which preserves the spatial dimensions of the input image. At the second feature level, the original image is downsampled by a factor of $1/2$ and flows through another FCN. From this perspective, E2ENet can take advantage of FCN-like processing to effectively incorporate and preserve multi-scale information while reducing computation cost.
773
+
774
+
775
+ Simultaneously, the complementary feature flows from the ``upward'' and ``downward'' directions provide richer information. At lower feature levels and earlier fusion steps, the ``upward'' information is more dominant than the ``downward'' information. This prioritization of upward feature flow resembles the design of the decoders in UNet and UNet++, while alleviating the semantic gap. As the feature level and fusion step increase, the proportion of feature map connections in the ``downward'' direction increases, which allows the network to preserve low-level information and integrate it with high-level information, leading to improved performance in capturing fine details. {Interestingly, this trend becomes increasingly apparent during training, as illustrated in Figure \ref{fig:feature_fusion1}.}
776
+
777
+
778
+ \subsection{Organ Volume Statistics and Class-wise Results Visualization}
779
+
780
+ \begin{figure*}[!htb]
781
+ \centering
782
+ \includegraphics[width=1.0\textwidth]{E2ENet-btcv-va1.pdf}
783
+ \vskip -0.1in
784
+ \caption{(a) The organ volume statistics of the AMOS-CT training dataset. (b) Class-wise Dice of nnUNet without postprocessing (visualization of Table \ref{tab:AMOS_c_Dice}). (c) Class-wise Dice differences between E2ENet with feature sparsity $0.7$ (without postprocessing) and nnUNet (without postprocessing) on the AMOS-CT validation dataset. A positive value means that E2ENet outperforms nnUNet, and vice versa.
785
+ }
786
+ \vskip -0.1in
787
+ \label{fig:amos-va}
788
+ \end{figure*}
789
+
790
+ \begin{figure*}[!htb]
791
+ \centering
792
+ \includegraphics[width=1.0\textwidth]{E2ENet-amos-va1.pdf}
793
+ \caption{(a) The organ volume statistics of the BTCV training dataset. (b) Class-wise Dice of nnUNet (visualization of Table \ref{tab:BTCV_class_wise}). Note that AG denotes the average of the right and left adrenal glands (RAG and LAG). (c) Class-wise Dice differences between E2ENet with feature sparsity $0.7$ and nnUNet on the BTCV test dataset. A positive value means that E2ENet outperforms nnUNet, and vice versa.
794
+ }
795
+ \vskip -0.1in
796
+ \label{fig:btcv-va}
797
+ \end{figure*}
798
+
799
+
800
+ \begin{figure*}[!htb]
801
+ \centering
802
+ \includegraphics[width=0.8\textwidth]{E2ENet-brain-va1.pdf}
803
+ \caption{(a) The organ volume statistics of the BraTS training dataset. (b) Class-wise Dice of nnUNet (visualization of Table \ref{tab:braint_results}). (c) Class-wise Dice differences between E2ENet with feature sparsity $0.7$ and nnUNet on 5-fold cross-validation of the training dataset. A positive value means that E2ENet outperforms nnUNet, and vice versa.
804
+ }
805
+ \vskip -0.1in
806
+ \label{fig:brain-va}
807
+ \end{figure*}
808
+
809
+ In this section, we analyze the relationship between organ volume and segmentation accuracy on the AMOS-CT, BTCV, and BraTS challenges. The results, depicted in Figures \ref{fig:amos-va}, \ref{fig:btcv-va} and \ref{fig:brain-va}, show that small organs tend to have relatively low segmentation accuracy. In the AMOS-CT challenge, RAG (right adrenal gland), LAG (left adrenal gland), Gall (gallbladder), and Eso (esophagus) are more challenging to segment accurately, possibly because smaller organ volumes provide less visual information for the segmentation algorithm to work with. Nevertheless, our proposed method, E2ENet, demonstrates comparable or better performance on these small organs, particularly for ``LAG'', for which the Dice improves from $81.7\%$ to $82.4\%$. On the BTCV challenge, the Dice of ``Gall'', considered the most challenging organ, improves from $75.3\%$ to $78.1\%$ when using E2ENet instead of nnUNet. For the BraTS challenge, E2ENet shows the largest improvement on the ``ET'' region, which is considered the most challenging class, with an increase of $0.7\%$ in Dice.
810
+
811
+ These results indicate that by applying the DSFF mechanism, E2ENet is able to effectively utilize multi-scale information, potentially leading to improved performance in segmenting small organs. It is important to note that other factors, such as the quality and resolution of the medical images, as well as the complexity of the anatomy being imaged, may also impact the performance of the segmentation algorithms. Future work could focus on further exploring the potential impact of these factors on segmentation accuracy.
812
+
813
+ \section{Conclusion}
814
+ In this work, we address the challenge of designing a 3D medical image segmentation method that is both accurate and efficient. By proposing a dynamic sparse feature fusion mechanism and incorporating restricted depth-shift in 3D convolution, our E2ENet model improves performance on 3D medical image segmentation tasks while significantly reducing computation and memory overhead. The dynamic sparse feature fusion mechanism adaptively learns the importance of each feature map connection and zeros out the less important ones, leading to a more efficient feature representation without sacrificing performance. Additionally, the experiments demonstrate that restricted depth-shift in 3D convolution enables the model to capture spatial information more effectively and efficiently.
815
+
816
+ Extensive experiments on three benchmarks show that E2ENet consistently achieves a superior trade-off between accuracy and efficiency compared to prior state-of-the-art baselines. While E2ENet provides a promising solution for balancing accuracy and computational cost, future work could explore learnable shift offsets, which may lead to even better performance. {Additionally, the effectiveness of E2ENet depends on the choice of the initial architecture; applying the DSFF module and shift convolution to other architectures is an interesting direction for future work.} Furthermore, future advancements in hardware support for sparse neural networks could fully unlock the potential of sparse training methods across further resource-constrained applications.
817
+
818
+
819
+ \section{Acknowledgement}
820
+ This work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-4456 and EINF-5565.
821
+
822
+
823
+
824
+ \bibliography{example_paper}
825
+ \bibliographystyle{icml2023}
826
+
827
+ \end{document}