\documentclass[10pt,twocolumn,letterpaper]{article} \usepackage{cvpr} \usepackage{times} \usepackage{epsfig} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{multirow} \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref} \cvprfinalcopy \def\cvprPaperID{2717} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}} \ifcvprfinal\pagestyle{empty}\fi \begin{document} \title{All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation} \author{Di Xie\\ {\tt\small xiedi@hikvision.com} \and Jiang Xiong\\ {\tt\small xiongjiang@hikvision.com}\\ Hikvision Research Institute\\ Hangzhou, China \and Shiliang Pu\\ {\tt\small pushiliang@hikvision.com} } \maketitle \begin{abstract} Deep neural networks are difficult to train, and this predicament becomes worse as depth increases. The essence of the problem lies in the magnitude of the backpropagated errors, which leads to the gradient vanishing or exploding phenomenon. We show that a variant of regularizer which enforces orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions for training a specific-structured (repetitive triple modules of Conv-BN-ReLU) extremely deep convolutional neural network (CNN) \emph{WITHOUT} any shortcuts/identity mappings from scratch. Experiments show that our proposed solutions achieve distinct improvements for a 44-layer and a 110-layer plain network on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of their residual counterparts. In addition, we propose new principles for designing network structures from the insights evoked by orthonormality. Combined with the residual structure, we achieve comparable performance on the ImageNet dataset. \end{abstract} \section{Introduction} Deep convolutional neural networks have improved performance across a wide variety of computer vision tasks, especially image classification~\cite{Krizhevsky2012ImageNet,Simonyan2014Very,Szegedy2015Going, Sermanet2013OverFeat,Zeiler2013Visualizing}, object detection~\cite{yang2016craft,Ren2016Faster,ShrivastavaGG16} and segmentation~\cite{long2015fully,chen2016deeplab,pinheiro2016learning}. Much of this improvement should be credited to increasingly deeper network architectures. In just four years, the number of layers has escalated from several to hundreds, allowing networks to learn more abstract and expressive representations from large amounts of data, \eg~\cite{ILSVRC15}. Simply stacking more layers onto current architectures is not a reasonable solution, since it incurs vanishing/exploding gradients~\cite{Bengio,Glorot2010Understanding}. For relatively shallow networks, a variety of initialization and normalization methodologies have been proposed~\cite{Glorot2010Understanding,Saxe,He2015Delving,Sussillo2015Random, Kr2015Data,Mishkin2015All,Ioffe2015Batch,Arpit2016Normalization}, while deep residual learning~\cite{He2015Residual} is utilized to deal with extremely deep ones.
Though other works, \eg~\cite{Srivastava2015Deep,Srivastava2015Highway}, have also reported that they can train extremely deep networks with improved performance, the deep residual network~\cite{He2015Residual} is still the best and most practical solution for dealing with the degradation of training accuracy as depth increases. However, as interpreted by Veit \etal~\cite{Veit2015Exp}, residual networks are essentially exponential ensembles of relatively shallow networks (usually only 10-34 layers deep); they avoid the vanishing/exploding gradient problem instead of resolving it directly. Intrinsically, the performance gain of such networks is determined by their multiplicity, not their depth. So how to train an ultra-deep network is still an open research question with which few works are concerned. Most research still focuses on designing more complicated structures based on the residual block and its variants~\cite{Larsson2016FractalNet,Zagoruyko2016Wide}. Does there exist an applicable methodology that can be used for training a genuinely deep network? In this paper, we try to find a direct and feasible solution to the above question. We argue that batch normalization (BN)~\cite{Ioffe2015Batch} is necessary to ensure stable propagation in the forward pass of ultra-deep networks, and that the key to learnability lies in the backward pass, which propagates errors in a top-down way. We constrain the network's structure to repetitive modules consisting of Convolution, BN and ReLU~\cite{Nair2010ReLU} layers (Fig.~\ref{fig:networkmodule}) and analyze the Jacobian of the output with respect to the input between consecutive modules. We show that BN cannot guarantee the magnitude of errors to be stable in the backward pass, and that this amplification/attenuation effect on the signal accumulates layer by layer, resulting in exploding/vanishing gradients. From the view of norm preservation, we find that keeping orthonormality between filter banks within a layer during the learning process is a necessary and sufficient condition for the stability of backward errors. While this condition cannot be satisfied exactly in nonlinear networks equipped with BN, the orthonormal constraint can mitigate the attenuation of the backward signal, which we verify by experiments. An orthonormal regularizer is introduced to replace traditional weight decay regularization~\cite{Girosi95Reg}. Experiments show gains of $3\%\thicksim4\%$ for a 44-layer network on CIFAR-10. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\linewidth]{NetworkModule.pdf} \end{center} \caption{Diagram of the plain CNN architecture (left) and the repetitive triple-layer module (right) used in this paper. The green box denotes input data, red boxes denote parametric layers (convolutional or fully connected), yellow boxes represent batch normalization layers and blue boxes denote activation layers. This structure is similar to the plain CNN designed by He \etal~\cite{He2015Residual}.} \label{fig:networkmodule} \end{figure} However, as depth increases, \eg beyond 100 layers, the non-orthogonal impact induced by BN, ReLU and gradient updates accumulates, which breaks the dynamical isometry~\cite{Saxe} and makes learning infeasible. To neutralize this impact, we design a modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. We demonstrate the quasi-isometry property with both mathematical analysis and experiments.
With this modulation, a global scale factor can be safely applied to the magnitude of errors during the backward pass in a layer-wise fashion. Combined with orthonormality, experiments show that the plain CNN shown in Fig.~\ref{fig:networkmodule} can be trained reasonably well and can match the performance of its residual counterpart. The contributions of this paper are summarized as follows. 1) We demonstrate the necessity of applying BN and explain a potential reason for the degradation problem in optimizing deep CNNs; 2) A concise methodology equipped with orthonormality and modulation is proposed, which provides more insights into the learning dynamics of CNNs; 3) Experiments and analysis reveal interesting phenomena and promising research directions. \section{Related Work} \textbf{Initialization in Neural Networks.} As depth increases, Gaussian initialization no longer suffices to train a network from scratch~\cite{Simonyan2014Very}. The two most prevalent works are proposed by Glorot \& Bengio~\cite{Glorot2010Understanding} and He \etal~\cite{He2015Delving}, respectively. The core idea of their works is to keep the unit variance of each layer's output. Sussillo \& Abbott~\cite{Sussillo2015Random} propose a novel random walk initialization that mainly focuses on adjusting the so-called scalar factor $g$ to keep the ratio of input/output error magnitudes close to $1$. Kr\"ahenb\"uhl \etal~\cite{Kr2015Data} introduce data-dependent initialization to ensure that all layers train at an equal rate. Orthogonality has also been considered. Saxe \etal~\cite{Ganguli2013Learning,Saxe} analyse the dynamics of learning in deep linear neural networks. They find that the convergence rate of random orthogonal initialization of weights is equivalent to that of unsupervised pre-training, both of which are superior to random Gaussian initialization. The LSUV initialization method~\cite{Mishkin2015All} not only takes advantage of orthonormality but also makes use of the unit variance of each layer's output. In our opinion, a well-behaved initialization is not enough to resist the variation as learning progresses; that is, a good initial condition (\eg isometry) cannot be guaranteed to hold all the time, especially in extremely deep networks. This argument forms the basic idea that motivates us to explore solutions for genuinely deep networks. \textbf{Signal Propagation Normalization.} Normalization is a common and ubiquitous technique in the machine learning community. The whitening and decorrelation of input data brings benefits to both deep learning and other machine learning algorithms, and helps speed up the training process~\cite{Lecun2000Efficient}. Batch normalization~\cite{Ioffe2015Batch} generalizes this idea by normalizing each layer's output to an identical distribution, which reduces the internal covariate shift. Weight normalization~\cite{Salimans2016Weight} is inspired by BN: it decouples the norm of the weight vector from its direction without introducing dependencies between the examples in a minibatch. To overcome BN's dependence on the minibatch size, layer normalization~\cite{Ba2016Layer} is proposed to solve the normalization problem for recurrent neural networks. But this method cannot be applied directly to CNNs, as its underlying assumption does not match the statistics of convolutional hidden layers.
To be more applicable to CNNs, Arpit~\etal introduce normalization propagation~\cite{Arpit2016Normalization} to reduce the internal covariate shift for convolutional layers and even rectified linear units. The idea of normalizing each layer's activations is promising, but a little idealistic in practice. Since the incoherence prior on the weight matrix does not actually hold at initialization and even worsens over iterations, the normalized magnitude of each layer's activations cannot be guaranteed in an extremely deep network. In our implementation, it cannot even prevent the activations' magnitude from exploding right after initialization. \textbf{Signal Modulation.} Little work has been done explicitly in this field, but the idea of modulation is implicitly integrated in many methods. In a broad sense, modulation can be viewed as a persistent process combining normalization with other methodologies to keep the magnitudes of a variety of signals steady during learning. With this understanding, we can summarize all the methods above within a unified framework, \eg batch normalization~\cite{Ioffe2015Batch} as activation modulation, weight normalization~\cite{Salimans2016Weight} as parameter modulation, \etc. \section{Methodology} \subsection{Why is BN a requisite?}\label{sec:BN} Owing to the complex dynamics of learning in nonlinear neural networks~\cite{Saxe}, even a proven mathematical theory cannot guarantee that a variety of signals remain isometric at the same time in practical applications. Depth itself produces a ``butterfly effect'' with exponential diffusion, while nonlinearity gives rise to indefiniteness and randomness. Recently proposed methods~\cite{Arpit2016Normalization,Sussillo2015Random,Kr2015Data} which utilize isometry fail to keep the propagation of signals steady in over-100-layer networks. These methods try to stabilize the magnitude of signals in one direction (forward/backward) as a substitute for controlling the signals in both directions. However, given the complex variations of the signals, it is impossible to satisfy the conditions in both directions with just one modulation method. An alternative option is to simplify the problem by constraining the magnitude of signals in one direction, so that we can pay full attention to the other direction\footnote{For a specific weight connecting the $i$th neuron in the $l$th layer and the $j$th neuron in the $(l+1)$th layer, $w^{(l)}_{ij}$, its gradient can be computed as $\nabla~w^{(l)}_{ij}=a^{(l)}_{i}\times\delta^{(l+1)}_{j}$. If the two variables are independent of each other, then the magnitude of the gradient is directly related to just one factor~(activation/error).}. Batch normalization is an existing solution that satisfies this requirement. It performs normalization in the forward pass to reduce internal covariate shift in a layer-wise way\footnote{Methods that modulate signals without a layer-wise manner, \eg~\cite{Arpit2016Normalization}, accumulate the indefiniteness in a superlinear way and the propagated signals finally go out of control.}, which, in our opinion, allows us to focus all the analyses on the opposite direction.
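As a concrete illustration of the footnote above, the following is a minimal NumPy sketch (with hypothetical shapes and variable names, not part of our training code) of how the gradient of a fully connected layer factorizes into forward activations and backward errors; when BN keeps the activations at unit scale, the gradient magnitude is governed essentially by the error magnitude alone.
\begin{verbatim}
import numpy as np

np.random.seed(0)
m, f_in, f_out = 64, 128, 128      # batch size, fan-in, fan-out

# BN-normalized activations entering the layer (zero mean, unit variance)
a = np.random.randn(m, f_in)
# errors arriving at the layer output in the backward pass
delta = 1e-3 * np.random.randn(m, f_out)

# gradient w.r.t. W accumulates the outer products a_i * delta_j over the batch
grad_W = a.T @ delta

# with a held at unit scale, ||grad_W|| is driven by ||delta||
print(np.linalg.norm(grad_W), np.linalg.norm(delta))
\end{verbatim}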
From~\cite{Ioffe2015Batch}, during the backpropagation of the gradient of the loss $\ell$ through BN, we can formulate the errors between adjacent layers as follows: \begin{equation} \frac{\partial~\ell}{\partial~x_{i}}=\frac{1}{\sqrt{\sigma_{B}^{2}+\epsilon}} (\delta_{i}-\mu_{\delta}-\frac{\hat{x}_{i}}{m}\sum_{j=1}^{m}\delta_{j}\hat{x}_{j}) \label{equation1} \end{equation} where $x_{i}$ is the $i$th sample in a mini-batch (we omit the activation index for simplicity), so $\frac{\partial~\ell}{\partial~x_{i}}$ denotes the output error. $\delta_{i}=\frac{\partial~\ell}{\partial~y_{i}}\cdot\gamma$, where $\frac{\partial~\ell}{\partial~y_{i}}$ is the input error and $\gamma$ is the scale parameter of BN. $\mu_{\delta}=\frac{1}{m}\sum_{i=1}^{m}\delta_{i}$ is the mean of the scaled input errors, where $m$ denotes the mini-batch size. $\hat{x}_{i}=\frac{x_{i}-\mu_{B}}{\sqrt{\sigma_{B}^{2}+\epsilon}}$ is the corresponding normalized activation. Equation~\ref{equation1} represents a kind of ``pseudo-normalization'' transformation for the error signals $\delta_{i}$, compared with the forward operation. If the distribution of the input errors $\delta_{i}$ is symmetric with zero mean, we can infer that the mean of the output errors is approximately zero: the transformation centers the errors, and although the last term $\frac{\hat{x}_{i}}{m}\sum_{j=1}^{m}\delta_{j}\hat{x}_{j}$ biases the distribution, these biases may cancel each other out owing to the normalizing coefficient $\hat{x}_{i}$, which is approximately normally distributed. Besides, the errors are normalized with a mismatched variance. This type of transformation changes the error signal's original distribution in a layer-wise way, since the second order moment of each layer's output errors progressively loses its isometry. However, this phenomenon can be ignored when we only consider a pair of consecutive layers. In a sense, we can regard the backward propagated errors as being normalized just like the activations in the forward pass, which is why we apply the ``Conv-BN-ReLU'' triple instead of ``Conv-ReLU-BN''\footnote{Another reason is that placing ReLU after BN guarantees approximately $50\%$ of the activations to be nonzero, while the ratio may be unstable if ReLU is put right after the convolution operation.}. The biased distribution effect accumulates as depth increases and distorts the input signal's original distribution, which is one of several reasons that make training extremely deep neural networks difficult. In the next section we try to alleviate this problem. \subsection{Orthonormality} Norm preservation is the core idea of this section. A vector $\textbf{x}\in\Re^{d_{\textbf{x}}}$ is mapped by a linear transformation $\textbf{W}\in\Re^{d_{\textbf{x}}\times~d_{\textbf{y}}}$ to another vector $\textbf{y}\in\Re^{d_{\textbf{y}}}$, say, $\textbf{y}=\textbf{W}^{T}\textbf{x}$. If $\|\textbf{y}\|=\|\textbf{x}\|$, then we call this transformation norm-preserving. Obviously, orthonormality, not the normalization proposed by~\cite{Arpit2016Normalization} alone, is both sufficient and necessary for this equality, since \begin{equation} \|\textbf{y}\|=\sqrt{\textbf{y}^{T}\textbf{y}}=\sqrt{\textbf{x}^{T}\textbf{W} \textbf{W}^{T}\textbf{x}}=\sqrt{\textbf{x}^{T}\textbf{x}}=\|\textbf{x}\|~iff.~ \textbf{W}\textbf{W}^{T}=\textbf{I} \label{equation2} \end{equation} Given the precondition that signals in the forward pass are normalized, we can restrict the analysis of magnitude variation to the errors in the backward pass.
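As a sanity check of Eq.~\ref{equation1}, the following minimal NumPy sketch (one activation component, synthetic data, not part of our training code) applies the backward transformation directly; it illustrates the ``pseudo-normalization'' effect: the output errors are re-centered around zero, while their scale is changed by the mismatched variance factor.
\begin{verbatim}
import numpy as np

np.random.seed(0)
m, eps, gamma = 256, 1e-5, 1.0         # mini-batch size, BN epsilon, BN scale

x = 2.0 * np.random.randn(m) + 0.5     # pre-BN activations of one component
delta = gamma * np.random.randn(m)     # scaled input errors, delta_i = dL/dy_i * gamma

mu_B, var_B = x.mean(), x.var()
x_hat = (x - mu_B) / np.sqrt(var_B + eps)

# Eq. (1): dL/dx_i = (delta_i - mean(delta) - x_hat_i/m * sum_j delta_j x_hat_j)
#                    / sqrt(var_B + eps)
dx = (delta - delta.mean() - x_hat * np.dot(delta, x_hat) / m) / np.sqrt(var_B + eps)

print(dx.mean())              # close to 0: errors are re-centered
print(delta.std(), dx.std())  # scale changed by the mismatched variance factor
\end{verbatim}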
To keep the gradient with respect to the input of the previous layer norm-preserving, it is straightforward to conclude from Eq.~\ref{equation2} that we had better maintain orthonormality among the columns\footnote{Beware of the direction of propagation, which exchanges the notations in equation~\ref{equation2}; the rows and columns of the matrix are exchanged accordingly.} of a weight matrix in a specific layer during the learning process, rather than only at initialization, which equivalently makes the Jacobian ideally dynamically isometric~\cite{Saxe}. Obviously in a CNN this property cannot be ensured because of 1) the gradient updates, which make the correlation among different columns of the weights stronger as learning proceeds; 2) nonlinear operations, such as BN and ReLU, which destroy the orthonormality. However, we think it is reasonable to force the learned parameters to conform to the orthogonal group as much as possible, which can alleviate the vanishing/exploding of the magnitude of errors and the signal distortion after accumulated nonlinear transformations. The rationality of these statements and hypotheses is verified by our experiments. To adapt orthonormality to convolutional operations, we generalize the orthogonal expression with a direct modification. Let $\tilde{\textbf{W}}_{l}\in\Re^{W\times~H\times~C\times~M}$ denote a set of convolution kernels in the $l$th layer, where $W$, $H$, $C$, $M$ are the width, height, input channel number and output channel number, respectively. We replace the original weight decay regularizer with the orthonormal regularizer: \begin{equation} \frac{\lambda}{2}\sum_{l=1}^{D}\|\textbf{W}_{l}^{T}\textbf{W}_{l}-\textbf{I}\|_{F}^{2} \label{equation3} \end{equation} where $\lambda$ is the regularization coefficient, as in weight decay, $D$ is the total number of convolutional and/or fully connected layers, $\textbf{I}$ is the identity matrix and $\textbf{W}_{l}\in\Re^{f_{in}\times~f_{out}}$ with $f_{in}=W\times~H\times~C$ and $f_{out}=M$. $\|\cdot\|_{F}$ represents the Frobenius norm. In other words, equation~\ref{equation3} constrains orthogonality among the filters in one layer, which makes the learned features have minimum correlation with each other, thus implicitly reducing the redundancy and enhancing the diversity among the filters, especially those from the lower layers~\cite{Shang2016Understanding}. Besides, the orthonormality constraint provides an alternative to $L2$ regularization for exploring the weight space during learning. It offers more possibilities by restricting the parameters to an orthogonal set instead of the inside of a hypersphere. \subsection{Modulation} The dynamical isometry of signal propagation in neural networks has been mentioned and underlined several times~\cite{Arpit2016Normalization,Saxe,Ioffe2015Batch}; it amounts to maintaining the singular values of the Jacobian, $\textbf{J}=\frac{\partial\textbf{y}}{\partial\textbf{x}}$, around $1$. In this section, we analyze in detail how the singular values of the Jacobian vary through different types of layers. We omit the layer index and bias term for simplicity and clarity. In the linear case we have $\textbf{y}=\textbf{W}^{T}\textbf{x}$, which shows that achieving dynamical isometry is equivalent to keeping orthogonality, since $\textbf{J}=\textbf{W}^{T}$ and $\textbf{J}\textbf{J}^{T}=\textbf{W}^{T}\textbf{W}$.
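Before turning to the BN case, we give a minimal NumPy sketch (hypothetical kernel sizes, not part of our released code) of the per-layer orthonormal regularizer in Eq.~\ref{equation3}: the kernel tensor is flattened to an $f_{in}\times f_{out}$ matrix, the penalty is computed from its Gram matrix, and the corresponding gradient is added to the backpropagated gradient in place of the weight decay term.
\begin{verbatim}
import numpy as np

lam = 1e-4                                   # regularization coefficient lambda
W4d = 0.05 * np.random.randn(3, 3, 64, 64)   # kernels: W x H x C x M (hypothetical)

f_out = W4d.shape[-1]
W = W4d.reshape(-1, f_out)                   # f_in x f_out, f_in = W*H*C

G = W.T @ W - np.eye(f_out)                  # deviation from orthonormality

penalty = 0.5 * lam * np.sum(G ** 2)         # (lambda/2) * ||W^T W - I||_F^2
grad_W = 2.0 * lam * W @ G                   # gradient of the penalty w.r.t. W

print(penalty, np.linalg.norm(grad_W))
\end{verbatim}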
Next let us consider the activations after the normalization transformation, $\textbf{y}=\textrm{BN}_{\gamma,\beta}(\textbf{W}^{T}\textbf{x})$, where we borrow the notation from~\cite{Ioffe2015Batch}. Under the assumption that the input dimension equals the output dimension and both are $d$-dimensional vectors, the Jacobian is \begin{equation} \textbf{J}=\left[ \begin{array}{cccc} \textbf{J}_{11} & \textbf{0} & \cdots & \textbf{0}\\ \textbf{0} & \textbf{J}_{22} & \cdots & \textbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \textbf{0} & \textbf{0} & \cdots & \textbf{J}_{dd} \end{array} \right]_{md\times md} \label{equation8} \end{equation} where each $\textbf{J}_{kk}$ is an $m\times m$ square matrix, that is \begin{equation} \textbf{J}_{kk}=\left[ \begin{array}{cccc} \frac{\partial y_{1}^{(k)}}{\partial x_{1}^{(k)}} & \frac{\partial y_{1}^{(k)}} {\partial x_{2}^{(k)}} & \cdots & \frac{\partial y_{1}^{(k)}}{\partial x_{m}^{(k)}}\\ \frac{\partial y_{2}^{(k)}}{\partial x_{1}^{(k)}} & \frac{\partial y_{2}^{(k)}} {\partial x_{2}^{(k)}} & \cdots & \frac{\partial y_{2}^{(k)}}{\partial x_{m}^{(k)}}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial y_{m}^{(k)}}{\partial x_{1}^{(k)}} & \frac{\partial y_{m}^{(k)}} {\partial x_{2}^{(k)}} & \cdots & \frac{\partial y_{m}^{(k)}}{\partial x_{m}^{(k)}} \end{array} \right] \label{equation4} \end{equation} Here $\frac{\partial y_{i}^{(k)}}{\partial x_{j}^{(k)}}$ denotes the partial derivative of the output of the $i$th sample with respect to the $j$th sample in the $k$th component. The Jacobian of BN is special in that its partial derivatives depend not only on the components of the activations but also on the samples in one mini-batch. Because each component $k$ of the activations is transformed independently by BN, $\textbf{J}$ can be expressed as the block diagonal matrix in Eq.~\ref{equation8}. Again, owing to the independence among activation components, we can analyse just one of the $d$ sub-Jacobians, \eg~$\textbf{J}_{kk}$. From equation~\ref{equation1} we can obtain the entries of $\textbf{J}_{kk}$: \begin{equation} \frac{\partial y_{j}}{\partial x_{i}}=\rho\left[\Delta(i=j)-\frac{1+\hat{x}_{i}\hat{x}_{j}}{m}\right] \label{equation5} \end{equation} where $\rho=\frac{\gamma}{\sqrt{\sigma_{B}^{2}+\epsilon}}$ and $\Delta(\cdot)$ is the indicator operator. Here we still omit the index $k$, since dropping it brings no ambiguity. Eq.~\ref{equation5} makes it obvious that $\textbf{J}\textbf{J}^{T}\neq\textbf{I}$, so orthonormality does not hold after the BN operation. Now the correlation among the columns of $\textbf{W}$ is directly impacted by the normalized activations, while the corresponding weights determine these activations in turn, which results in a complicated situation. Fortunately, according to the subadditivity of matrix rank~\cite{Banerjee2014Linear}, we can deduce that \begin{equation} \textbf{J}=\textbf{P}^{T} \rho\left[ \begin{array}{ccccc} 1-\frac{\lambda_{1}}{m} & 0 & 0 & \cdots & 0\\ 0 & 1-\frac{\lambda_{2}}{m} & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{array} \right]_{m\times m} \textbf{P} \label{equation7} \end{equation} where $\textbf{P}$ is the matrix consisting of the eigenvectors of $\textbf{J}$, and $\lambda_{1}$ and $\lambda_{2}$ are the two nonzero eigenvalues of $\textbf{U}$ with $U_{ij}=1+\hat{x}_{i}\hat{x}_{j},~i=1\cdots m,~j=1\cdots m$.
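Before interpreting Eq.~\ref{equation7}, a small numerical check (NumPy, one activation component, synthetic data) of the sub-Jacobian built from Eq.~\ref{equation5} is instructive: apart from two near-zero values, its singular values all equal $\rho$, so $\textbf{J}_{kk}\textbf{J}_{kk}^{T}$ is close to $\rho^{2}\textbf{I}$.
\begin{verbatim}
import numpy as np

np.random.seed(0)
m, eps, gamma = 128, 1e-5, 1.0
x = 3.0 * np.random.randn(m) - 1.0      # pre-BN activations of one component

var_B = x.var()
x_hat = (x - x.mean()) / np.sqrt(var_B + eps)
rho = gamma / np.sqrt(var_B + eps)

# Eq. (5): entries of the sub-Jacobian J_kk
J = rho * (np.eye(m) - (1.0 + np.outer(x_hat, x_hat)) / m)

s = np.linalg.svd(J, compute_uv=False)
print(s[:3] / rho)    # ~1: the bulk of the spectrum is isometric up to the scale rho
print(s[-2:] / rho)   # ~0: the two entries responsible for the rank deficiency
\end{verbatim}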
Eq.~\ref{equation7} shows that $\textbf{J}\textbf{J}^{T}\approx\rho^{2}\textbf{I}$ \footnote{The Jacobian after ReLU amounts to multiplying $\textbf{J}$ by a scalar~\cite{Arpit2016Normalization}, which we can merge into $\rho$.}. The approximation comes from the first two diagonal entries in Eq.~\ref{equation7}, which may be close to zero. We think this kind of rank deficiency is one of the reasons that violate perfect dynamical isometry and lead to the degradation problem. Since the value of $\rho$ is determined by $\gamma$ and $\sigma_{B}$, it is bounded as long as these two variables remain stable during the learning process, which achieves the so-called quasi-isometry~\cite{Collins1998Combinatorial}. Notice that $\rho$ changes with $\gamma$ and $\sigma_{B}$, and both of them change in every iteration. Based on this observation, we propose that the scale factor $\rho$ should be adjusted dynamically instead of being fixed as in~\cite{Arpit2016Normalization,Sussillo2015Random,Saxe}. According to~\cite{Saxe}, when the nonlinearity is odd, so that the mean activity in each layer is approximately $0$, the neural population variance, or second order moment of the output errors, can capture these dynamical properties quantitatively. The ReLU nonlinearity does not satisfy this condition, but owing to the pseudo-normalization we can regard the errors propagated backward through BN as having zero mean, which makes the second order moment statistics reasonable. \section{Implementation Details} We insist on keeping orthonormality throughout the training process, so we implement this constraint both at initialization and as a regularizer. For a convolution parameter $\textbf{W}_{l}\in\Re^{f_{in}\times~f_{out}}$ of the $l$th layer, we first initialize a subset of $\textbf{W}$, namely the $f_{in}$-dimensional vector of the first output channel. Then the Gram-Schmidt process is applied to sequentially generate the next orthogonal vectors, channel by channel. Mathematically, generating $n$ mutually orthogonal vectors in a $d$-dimensional space with $n>d$ is ill-posed and, hence, impossible. So one solution is to design network structures whose kernel fan-ins and fan-outs do not violate this principle, say $f_{in}\geq f_{out}$; another candidate is the group-wise orthogonalization we propose. If $f_{in}
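As a minimal sketch of the orthonormal initialization described above (NumPy, for the case $f_{in}\geq f_{out}$; the function name is ours and purely illustrative), the flattened kernels are orthogonalized channel by channel with the Gram-Schmidt process and then reshaped back into a convolution kernel tensor.
\begin{verbatim}
import numpy as np

def orthonormal_conv_init(W, H, C, M, rng=np.random):
    # Build a W x H x C x M kernel tensor whose flattened filters
    # (columns of the f_in x f_out matrix) are orthonormal; needs f_in >= f_out.
    f_in, f_out = W * H * C, M
    assert f_in >= f_out, "cannot build more than f_in orthogonal vectors"
    A = rng.randn(f_in, f_out)
    Q = np.zeros_like(A)
    for j in range(f_out):                 # Gram-Schmidt, one output channel at a time
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q.reshape(W, H, C, M)

K = orthonormal_conv_init(3, 3, 64, 64)
Wmat = K.reshape(-1, 64)
print(np.allclose(Wmat.T @ Wmat, np.eye(64)))   # columns are orthonormal
\end{verbatim}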