\documentclass{article} \usepackage{iclr2018_conference,times}
\usepackage{hyperref}
\usepackage{url}
\usepackage{xspace}
\usepackage{graphicx}
\usepackage{color}
\usepackage{subcaption}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{listings}
\usepackage{verbatim}
\usepackage{algorithmicx}
\usepackage{algorithm}
\usepackage{algpseudocode}
\graphicspath{ {images/} }
\newcommand{\skipstartraw}{s_{\text{start}}}
\newcommand{\skipendraw}{s_{\text{end}}}
\newcommand{\skipstart}{s_{\text{start}}\xspace}
\newcommand{\skipend}{s_{\text{end}}\xspace}
\title{N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning}
\author{Anubhav Ashok
\\
Robotics Institute\\
Carnegie Mellon University\\
\texttt{bhav@cmu.edu} \\
\And
Nicholas Rhinehart \\
Robotics Institute\\
Carnegie Mellon University\\
\texttt{nrhineha@cs.cmu.edu} \\
\And
Fares Beainy\\
Volvo Construction Equipment\\
Volvo Group\\
\texttt{fares.beainy@volvo.com}
\AND
Kris M. Kitani \\
Robotics Institute\\
Carnegie Mellon University\\
\texttt{kkitani@cs.cmu.edu} \\
}
\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}
\iclrfinalcopy
\begin{document}
\maketitle
\begin{abstract}
While wider and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for \textit{learning} reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger `teacher' network as input and outputs a compressed `student' network derived from the `teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large `teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than $10\times$ for models such as ResNet-34 while maintaining similar performance to the input `teacher' network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller `teacher' networks can be used to rapidly speed up training on larger `teacher' networks.
\end{abstract}
\section{Introduction}
While carefully hand-designed deep convolutional networks continue to increase in size and in performance, they also require significant power, memory and computational resources, often to the point of prohibiting their deployment on smaller devices. As a result, researchers have developed model compression techniques based on Knowledge Distillation to compress a large (teacher) network to a smaller (student) network using various training techniques (e.g., soft output matching, hint layer matching, uncertainty modeling). Unfortunately, state-of-the-art knowledge distillation methods share a common feature: they require carefully \emph{hand-designed} architectures for the student model. Hand-designing networks is a tedious sequential process, often loosely guided by a sequence of trial-and-error based decisions to identify a smaller network architecture. This process makes it very difficult to know if the resulting network is optimal. Clearly, there is a need to develop more principled methods of identifying optimal student architectures.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.35]{full_bigtext.png}
\caption{Layer Removal Policy removes layers of Teacher network architecture (stage-1 candidates) then Layer Shrinkage Policy reduces parameters (stage-2 candidates).} \label{fig:pipeline}
\end{figure}
Towards a more principled approach to network architecture compression, we present a reinforcement learning approach to \emph{identify a compressed high-performance architecture (student) given knowledge distilled from a larger high-performing model (teacher)}. We make a key conceptual assumption that formulates the sequential process of converting a teacher network to a student network as a Markov Decision Process (MDP). Under this model, a state $s$ represents the network architecture. Clearly, the domain of the state $\mathcal{S}$ is very large since it contains every possible reduced architecture of the teacher network. A deterministic transition in this state space, $T(s'|s,a)$, is determined by selecting the action $a$, e.g., removing a convolutional filter or reducing the size of a fully connected layer. Each action will transform one architecture $s$ to another architecture $s'$. Under the MDP, the strategy for selecting an action given a certain state is represented by the policy $\pi(a|s)$, which stochastically maps a state to an action. The process of reinforcement learning is used to learn an optimal policy based on a reward function $r(s)$ defined over the state space. In our work, we define the reward function based on the \textit{accuracy} and the \textit{compression rate} of the specified architecture $s$.
A straightforward application of reinforcement learning to this problem can be very slow depending on the definition of the action space. For example, an action could be defined as removing a single filter from every layer of a convolutional neural network. Since the search space is exponential in the size of the action space and sequence length, it certainly does not scale to modern networks that have hundreds of layers.
Our proposed approach addresses the problem of scalability in part, by introducing a two-stage action selection mechanism which first selects a macro-scale ``layer removal" action, followed by a micro-scale ``layer shrinkage" action. In this way we enable our reinforcement learning process to efficiently explore the space of reduced networks. Each network architecture that is generated by our policy is then trained with Knowledge Distillation \cite{hinton2015distilling}. Figure~\ref{fig:pipeline} illustrates our proposed approach.
To the best of our knowledge, this is the first paper to provide a principled approach to the task of network compression, where the architecture of the student network is obtained via reinforcement learning. To facilitate reinforcement learning, we propose a reward function that encodes both the compression rate and the accuracy of the student model. In particular, we propose a novel formulation of the compression reward term based on a relaxation of a constrained optimization problem, which encodes the hardware-based computational budget items in the form of linear constraints.
We demonstrate the effectiveness of our approach on several network architectures and several visual learning tasks of varying difficulty (MNIST, SVHN, CIFAR-10, CIFAR-100, Caltech-256). We also demonstrate that the compression policies generalize across networks with similar architectures. In particular, we apply a policy trained on a ResNet-18 model to a ResNet-34 model and show that this greatly accelerates the reinforcement learning process.
\section{Related Work}
We first discuss methods for compressing models to a manually designed network (pruning and distillation). Towards automation, we discuss methods for automatically constructing high-performance networks, orthogonal to the task of compression.
\paragraph{Pruning:} Pruning-based methods preserve the weights that matter most and remove the redundant weights \cite{lecun1989optimal}, \cite{hassibi1993optimal}, \cite{srinivas2015data}, \cite{han2015learning}, \cite{han2015deep}, \cite{mariet2015diversity}, \cite{anwar2015structured}, \cite{guo2016dynamic}. While pruning-based approaches typically operate on the weights of the teacher model, our approach operates on a much larger search space over both model weights and model architecture. Additionally, our method offers greater flexibility as it allows the enforcement of memory, inference time, power, or other hardware constraints. This allows our approach to find the optimal architecture for the given dataset and constraints instead of being limited to that of the original model.
\paragraph{Knowledge Distillation:} Knowledge distillation is the task of training a smaller network (a ``student'') to mimic a larger ``teacher'' network, performing comparably to it \cite{bucilua2006model}, \cite{ba2014deep}, \cite{hinton2015distilling}, \cite{romero2014fitnets}, \cite{urban2016deep}.
The work of \cite{hinton2015distilling} generalized this idea by training the student to learn from both the teacher and the training data, demonstrating that this approach outperforms models trained using only training data. In \cite{romero2014fitnets}, Knowledge Distillation with an intermediate hint layer is used to train a thinner but deeper student network, with fewer parameters, that outperforms the teacher network. In previous Knowledge Distillation approaches, the student networks are hand-designed, possibly after many rounds of trial and error. In this paper, we train a policy to learn the optimal student architecture instead of hand-designing one. In a sense, we \emph{automate Knowledge Distillation}, employing the distillation method of \cite{ba2014deep} as a component of our learning process. In the experiments section we show that our learned architectures outperform those described in \cite{romero2014fitnets} and \cite{hinton2015distilling}.
\paragraph{Architecture Search:} There has been much work on exploring the design space of neural networks \cite{saxe2011random}, \cite{zoph2016neural}, \cite{baker2016designing}, \cite{ludermir2006optimization}, \cite{miikkulainen2017evolving}, \cite{real2017large}, \cite{snoek2012practical}, \cite{snoek2015scalable}, \cite{stanley2002evolving}, \cite{jozefowicz2015empirical}, \cite{murdock2016blockout}, \cite{feng2015learning}, \cite{warde2014self}, \cite{iandola2016squeezenet}. The principal aim of previous work in architecture search has been to build models that maximize performance on a given dataset.
On the other hand, our goal is to find a compressed architecture while maintaining reasonable performance on a given dataset. Our approach also differs from existing architecture search methods in that we use the teacher model as the search space for our architecture instead of constructing networks from scratch. Current methods that construct networks from scratch either operate on a very large search space, making them computationally expensive \cite{zoph2016neural}, \cite{real2017large}, \cite{miikkulainen2017evolving}, \cite{jozefowicz2015empirical}, or operate on a highly restricted search space \cite{baker2016designing}, \cite{snoek2015scalable}. Our approach instead leverages the idea that since the teacher model achieves high accuracy on the dataset, it already contains the components required to solve the task well and is therefore a suitable search space for the compressed architecture.
\section{Approach}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{.35\textwidth}
\includegraphics[width=\textwidth]{images/layer_removal_2}
\caption{} \label{subfig:layer_removal_compressor}
\end{subfigure}
\begin{subfigure}[b]{.35\textwidth}
\includegraphics[width=\textwidth]{images/layer_shrinkage.pdf}
\caption{} \label{subfig:layer_shrinkage_compressor}
\end{subfigure}
\caption{\textbf{a)} Layer removal policy network, \textbf{b)} Layer shrinkage policy network} \label{fig:compressor}
\end{figure}
Our goal is to learn an optimal compression strategy (policy) via reinforcement learning that takes a Teacher network as input and systematically reduces it to output a small Student network.
\subsection{Markov Decision Process}
We formulate the sequential process of finding a reduced architecture as a sequential decision making problem. The decision process is modeled as a Markov Decision Process (MDP). Formally, the MDP is defined as the tuple $\mathcal{M} = \{ \mathcal{S}, \mathcal{A}, T, r, \gamma\}$.
\textbf{States:} $\mathcal{S}$ is the state space, a finite set consisting of all possible reduced network architectures that can be derived from the Teacher model. For example, a VGG network \cite{simonyan2014very} represents the state $s\in\mathcal{S}$ (the initial state) and by removing one convolutional filter from the first layer we obtain a new network architecture $s'$.
\textbf{Actions:} $\mathcal{A}$ is a finite set of actions that can transform one network architecture into another network architecture. In our approach there are two classes of actions: layer removal actions and layer parameter reduction actions. These actions are further described in Sections \ref{sec:layer_removal} and \ref{sec:layer_shrinkage}.
\textbf{Transition Function:} $T: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the state transition dynamic. Here, $T$ is deterministic since an action $a$ always transforms a network architecture $s$ to the resulting network architecture $s'$ with probability one.
\textbf{Discount Factor:} $\gamma$ is the discount factor. We use $\gamma = 1$ so that all rewards contribute equally to the final return.
\textbf{Reward:} $r: \mathcal{S} \rightarrow \mathbb{R}$ is the reward function. The reward $r(s)$ can be interpreted as a score associated with a given network architecture $s$. Note that we define the reward to be 0 for intermediate states, which represent ``incomplete" networks, and only compute a non-trivial reward for the final state. The reward function is described in detail in Section \ref{sec:reward_signal}.
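As a concrete illustration, the sketch below shows one way this formulation could be represented in code: a state is a list of layer configurations derived from the teacher, and each action deterministically maps one architecture to another. The function and field names are illustrative assumptions, not our implementation.
\begin{lstlisting}[language=Python]
# Illustrative sketch of the MDP components (assumed names, not the paper's code).
import copy

def remove_layer(state, idx):
    """Layer-removal action: deterministic transition that drops one layer."""
    next_state = copy.deepcopy(state)
    del next_state[idx]
    return next_state

def shrink(state, idx, key, factor):
    """Layer-shrinkage action: attenuate one configuration variable by `factor`."""
    next_state = copy.deepcopy(state)
    next_state[idx][key] = max(1, int(round(next_state[idx][key] * factor)))
    return next_state

# Initial state s_0: the teacher architecture (here a toy three-layer network).
teacher = [
    {"type": "conv", "kernel": 3, "stride": 1, "pad": 1, "out": 64},
    {"type": "conv", "kernel": 3, "stride": 1, "pad": 1, "out": 128},
    {"type": "fc", "out": 10},
]
# One possible trajectory: remove the second conv layer, then halve the first.
student = shrink(remove_layer(teacher, 1), 0, "out", 0.5)
\end{lstlisting}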
\subsection{Student-Teacher Reinforcement Learning}
Under this MDP, the task of reinforcement learning is to learn an optimal policy $\pi : \mathcal{S} \rightarrow \mathcal{A}$, such that it maximizes the expected total reward, with the total reward given by:
\begin{align}
R (\vec{s}) &= \sum_{i=0}^{L=|\vec{s}|} r(s_i) = r(s_L).
\end{align}
We take a policy gradient reinforcement learning approach and iteratively update the policy based on sampled estimates of the reward. The design of the action space is critical for allowing the policy gradient method to effectively search the state space. If the actions are selected to be very incremental, a long sequence of actions would be needed to make a significant change to the network architecture, making credit assignment difficult. To address this issue, we propose a two stage reinforcement learning procedure. In the first stage a policy selects a sequence of actions deciding whether to keep or remove each layer of the teacher architecture. In the second stage, a different policy selects a sequence of discrete actions corresponding to the magnitude by which to attenuate configuration variables of each remaining layer. In this way, we are able to efficiently explore the state space to find the optimal student network.
\label{alg:reinforce}
\begin{algorithm}[H]
\small
\caption{Student-Teacher Reinforcement Learning}
\begin{algorithmic}[1]
\Procedure{Student-Teacher RL}{$\mathcal{S}, \mathcal{A}, T, r, \gamma$}
\State $s_0 \leftarrow$ Teacher
\For{$i=1$ to $N_1$} \Comment{Layer removal}
\For{$t=1$ to $L_1$}
\State $a_t \sim \pi_{\text{remove}}(s_{t-1}; \theta_{\text{remove}, i-1})$
\State $s_{t} \leftarrow T(s_{t-1}, a_t)$
\EndFor
\State $R \leftarrow r(s_{L_1})$
\State $\theta_{\text{remove}, i} \leftarrow \nabla_{\theta_{\text{remove}, i-1}}J(\theta_{\text{remove}, i-1})$ \Comment{(Eq. \ref{eq:gradient})}
\EndFor
\State $s_0 \leftarrow \text{Stage-1 Candidate}$
\For{$i=1$ to $N_2$} \Comment{Layer shrinkage}
\For{$t=1$ to $L_2$}
\State $a_t \sim \pi_{\text{shrink}}(s_{t-1}; \theta_{\text{shrink}, i-1})$
\State $s_{t} \leftarrow T(s_{t-1}, a_t)$
\EndFor
\State $R \leftarrow r(s_{L_2})$
\State $\theta_{\text{shrink}, i} \leftarrow \nabla_{\theta_{\text{shrink}, i-1}}J(\theta_{\text{shrink}, i-1})$ \Comment{(Eq. \ref{eq:gradient})}
\EndFor
\State \textbf{Output: } Compressed model
\EndProcedure
\end{algorithmic}
\end{algorithm}
A sketch of the algorithm is given in Algorithm \ref{alg:reinforce}.
For both layer removal and shrinkage policies, we repeatedly sample architectures and update the policies based on the reward achieved by the architectures. We now describe the details of the two stages of student-teacher reinforcement learning.
\subsubsection{Layer removal} \label{sec:layer_removal}
In the layer removal stage, actions \(a_t\) correspond to the binary decision to keep or remove a layer. The length of the trajectory for layer removal is \(T=L\), the number of layers in the network. At each step $t$ of layer removal, the Bidirectional LSTM policy (See Figure~\ref{subfig:layer_removal_compressor}) observes the hidden states, $h_{t-1}, h_{t+1}$, as well as information $x_t$ about the current layer: $\pi_{\text{remove}}(a_t | h_{t-1}, h_{t+1}, x_t)$. Information about the current layer $l$ is given as
\[x_t = (l, k, s, p, n, \skipstartraw, \skipendraw),\]
where \(l\) is the layer type, \(k\) the kernel size, \(s\) the stride, \(p\) the padding, and \(n\) the number of outputs (filters or connections). To model more complex architectures such as ResNet, \(s_{\text{start}}\) and \(s_{\text{end}}\) inform the policy network about skip connections. For a layer inside a block containing a skip connection, \(s_{\text{start}}\) is the number of layers back at which the skip connection begins and \(s_{\text{end}}\) is the number of layers remaining until the end of the block.
Additionally, note that although actions are sampled stochastically from the outputs at each time step, the hidden states that are passed forward serve as a sufficient statistic for \(x_0, a_0, \ldots, x_{t-1}, a_{t-1}\) \cite{wierstra2010recurrent}.
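The following PyTorch sketch shows one way such a bidirectional-LSTM removal policy could be implemented; the class name and exact sizes are illustrative assumptions rather than the code used in our experiments.
\begin{lstlisting}[language=Python]
# Hedged sketch of a layer-removal policy: a bidirectional LSTM reads one
# feature vector x_t per teacher layer and samples a keep/remove decision.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class RemovalPolicy(nn.Module):
    def __init__(self, feat_dim=7, hidden=30):
        super().__init__()
        # x_t = (layer type, kernel, stride, padding, n_outputs, s_start, s_end)
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)    # logits for {remove, keep}

    def forward(self, layer_feats):             # shape (1, L, feat_dim)
        h, _ = self.lstm(layer_feats)
        dist = Categorical(logits=self.head(h))
        actions = dist.sample()                  # (1, L) binary decisions
        return actions, dist.log_prob(actions)   # log-probs feed REINFORCE

policy = RemovalPolicy()
actions, log_probs = policy(torch.randn(1, 16, 7))  # e.g. a 16-layer teacher
\end{lstlisting}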
\subsubsection{Layer shrinkage} \label{sec:layer_shrinkage}
The length of the trajectory for layer shrinkage is \(T = \sum_{l=1}^{L}H_l\), where \(H_l\) is the number of configuration variables for layer \(l\). At each step $t$ of layer shrinkage, the policy observes the hidden state $h_{t-1}$, the previously sampled action $a_{t-1}$ and current layer information $x_t$: $\pi_{\text{shrink}}(a_t | a_{t-1}, h_{t-1}, x_t)$. The parameterization of $x_t$ is similar to layer removal except that the previous action is appended to the representation in an autoregressive manner (See Figure~\ref{subfig:layer_shrinkage_compressor}). The action space for layer shrinkage is \(a_t \in \{0.1, 0.2, \ldots, 1\}\) (each action corresponds to how much to shrink a layer parameter), and an action is produced for each configurable variable of each layer. Examples include kernel size, padding, and number of output filters or connections.
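A corresponding sketch of the autoregressive shrinkage policy is shown below; again, the names and sizes are illustrative assumptions.
\begin{lstlisting}[language=Python]
# Hedged sketch of a layer-shrinkage policy: an autoregressive LSTM samples one
# of ten discrete shrink factors for every configurable variable of every layer.
import torch
import torch.nn as nn
from torch.distributions import Categorical

SHRINK_FACTORS = [0.1 * i for i in range(1, 11)]   # {0.1, 0.2, ..., 1.0}

class ShrinkagePolicy(nn.Module):
    def __init__(self, feat_dim=7, hidden=50, n_actions=10):
        super().__init__()
        # input at step t: variable features x_t plus the previous action a_{t-1}
        self.cell = nn.LSTMCell(feat_dim + 1, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, var_feats):                  # (T, feat_dim)
        h = c = torch.zeros(1, self.cell.hidden_size)
        prev, actions, log_probs = 0.0, [], []
        for x_t in var_feats:
            inp = torch.cat([x_t, torch.tensor([prev])]).unsqueeze(0)
            h, c = self.cell(inp, (h, c))
            dist = Categorical(logits=self.head(h))
            a = dist.sample()
            log_probs.append(dist.log_prob(a))
            prev = SHRINK_FACTORS[a.item()]        # autoregressive feedback
            actions.append(prev)
        return actions, torch.stack(log_probs)
\end{lstlisting}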
\subsection{Reward function}
\label{sec:reward_signal}
The design of the reward function plays a critical role in learning the policies. A poorly designed reward that provides no discrimination between good and bad student architectures prevents the policies from learning the trade-offs in architecture space. The objective of model compression is to maximize compression while maintaining high accuracy. Since there is no benefit in producing highly compressed models with poor performance, we want to penalize a model with high compression and low accuracy more harshly than one with low compression and high accuracy. Furthermore, we would like a general reward function that does not depend on dataset- or model-specific hyperparameters. Additional discussion on the design of the reward function is provided in the appendix.
In our approach, we define the reward function as follows:
\begin{align*}
R &= R_c \cdot R_a\\
&= C(2-C) \cdot \frac{A}{A_{\text{teacher}}}
\end{align*}
where \(C\) is the relative compression ratio of the student model, \(A\) is the validation accuracy of the student model, and \(A_{\text{teacher}}\) is the validation accuracy of the teacher model, treated as a constant. \(R_c\) and \(R_a\) denote the compression and accuracy rewards, respectively. We compute the reward as a product of the compression and accuracy rewards since we want the reward to scale with both quantities jointly.
The compression reward, \(R_c = C(2-C)\), is computed using a non-linear function that biases the policy towards producing models that maintain accuracy while optimizing for compression. The relative compression \(C \in [0, 1)\) is defined in terms of the ratio of trainable parameters of each model: \(C = 1 - \frac{\#\text{params}(\text{student})}{\#\text{params}(\text{teacher})}\). It is noted here that other compression methods that use quantization or coding define compression ratio in terms of number of bits instead of parameters. The accuracy reward, \(R_a\), is defined with respect to the teacher model as \(R_a = \frac{A}{A_{\text{teacher}}}\), where \(A \in [0, 1]\) refers to the validation accuracy of the student model and \(A_{\text{teacher}}\) refers to the validation accuracy of the teacher model. We note that both accuracy and compression rewards are normalized with respect to the teacher and thus do not require additional hyperparameters to perform task-specific weighting.
Lastly, the policies may produce degenerate architectures; in such cases, a reward of -1 is assigned (details in appendix).
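For clarity, a minimal sketch of this reward computation is given below; the parameter counts and the degenerate-case flag are assumed to be supplied by the surrounding training loop.
\begin{lstlisting}[language=Python]
# Minimal sketch of the reward R = R_c * R_a described above.
def reward(student_params, teacher_params, student_acc, teacher_acc,
           degenerate=False):
    if degenerate:                                 # empty or untrainable architecture
        return -1.0
    C = 1.0 - student_params / teacher_params      # relative compression, in [0, 1)
    R_c = C * (2.0 - C)                            # compression reward
    R_a = student_acc / teacher_acc                # accuracy reward
    return R_c * R_a

# e.g. a 10x smaller student that matches the teacher's accuracy:
# C = 0.9, R_c = 0.99, R_a = 1.0, so R = 0.99
print(reward(2.0e6, 20.0e6, 0.92, 0.92))
\end{lstlisting}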
\subsubsection{Constraints as Rewards} \label{sec:constrained_reward_signal}
Our approach allows us to incorporate pre-defined hardware or resource budget constraints by rewarding architectures that meet the constraints and discouraging those that do not. Formally, our constrained optimization problem is
\begin{flalign*}
\max E_{a_{1:T}} [R] \\
\text{subject to } Ax \le b,
\end{flalign*}
where $A$ and $b$ form our constraints, and $x$ is vector of constrained variables. We relax these hard constraints by redefining our reward function as:
\[
R =
\begin{cases}
R_a \cdot R_c & \text{if $Ax \le b$} \\
-1 & \text{otherwise}.
\end{cases}
\]
The introduction of the non-smooth penalty may result in reduced exploration of the search space and hence convergence to a worse local minimum. To encourage early exploration, we gradually incorporate the constraints over time:
\[
R =
\begin{cases}
R_a \cdot R_c & \text{if $Ax \le b$} \\
\epsilon_t (R_a \cdot R_c + 1) - 1& \text{otherwise},
\end{cases}
\]
where \(\epsilon_t \in [0, 1]\) monotonically decreases with \(t\) and \(\epsilon_0 = 1\).
Since it is possible to incorporate a variety of constraints, such as memory, inference time, power, accuracy, or label-wise accuracy, our method is flexible enough to produce models that are practically viable in a diversity of settings. This is in contrast to conventional model compression techniques, which require many manual repetitions of the algorithm to find networks that meet the constraints while optimally balancing the accuracy-size tradeoff.
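A sketch of this annealed constraint penalty is shown below; the linear decay schedule for \(\epsilon_t\) is one possible choice and is not prescribed by the method.
\begin{lstlisting}[language=Python]
# Hedged sketch of the relaxed constrained reward with an annealed penalty.
import numpy as np

def constrained_reward(base_reward, A, x, b, t, t_max):
    eps_t = max(0.0, 1.0 - t / t_max)            # monotonically decreasing, eps_0 = 1
    if np.all(A @ x <= b):                       # all constraints A x <= b satisfied
        return base_reward                        # = R_a * R_c
    return eps_t * (base_reward + 1.0) - 1.0     # annealed penalty, tends to -1

# Example: a single constraint "#params <= 1M" that the candidate violates.
A, b, x = np.array([[1.0]]), np.array([1.0e6]), np.array([1.3e6])
print(constrained_reward(0.8, A, x, b, t=0,   t_max=100))   # 0.8 (no penalty yet)
print(constrained_reward(0.8, A, x, b, t=100, t_max=100))   # -1.0 (hard penalty)
\end{lstlisting}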
\subsection{Optimization}
We now describe the optimization procedure for each of our stochastic policies, \(\pi_\text{remove}\) and \(\pi_\text{shrink}\). The procedure is the same for each policy, so we use $\pi$ in what follows. Each policy network is parameterized by its own \(\theta\).
Our objective function is the expected reward over all sequences of actions \(a_{1:T}\), i.e.:
\[
J(\theta) = E_{a_{1:T} \sim P_{\theta}}(R)
\]
We use the REINFORCE policy gradient algorithm from Williams \cite{williams1992simple} to train both of our policy networks.
{\small
\begin{flalign*}
\nabla_\theta J(\theta) &= \nabla_\theta E_{a_{1:T} \sim P_{\theta}}(R) \\
&= \sum_{t=1}^T E_{a_{1:T} \sim P_{\theta}} [\nabla_\theta \log P_\theta(a_t|a_{1:(t-1)}) R] \\
&\approx \frac{1}{m}\sum_{k=1}^m \sum_{t=1}^T [\nabla_\theta \log P_\theta(a_t|h_t) R_k]
\end{flalign*}
}where $m$ is the number of rollouts for a single gradient update, $T$ is the length of the trajectory, $P_{\theta}(a_t|h_t)$ is the probability of selecting action $a_t$ given the hidden state $h_t$ under the current stochastic policy parameterized by $\theta$, and $R_k$ is the reward of the $k^{\text{th}}$ rollout.
The above is an unbiased estimate of our gradient, but has high variance. A common trick is to use a state-independent baseline function to reduce the variance:
{\small
\begin{equation}
\nabla_\theta J(\theta) \approx \frac{1}{m}\sum_{k=1}^m \sum_{t=1}^T [\nabla_\theta \log P_\theta(a_t|h_t) (R_k-b)] \label{eq:gradient}
\end{equation}
}We use an exponential moving average of the previous rewards as the baseline $b$. We also tested an Actor-Critic policy; while it gave a minor improvement in stability, it failed to explore as effectively in some cases, resulting in a worse locally optimal solution. Details are in the appendix.
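A compact sketch of the update in Eq.~\ref{eq:gradient}, as it might be written in PyTorch, is given below; the decay rate of the moving-average baseline is an illustrative choice.
\begin{lstlisting}[language=Python]
# Hedged sketch of the policy-gradient update with a moving-average baseline.
import torch

def reinforce_step(optimizer, rollout_log_probs, rollout_rewards, baseline,
                   decay=0.9):
    # rollout_log_probs[k]: sum over t of log pi(a_t | h_t) for rollout k (a tensor)
    # rollout_rewards[k]:   scalar reward R_k of rollout k
    losses = [-lp * (R - baseline)
              for lp, R in zip(rollout_log_probs, rollout_rewards)]
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()                                # negated estimate of the gradient
    optimizer.step()
    # update the exponential moving average of rewards used as the baseline b
    batch_mean = sum(rollout_rewards) / len(rollout_rewards)
    return decay * baseline + (1.0 - decay) * batch_mean
\end{lstlisting}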
\subsection{Knowledge distillation}
\label{sec:knowledge_distillation}
Student models are trained using data labelled by a teacher model. Instead of using hard labels, we use the un-normalized log probability values (the logits) of the teacher model. Training using the logits helps to incorporate \textit{dark knowledge} \cite{hinton2015distilling} that regularizes students by placing emphasis on the relationships learned by the teacher model across all of the outputs.
As in \cite{ba2014deep}, the student is trained to minimize the mean $L_2$ loss on the training data $\left\{(x^{(i)}, z^{(i)})\right\}_{i=1}^N$, where \(z^{(i)}\) are the logits of the teacher model:
\[
\mathcal{L}_{\text{KD}}(f(x;W), z) = \frac{1}{N} \sum_{i} || f(x^{(i)}; W) - z^{(i)} ||_2^2
\]
where $W$ represents the weights of the student network and \(f(x^{(i)}; W)\) is the model prediction on the \(i^{\text{th}}\) training sample.
Final student models were trained to convergence with both hard and soft labels using the following loss function:
\[
\mathcal{L}(W) = \mathcal{L}_{\text{hard}}(f(x; W), y_{\text{true}}) + \lambda \cdot \mathcal{L}_{\text{KD}}(f(x; W), z)
\]
where \(\mathcal{L}_{\text{hard}}\) is the loss function used for training with hard labels (in our case, cross-entropy) and \(y_{\text{true}}\) are the ground-truth labels.
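The two losses above can be sketched in PyTorch as follows; \(\lambda\) (here \texttt{lambda\_kd}) is a hyperparameter, and the mean-squared-error reduction matches the mean $L_2$ objective up to a constant factor.
\begin{lstlisting}[language=Python]
# Hedged sketch of the distillation losses: L2 regression onto teacher logits
# for candidate students, and a combined hard/soft loss for the final student.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits):
    # mean squared distance between student and teacher logits
    return F.mse_loss(student_logits, teacher_logits)

def combined_loss(student_logits, teacher_logits, targets, lambda_kd=1.0):
    hard = F.cross_entropy(student_logits, targets)      # L_hard on true labels
    return hard + lambda_kd * kd_loss(student_logits, teacher_logits)
\end{lstlisting}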
\begin{table}[t]
\caption{Summary of Compression results.}
\label{tab:summary}
\centering
\tabcolsep=0.1cm
\scalebox{1}{
\begin{tabular}{llllll}
\toprule
\multicolumn{6}{c}{MNIST} \\
\toprule
Architecture & & Acc. & \#Params & \(\Delta\) Acc. & Compr.\\
\midrule
VGG-13 &Teacher & 99.54\% & 9.4M & --- & ---\\
&Student (Stage1) & 99.55\% & 73K & +0.01\% & 127x\\
\toprule
\multicolumn{6}{c}{CIFAR-10} \\
\midrule
VGG-19 & Teacher & 91.97\% & 20.2M & --- & ---\\
& Student (Stage1) & 92.05\% & 1.7M & +0.08\% & 11.8x\\
& Student (Stage1+Stage2) & 91.64\% & 984K & -0.33\% & 20.53x\\
\midrule
ResNet-18 & Teacher & 92.01\% & 11.17M & --- & ---\\
& Student (Stage1) & 91.97\% & 2.12M & -0.04\% & 5.26x\\
& Student (Stage1+Stage2) & 91.81\% & 1.00M & -0.2\% & 11.10x\\
\midrule
ResNet-34 & Teacher & 92.05\% & 21.28M & --- & ---\\
&Student (Stage1) & 93.54\% & 3.87M & +1.49\% & 5.5x\\
&Student (Stage1+Stage2) & 92.35\% & 2.07M & +0.30\% & 10.2x \\
\toprule
\multicolumn{6}{c}{SVHN} \\
\midrule
ResNet-18 & Teacher & 95.24\% & 11.17M & --- & ---\\
& Student (Stage1) & 95.66\% & 2.24M & +0.42\% & 4.97x\\
& Student (Stage1+Stage2) & 95.38\% & 564K & +0.18\% & 19.8x\\
\toprule
\multicolumn{6}{c}{CIFAR-100} \\
\midrule
ResNet-18& Teacher & 72.22\% & 11.22M & --- & ---\\
& Student (Stage1) & 69.64\% & 4.76M & -2.58\% & 2.35x\\
& Student (Stage1+Stage2) & 68.01\% & 2.42M & -4.21\% & 4.64x\\
ResNet-34& Teacher & 72.86\% & 21.33M & --- & ---\\
& Student (Stage1) & 70.11\% & 4.25M & -2.75\% & 5.02x\\
\toprule
\multicolumn{6}{c}{Caltech256} \\
\midrule
ResNet-18& Teacher & 47.65\% & 11.31M & --- & ---\\
& Student (Stage1) & 44.71\% & 3.62M & -2.94\% & 3.12x\\
\bottomrule
\end{tabular}
}
\vspace{-3mm}
\end{table}
\section{Experiments}
In the following experiments, we first show that our method is able to find highly compressed student architectures with high performance on multiple datasets and teacher architectures, often exceeding performance of the teacher model. We compare the results obtained to current baseline methods of model compression, showing competitive performance. Then we demonstrate the viability of our method in highly resource constrained conditions by running experiments with strong model size constraints. Finally, we show that it is possible to rapidly speed up training when using larger teacher models by reusing policies that are pretrained on smaller teacher models.
\subsection{Datasets}
\textbf{MNIST}
The MNIST \cite{lecun1998mnist} dataset consists of $28\times28$ pixel grey-scale images depicting handwritten digits. We use the standard 60,000 training images and 10,000 test images for experiments. Although MNIST is easily solved with smaller networks, we used high-capacity models (e.g., VGG-13) to show that the policies learned by our approach are able to effectively and aggressively remove redundancies from large network architectures.
\textbf{CIFAR-10}
The CIFAR-10 \cite{krizhevsky2009learning} dataset consists of 10 classes of objects and is divided into 50,000 train and 10,000 test images (32x32 pixels). This dataset provides an incremental level of difficulty over the MNIST dataset, using multi-channel inputs to perform model compression.
\textbf{SVHN}
The Street View House Numbers \cite{netzer2011reading} dataset contains 32x32 color digit images, with 73,257 digits for training and 26,032 digits for testing. This dataset is slightly larger than CIFAR-10 and allows us to observe performance on a wider breadth of visual tasks.
\textbf{CIFAR-100}
To further test the robustness of our approach, we evaluated it on the CIFAR-100 dataset. CIFAR-100 is a harder dataset with 100 classes instead of 10, but the same amount of data, 50,000 train and 10,000 test images (32x32). Since there is less data per class, there is a steeper size-accuracy tradeoff. We show that our approach is able to produce solid results despite these limitations.
\textbf{Caltech-256}
To test the effectiveness of our approach in circumstances where data is \textit{sparse}, we run experiments on the Caltech-256 dataset \cite{griffin2007caltech}. This dataset has more classes and less data per class than CIFAR-100: 256 classes and a total of 30,607 images (224x224). We trained the networks from scratch instead of using pretrained weights in order to standardize our comparisons across datasets.
\subsection{Training details}
In the following experiments, student models were trained as described in Section \ref{sec:knowledge_distillation}. We observed empirically that training each candidate for 5 epochs was sufficient to compare performance.
The layer removal and layer shrinkage policy networks were trained using the Adam optimizer with a learning rate of 0.003 and 0.01 respectively. Both recurrent policy networks were trained using the REINFORCE algorithm (batch size=5) with standard backpropagation through time. A grid search was done to determine the ideal learning rate and batch size (details in appendix).
\subsection{Compression Experiments}
In this section we evaluate the ability of our approach to learn policies to find compressed architectures without any constraints. In the following experiments, we expect that the policies learned by our approach will initially start out as random and eventually tend towards an optimal size-accuracy trade-off which results in a higher reward. Definitions of architectures are available in the appendix.
\begin{figure}[tbh!]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{mnist10runs}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{mnistvgg5runs}
\end{subfigure}\\
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{mnistmodel_param}
\caption{CONV4}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{mnistvgg_param}
\caption{VGG-13}
\end{subfigure}
\label{fig:vgg_mnist_1}
\caption{Student learning on \textbf{MNIST}. Reward, Accuracy, Compression vs Iteration (\textbf{Top}: Stage 1, \textbf{Bottom}: Stage 2)} \label{fig:mnist_plots}
\end{figure}
\vspace{1mm}
\textbf{MNIST} To evaluate the compression performance we use (1) a \textbf{Conv4} network consisting of 4 convolutional layers and (2) a high capacity \textbf{VGG-13} network.
Figure~\ref{fig:mnist_plots} shows the results of our compression approach for each teacher network. The lines represent the compression (blue), accuracy (green) and reward (orange). The y-axis represents the score of those quantities, between 0 and 1. The x-axis is the iteration number. We also highlight the largest and smallest models with red circles to give a sense of the magnitude of compression.
This experiment appears to confirm our original expectation that the policies would improve over time.
\begin{figure*}[tbh!]
\centering
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{cifarvgg_5runs}
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{resnet1810runs}
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{resnet34_5runs}
\end{subfigure}
\\
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{cifarvgg_param}
\caption{VGG-19}
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{resnet18_param3runs}
\caption{ResNet-18}
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{resnet34_param}
\caption{ResNet-34}
\end{subfigure}
\caption{Student learning on \textbf{CIFAR-10}. Reward, Accuracy, Compression vs Iteration (\textbf{Top}: Stage 1, \textbf{Bottom}: Stage 2)}
\label{fig:cifar}
\end{figure*}
\textbf{CIFAR-10} On the CIFAR-10 dataset we ran experiments using the following teacher networks: (1) \textbf{VGG-19}, (2) \textbf{ResNet-18} and (3) \textbf{ResNet-34}. The experimental results are shown in Figure~\ref{fig:cifar}. It is interesting to note that on CIFAR-10, our learned student networks perform almost as well as or better than the teacher networks despite a roughly 10x compression rate.
\textbf{SVHN} On the SVHN dataset, we ran experiments using a \textbf{ResNet-18} network as the teacher model. We observed that the reward and compression steadily increased while the accuracy remained stable, mirroring the results on CIFAR-10. This is a promising indication that our approach works for a breadth of tasks and is not dataset-specific. Results are in the appendix.
\begin{figure}[th!]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{cifar100resnet18}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{cifar100resnet34}
\end{subfigure}
\\
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{cifar100_param}
\caption{ResNet-18}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{cifar100_resnet34_param}
\caption{ResNet-34}
\label{subfig:cifar100_stage2}
\end{subfigure}
\caption{Student learning on \textbf{CIFAR-100}.}
\label{fig:cifar100}
\end{figure}
\textbf{CIFAR-100} We also verified our approach on a harder dataset, CIFAR-100, to show how it performs with less data per class (Figure \ref{fig:cifar100}). Considering the largely reduced number of parameters, the compressed network achieves reasonably high accuracy. A notable aspect of many of the final compressed models is that ReLU layers within residual blocks were removed. Another interesting result is that the compressed ResNet-34 student model outperforms the ResNet-18 model despite having fewer parameters. This can likely be explained by the increased number of residual blocks in the ResNet-34 model.
\textbf{Caltech-256} The Caltech-256 experiments (appendix) show the performance of our approach when training data is scarce. We would like to verify that our approach does not overly compress the network by overfitting to the small number of training examples. As with the other experiments, the policies appear to learn to maximize reward over time, although the positive trend is not as pronounced due to the lack of training data. This is expected, since less data means the reward signal is less robust to sources of noise, which in turn affects training of the policy.
\subsection{Baselines}
We compare the performance of our approach to current model compression methods, namely pruning and Knowledge Distillation (with hand-designed models). We note here that compression rate is defined as the ratio of number of parameters rather than number of bits, which some other compression methods (quantization, coding) use. To provide a fair comparison, the baselines use the same trained teacher models as our method.
\subsubsection{Pruning}
\begin{table}[tbh!]
\caption{Pruning (Baseline)}
\centering
\tabcolsep=0.1cm
\scalebox{1}{
\begin{tabular}{lllll}
\toprule
Model & Acc. & \#Params & Compr. & $\Delta$ Acc. \\
\midrule
Teacher (MNIST/VGG-13) & 99.54\% & 9.4M & --- & ---\\
Pruning & 99.12\% & 162K & 58x & -0.42\%\\
Ours & \textbf{99.55\%} & \textbf{73K} & \textbf{127x} & \textbf{+0.01\%}\\
\midrule
Teacher (CIFAR-10/VGG-19) & 91.97\% & 20.2M & --- & ---\\
Pruning & 91.06\% & 2.3M & 8.7x & -0.91\%\\
Ours & \textbf{92.05\%} & \textbf{1.7M} & \textbf{11.8x} & \textbf{+0.08\%}\\
\bottomrule
\end{tabular}
}
\label{tab:pruning_baseline}
\end{table}
We compare our method to pruning, which is a model compression approach that operates directly on the weight space of a network, removing redundant weights or filters. We perform pruning based on \cite{molchanov2016pruning}, which removes filters using a greedy criterion-based approach and then finetunes the network. With pruning, the performance of the final model can vary depending on the degree to which it was pruned. To ensure a fair comparison, we stop pruning when (1) accuracy drops more than 1\% below that of the student model obtained by our method, or (2) the number of parameters falls below that of our method. Pruning is repeated 5 times to control for variance, and the best performing model is reported.
The results of this experiment, reported in Table \ref{tab:pruning_baseline}, show that while the pruned models show good compression rates, our approach outperforms this baseline on both datasets. These results could indicate that operating on the architecture space of the model might result in more consistent results than using heuristics to operate on the weight space directly.
\subsubsection{Knowledge Distillation}
\begin{table}[tbh!]
\caption{Knowledge distillation with hand designed models (Baseline)}
\centering
\tabcolsep=0.1cm
\scalebox{1}{
\begin{tabular}{lllll}
\toprule
Model & Acc. & \#Params & Compr. & $\Delta$ Acc.\\
\midrule
Teacher (SVHN/ResNet-18) & 95.24\% & 11.17M & --- & ---\\
SqueezeNet1.1 & 89.34\% & 727K & 15x & -5.90\%\\
Ours & \textbf{95.38\%} & \textbf{564K} & \textbf{19.8x} & \textbf{+0.18\%}\\
\midrule
Teacher (CIFAR-10/ResNet-18) & 92.01\% & 11.17M & --- & ---\\
FitNet-4 & 91.33\% & 1.2M & 9.3x & -0.63\%\\
VGG-small & 83.93\% & 1.06M & 10.5x & -8.08\% \\
Ours & \textbf{91.81\%} & \textbf{1.00M} & \textbf{11.0x} & \textbf{-0.20\%} \\
\bottomrule
\end{tabular}
}
\label{tab:kd_baseline}
\end{table}
We also tested our hypothesis that hand-designed models may not be optimal for Knowledge Distillation. We compare models generated by our method to hand-designed models that contain a similar number of parameters. We perform experiments with three hand-designed model architectures: FitNet-4, SqueezeNet, and a reduced 10-layer network based on VGG (VGG-small). These networks were trained to convergence with Knowledge Distillation on the CIFAR-10 and SVHN datasets.
For the implementation of FitNet-4 (17 layers), we used the same model architecture described in \cite{mishkin2015all} with the ReLU activation and Xavier initialization. That paper reported a baseline accuracy of 90.63\% when trained from scratch, with 1.2M parameters (Table 3 in \cite{mishkin2015all}). For SqueezeNet, we implemented the 1.1 version described in \cite{iandola2016squeezenet}, which contained 727K parameters after adapting it to CIFAR-10. We benchmarked VGG-small and FitNet on the CIFAR-10 dataset and SqueezeNet on the SVHN dataset in order to provide a fair comparison with our best models in terms of the number of parameters.
From the results reported in Table \ref{tab:kd_baseline}, we observe that our method performs better than the hand-designed models on both datasets despite containing fewer parameters. The CIFAR-10 results seem to indicate that model selection is an important factor in Knowledge Distillation. Our model and the FitNet-4 model both outperform the VGG-small model, further confirming our hypothesis that hand-designing models may not be the optimal approach for use with Knowledge Distillation.
\subsection{Compression with Size Constraints}
\begin{table}[tbh!]
\caption{Model Compression with Size Constraints}
\centering
\tabcolsep=0.1cm
\scalebox{1}{
\begin{tabular}{lllll}
\toprule
Model & Acc. & \#Params & Compr. & Constr.\\
\midrule
Teacher (MNIST/VGG-13) & \textbf{99.54\%} & 9.4M & 1x & N/A\\
Student (Stage 1 \& 2) & \textbf{98.91}\% & \textbf{17K} & \textbf{553x}&20K\\
\midrule
Teacher (CIFAR-10/VGG-19) & \textbf{91.97\%} & 20.2M & 1x & N/A\\
Student (Stage 1 \& 2) & \textbf{90.8}\% & \textbf{573K} & \textbf{35x} & 1M\\
\bottomrule
\end{tabular}
}
\label{tab:const}
\end{table}
While the experiments to this point used no explicit constraints, in this experiment, we add a size constraint in terms of the number of parameters via the reward function as in Section \ref{sec:constrained_reward_signal}. We expect the optimization to be harder because the range of acceptable architectures is reduced.
Results are summarized in Table \ref{tab:const}. These promising results suggest that the compression policies are able to produce sensible results despite being heavily constrained, thus demonstrating the viability of the approach in practice.
\subsection{Transfer Learning}
\begin{table}[tb]
\footnotesize
\caption{Transfer Learning Performance during first 10 iterations.}
\tabcolsep=0.1cm
\centering
\scalebox{1}{
\begin{tabular}{@{}llllclllclll@{}}
\toprule
& \multicolumn{3}{c}{ResNet18 \(\rightarrow\) ResNet34 } && \multicolumn{3}{c}{ResNet34\(\rightarrow\) ResNet18} && \multicolumn{3}{c}{VGG11\(\rightarrow\) VGG19}\\
\cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12}
& Reward & Comp. & Acc. && Reward & Comp. & Acc. && Reward & Comp. & Acc. \\
\midrule
Pre-trained & \textbf{0.81} & \textbf{78.1\%} & 79.5\% && \textbf{0.76} & \textbf{65.5\%} & 82.3\% && \textbf{0.52} & \textbf{46.0\%} & \textbf{71.7}\% \\
Scratch & 0.50 & 34.8\% & \textbf{82.4\%} && 0.53 & 39.7\% & \textbf{82.8\%} && -0.07 & 20.2\% & 42.5 \%\\
\bottomrule
\end{tabular}
}
\label{tab:transfer}
\end{table}
Naively applying our approach to a new teacher network means that the compression policies must be learned from scratch for each new problem. We would like to know whether the layer removal and shrinkage policy networks can be reused to accelerate compression for new teacher architectures. In the following experiments, we train a policy on an initial teacher model and then apply it to another teacher model to test whether the policy has learned a general strategy for compressing a network. Since both a pretrained policy and a randomly initialized policy are expected to eventually converge to a locally optimal policy given enough iterations, we report performance measures over the first 10 policy update iterations.
Results are summarized in Table \ref{tab:transfer}. The slight drop in accuracy (third subcolumn) in models produced by the pretrained policy is expected due to the tradeoff between compression and accuracy. However, the average reward (first subcolumn) is always higher when we use a pretrained policy. Note that in the VGG experiment, the reward is negative because the non-pretrained policy starts off by producing degenerate models, whereas the pretrained policy starts from an initialization that avoids them.
This is an important result as it shows promising evidence that we can even transfer learned knowledge from a \emph{smaller} model to a \emph{larger} model, rapidly accelerating the policy search procedure on very deep networks.
\section{Conclusion}
We introduced a novel method for compressing neural networks. Our approach employs a two-stage layer removal and layer shrinkage procedure to learn how to compress large neural networks. By leveraging signals for accuracy and compression as supervision, our method efficiently learns to search the space of model architectures. We show that our method performs well over a variety of datasets and architectures. We also observe generalization capabilities of our method through transfer learning, allowing our procedure to be made even more efficient. Our method is also able to incorporate other practical constraints, such as power or inference time, thus showing potential for application in a real world setting.
\bibliography{iclr2018_conference}
\bibliographystyle{iclr2018_conference}
\clearpage
\section*{Appendix}
\section{Actor-Critic}
Policy gradient based Actor-Critic algorithms have been shown to improve the stability of the policy search. This is achieved by replacing the baseline with a learned estimate of the value function at each time step.
Formally, with vanilla REINFORCE we have,
\[ \nabla_\theta J(\theta) \approx \frac{1}{m}\sum_{k=1}^m \sum_{t=1}^T [\nabla_\theta \log P_\theta(a_t|h_t) (R_k-b_k)] \]
In the Actor-Critic algorithm we replace $b_k$ with $V^\theta_k$, resulting in a new gradient estimate,
\[ \nabla_\theta J(\theta) \approx \frac{1}{m}\sum_{k=1}^m \sum_{t=1}^T [\nabla_\theta \log P_\theta(a_t|h_t) (R_k-V^\theta_k)] \]
We implement the Critic network by adding an additional fully-connected layer that takes as input the hidden state of the LSTM and outputs a single scalar value; a minimal sketch is given below. The results of the experiments are shown in the figures that follow.
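In the sketch, the hidden dimension and the mean-squared-error loss used to train the critic are illustrative assumptions.
\begin{lstlisting}[language=Python]
# Hedged sketch of the critic: a single linear layer mapping the policy LSTM's
# hidden state to a scalar value estimate V(h_t), used in place of the baseline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CriticHead(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.value = nn.Linear(hidden_dim, 1)

    def forward(self, h_t):                    # (batch, hidden_dim) hidden state
        return self.value(h_t).squeeze(-1)     # scalar value estimate per state

# The critic itself is trained to regress onto the observed returns, e.g.
# value_loss = F.mse_loss(critic(h_t), returns)
critic = CriticHead(hidden_dim=30)
print(critic(torch.randn(4, 30)).shape)        # torch.Size([4])
\end{lstlisting}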
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{mnist_ac}
\end{subfigure}
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{mnist_benchmark}
\end{subfigure}
\caption{MNIST \textbf{Left:} Actor-critic \textbf{Right:} REINFORCE, averaged over 3 runs}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{cifar_ac}
\end{subfigure}
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{cifar_benchmark}
\end{subfigure}
\caption{CIFAR-10 \textbf{Left:} Actor-critic \textbf{Right:} REINFORCE, averaged over 3 runs}
\end{figure}
For the MNIST dataset, our results show that there is a slight improvement in stability, although they both converge at a similar rate.
For the CIFAR-10 dataset, although the Actor-critic version was more stable, it did not perform as well as the vanilla REINFORCE algorithm.
\clearpage
\section{Learning rate and batch size}
The learning rate and batch size were selected via a grid search. The following graphs show the rate of convergence for different learning rates and batch sizes.
\subsection{Learning rate}
In order to determine the learning rate, we performed a grid search over $\{0.03, 0.003, 0.0003\}$. We performed this grid search on the MNIST dataset using the VGG-13 network to save time. For the stage-1 policy, we observed that a learning rate of 0.03 did not converge while 0.0003 converged too slowly. We therefore used 0.003 as the learning rate.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.49\textwidth]{lr_grid}
\caption{Average reward over 3 runs for various learning rates on the MNIST dataset}
\end{figure}
\subsection{Batch size}
Similarly, we performed a grid search over batch sizes $\{1, 5, 10\}$. A batch size of 1 was too unstable, while a batch size of 10 offered no substantial improvement to justify the additional computation. We therefore used a batch size of 5.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{.3\textwidth}
\includegraphics[width=\textwidth]{mnist_1}
\end{subfigure}
\begin{subfigure}[b]{.3\textwidth}
\includegraphics[width=\textwidth]{mnist_5}
\end{subfigure}
\begin{subfigure}[b]{.3\textwidth}
\includegraphics[width=\textwidth]{mnist_10}
\end{subfigure}
\caption{Average reward over 3 runs for batch sizes \textbf{Left:} 1, \textbf{Middle:} 5, \textbf{Right:} 10 on the MNIST dataset}
\end{figure}
\section{Transfer learning experiments}
Below are the results of the transfer learning experiments. As observed, the pretrained policies start off with a high reward, unlike the policies trained from scratch.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{resnet18pretrained}
\caption{ResNet18 $\rightarrow$ ResNet34}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{resnet34pretrained_5runs}
\caption{ResNet34 $\rightarrow$ ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{vggtransfer}
\caption{VGG-11 $\rightarrow$ VGG-19}
\end{subfigure}
\caption{Transfer learning experiments}
\end{figure}
\section{Additional experiments}
The following section contains results about additional compression experiments that were conducted.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{svhn_resnet18}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{svhn_resnet18_param}
\end{subfigure}
\caption{ResNet-18 experiments on \textbf{SVHN}, (\textbf{Top}: Stage 1, \textbf{Bottom}: Stage 2)} \label{fig:SVHN}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{caltech}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{caltech_param}
\end{subfigure}
\caption{ResNet-18 experiments on \textbf{Caltech}, (\textbf{Top}: Stage 1, \textbf{Bottom}: Stage 2)}
\end{figure}
\section{Implementation details}
The following section contains the implementation details required to replicate the experiments. All experiments were implemented in PyTorch and run on 4 NVIDIA TitanX GPUs.
\subsection{Policies}
\textbf{Removal and shrinkage policies} The removal policy was implemented with 2 hidden layers of 30 hidden units and trained with the Adam optimizer and a learning rate of 0.003. The shrinkage policy was implemented with 2 hidden layers of 50 hidden units and trained with the Adam optimizer and a learning rate of 0.1. Each policy was trained for at least 100 epochs per experiment, with a batch size of 5 rollouts.
\subsection{Teacher models}
\textbf{MNIST} Teacher models for MNIST were trained for 50 epochs with a starting learning rate of 0.01. The learning rate is reduced by a factor of 10 in the 30th epoch. A batch size of 64 was used.\\
\textbf{CIFAR-10/100} Teacher models for CIFAR-10/100 were trained for 150 epochs with a starting learning rate of 0.001. The learning rate is decreased by a factor of 10 in the 80th and 120th epochs. Standard data augmentation with horizontal mirroring (p=0.5), random cropping with 4 pixels of padding, and mean subtraction of (0.5, 0.5, 0.5) was used. A batch size of 128 was used.\\
\textbf{SVHN} Teacher models for SVHN were trained for 150 epochs with a starting learning rate of 0.001. The learning rate is decreased by a factor of 10 in the 80th and 120th epochs. Mean subtraction of (0.5, 0.5, 0.5) and a batch size of 128 was used.\\
\textbf{Caltech256}
To make the experiments controlled over all datasets, the Caltech-256 models were trained from scratch. Note that Caltech-256 models are usually initialized with pre-trained ImageNet weights, since data is sparse. The training procedure consisted of 50 epochs with an initial learning rate of 0.01, reduced to 0.001 after the 50th epoch. Data augmentation with horizontal flipping and random cropping, alongside mean subtraction, was used.
\section{Reward design}
In this section we go into greater detail regarding the design of the chosen reward function compared to a naive reward.
For our objective of model compression, we want the reward to reflect the following qualitative heuristics.
\begin{enumerate}
\item A model with \(\uparrow\) compression but \(\downarrow\) accuracy should be penalized more than a model with \(\downarrow\) compression and \(\uparrow\) accuracy. Since we do not want to produce highly compressed models which do not perform well on the task, we do not want to let the compression score dominate the reward.
\item The reward function should monotonically increase with both compression and accuracy.
\end{enumerate}
\subsection{Naive approach}
Defining a naive, symmetrical reward function results in the following failure case. Suppose we define our reward as:
\[R = A \cdot C\]
where \(A, C\) are the relative validation accuracy and compression achieved by the student model. Let us consider the following 2 cases:
\begin{enumerate}
\item \(\uparrow\) accuracy, \(\downarrow\) compression. A = 1, C = 0.25
\item \(\downarrow\) accuracy, \(\uparrow\) compression. A = 0.25, C = 1
\end{enumerate}
In both cases \(R = A \cdot C = 0.25\), which is not what we want. With the reward function defined in the paper, cases 1 and 2 instead receive rewards of 0.4375 and 0.25 respectively, which is closer to our true objective. In our empirical experiments, the non-linear reward outperformed the naive one. Other, more complex reward functions that respect the above criteria may also work well.
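Concretely, evaluating both reward functions on the two cases gives:
\begin{align*}
\text{Naive: } & R_1 = 1 \cdot 0.25 = 0.25, \quad R_2 = 0.25 \cdot 1 = 0.25\\
\text{Ours: } & R_1 = 0.25\,(2 - 0.25) \cdot 1 = 0.4375, \quad R_2 = 1\,(2 - 1) \cdot 0.25 = 0.25
\end{align*}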
The visualization of the reward manifolds in Figure~\ref{fig:compare_rewards} illustrates the difference.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{naive_contour.png}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{ours_contour.png}
\end{subfigure}
\caption{Reward manifold of naive reward vs. our reward}
\label{fig:compare_rewards}
\end{figure}
As observed, the naive reward function is symmetric, while our reward function assigns a lower reward to low-accuracy, high-compression models than to high-accuracy, low-compression models. Both functions are monotonically increasing.
\subsection{Degenerate cases}
The following section outlines a few of the cases which are considered degenerate and for which a fixed reward of -1 is assigned; a minimal check is sketched after the list.
\begin{enumerate}
\item \textbf{Empty architecture} - Depending on how it is implemented, the policies could possibly output ``remove'' actions for each layer during the layer removal stage. In this case, the output would be an empty architecture with no trainable parameters.
\item \textbf{Large FC layer} - If too many layers are removed in the feature extraction portion of the convolutional neural network, the size of the feature map before the fully connected layers would be large. In this case, although we have a well-defined reward, training the network could be impractical.
\item \textbf{Specialized architectures} - When dealing with more complex architectures, there may be inter-layer dependencies which impose certain requirements. For example, in a ResNet, the dimensionality of the feature maps at the start and end of each residual block has to match.
\end{enumerate}
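The sketch below illustrates such checks; the dictionary layout, the feature-map threshold, and the residual-block fields are illustrative assumptions.
\begin{lstlisting}[language=Python]
# Hedged sketch of the degenerate-architecture checks listed above.
def is_degenerate(layers, fc_input_limit=50_000):
    """Return True if the candidate architecture should receive reward -1."""
    # 1. Empty architecture: no trainable layers survived layer removal.
    if not any(l["type"] in ("conv", "fc") for l in layers):
        return True
    # 2. Large FC layer: the flattened feature map entering the first fully
    #    connected layer is impractically large (fc_in assumed precomputed).
    if any(l["type"] == "fc" and l.get("fc_in", 0) > fc_input_limit
           for l in layers):
        return True
    # 3. Specialized architectures: e.g. residual blocks whose entry and exit
    #    feature dimensions no longer match after shrinkage.
    if any(l.get("block_in") != l.get("block_out")
           for l in layers if l["type"] == "residual"):
        return True
    return False

print(is_degenerate([]))   # True: everything was removed
\end{lstlisting}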
\section{Future directions}
This paper introduces a general method to generate an architecture that optimizes the size-capacity trade-off with respect to a particular task. The current limitation with this method is that we need to train each student model for a few epochs to determine a reward for it. This step can be computationally expensive depending on the dataset. Results from \cite{saxe2011random}, \cite{jarrett2009best} and \cite{cox2011beyond} seem to suggest that initializing models with random weights could be an efficient way to evaluate architectures provided the right non-linearities and pooling are used. Another way to provide a better initialization could be to use a hypernetwork which takes the student model architecture as input and produces weights for the model. Other methods that select an informative subset of the training and test dataset to efficiently evaluate the network could also be interesting to explore. Another interesting direction would be to use the pretrained policies for transfer learning on different architecture search problems (apart from compression) to see if any generalizable information about deep architectures is being learned.
\end{document}