taesiri committed
Commit 8cf8144
0 Parent(s)

Initial commit

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50):
  1. .DS_Store +0 -0
  2. .gitattributes +56 -0
  3. README.md +3 -0
  4. papers/0809/0809.3083.tex +0 -0
  5. papers/0908/0908.2724.tex +750 -0
  6. papers/0909/0909.0910.tex +0 -0
  7. papers/1011/1011.5270.tex +0 -0
  8. papers/1206/1206.5538.tex +0 -0
  9. papers/1210/1210.1207.tex +0 -0
  10. papers/1309/1309.6392.tex +999 -0
  11. papers/1311/1311.2524.tex +0 -0
  12. papers/1311/1311.2901.tex +993 -0
  13. papers/1312/1312.1445.tex +0 -0
  14. papers/1312/1312.6034.tex +307 -0
  15. papers/1404/1404.1100.tex +553 -0
  16. papers/1405/1405.0174.tex +246 -0
  17. papers/1405/1405.0312.tex +363 -0
  18. papers/1406/1406.6247.tex +457 -0
  19. papers/1409/1409.1259.tex +938 -0
  20. papers/1409/1409.4667.tex +0 -0
  21. papers/1411/1411.4555.tex +816 -0
  22. papers/1411/1411.5018.tex +774 -0
  23. papers/1412/1412.0035.tex +589 -0
  24. papers/1412/1412.3555.tex +758 -0
  25. papers/1412/1412.6856.tex +386 -0
  26. papers/1412/1412.6980.tex +14 -0
  27. papers/1501/1501.02530.tex +767 -0
  28. papers/1501/1501.04560.tex +1545 -0
  29. papers/1502/1502.03044.tex +972 -0
  30. papers/1502/1502.04681.tex +1271 -0
  31. papers/1503/1503.04069.tex +688 -0
  32. papers/1503/1503.08677.tex +830 -0
  33. papers/1504/1504.08083.tex +1103 -0
  34. papers/1505/1505.01197.tex +388 -0
  35. papers/1505/1505.04474.tex +890 -0
  36. papers/1505/1505.05192.tex +643 -0
  37. papers/1506/1506.00019.tex +1412 -0
  38. papers/1506/1506.02078.tex +607 -0
  39. papers/1506/1506.02640.tex +514 -0
  40. papers/1506/1506.02753.tex +0 -0
  41. papers/1509/1509.01469.tex +867 -0
  42. papers/1509/1509.06825.tex +333 -0
  43. papers/1510/1510.00726.tex +0 -0
  44. papers/1511/1511.06335.tex +486 -0
  45. papers/1511/1511.06422.tex +520 -0
  46. papers/1511/1511.06732.tex +1011 -0
  47. papers/1511/1511.09230.tex +0 -0
  48. papers/1512/1512.02479.tex +559 -0
  49. papers/1512/1512.02902.tex +854 -0
  50. papers/1512/1512.03385.tex +846 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
.gitattributes ADDED
@@ -0,0 +1,56 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ papers/2110.00641.tex filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,3 @@
+ ---
+ license: cc-by-nc-4.0
+ ---
papers/0809/0809.3083.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/0908/0908.2724.tex ADDED
@@ -0,0 +1,750 @@
+ \documentclass[12pt]{article}
+
+ \usepackage{mslapa}
+ \usepackage{epsfig}
+ \usepackage{graphicx}
+ \usepackage{amsmath}
+ \usepackage{amsfonts}
+ \usepackage{bm}
+ \usepackage{amssymb}
+ \usepackage{enumerate}
+ \usepackage{algorithm}
+ \usepackage{algorithmic}
+
+ \newcommand{\dataset}{{\cal D}}
+ \newcommand{\fracpartial}[2]{\frac{\partial #1}{\partial #2}}
+ \newtheorem{theorem}{Theorem}
+ \newtheorem{definition}[theorem]{Definition}
+
+ \newcommand{\bfu}{\mathbf{u}}
+ \newcommand{\bfx}{\mathbf{x}}
+ \newcommand{\bfz}{\mathbf{z}}
+ \newcommand{\bfy}{\mathbf{y}}
+ \newcommand{\bfw}{\mathbf{w}}
+ \newcommand{\bfX}{\mathbf{X}}
+ \newcommand{\bfY}{\mathbf{Y}}
+ \newcommand{\phib}{\bm{\phi}}
+ \newcommand{\bfe}{\mathbf{e}}
+
+ \begin{document}
+
+ \title{Sparse Canonical Correlation Analysis}
+
+ \author{David R. Hardoon and John Shawe-Taylor\\
+ [5pt]Centre for Computational Statistics and Machine Learning\\
+ Department of Computer Science\\
+ University College London\\
+ {\tt \{D.Hardoon,jst\}@cs.ucl.ac.uk }}
+
+ \date{}
+
+ \maketitle
+ \begin{abstract}
+ We present a novel method for solving Canonical Correlation Analysis (CCA) in a sparse convex framework using a least squares approach. The presented method focuses on the scenario when one is interested in (or limited to) a primal representation for the first view while having a dual representation for the second view. Sparse CCA (SCCA) minimises the number of features used in both the primal and dual projections while maximising the correlation between the two views. The method is demonstrated on two paired corpora, English-French and English-Spanish, for mate retrieval. We observe in the mate-retrieval task that when the number of original features is large, SCCA outperforms Kernel CCA (KCCA), learning the common semantic space from a sparse set of features.
+ \end{abstract}
+
+
+ \section{Introduction}
+ Proposed by \cite{Hotelling36}, CCA is a technique for finding pairs of vectors that maximise the correlation between a set of paired variables. The set of paired variables can be considered as two views of the same object, a perspective we adopt throughout the paper. Since the debut of CCA, a multitude of analyses, adaptations and applications have been proposed \cite{jrketter,fyfe-ica,pei-kernel,akaho01,fclbk01,fblk01,BachJordan,Hardoon03CBMI,Hardoon04,Fukumizu,Hardoon06a,Szedmak07,Hardoon07}.\\
+ \\
+ The potential disadvantage of CCA and similar statistical methods, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS), is that the learned projections are a linear combination of all the features in the primal and dual representations respectively. This makes the interpretation of the solutions difficult. Studies by \cite{zou04sparse,moghaddam-spectral,dhanjal06sparse} and the more recent \cite{Dspremont,sriperumbudurICML07} have addressed this issue for PCA and PLS by learning only the relevant features that maximise the variance for PCA and the covariance for PLS. A previous application of sparse CCA has been proposed in \cite{Torres}, where the authors imposed sparsity on the semantic space by penalising the cardinality of the solution vector \cite{Weston}. The SCCA presented in this paper is novel to the extent that instead of working with covariance matrices \cite{Torres}, which may be expensive to compute when the dimensionality of the data is large, it deals directly with the training data. \\
+ \\
+ In the Machine Learning (ML) community it is common practice to refer to the input space as the primal representation and the kernel space as the dual representation. In order to avoid confusion with the meanings of the terms primal and dual commonly used in the optimisation literature, we will use ML-primal to refer to the input space and ML-dual to refer to the kernel space for the remainder of the paper, though note that the references to primal and dual in the abstract refer to ML-primal and ML-dual.\\
+ \\
+ We introduce a new convex least squares variant of CCA which seeks a semantic projection that uses as few relevant features as possible to explain as much correlation as possible. In previous studies, CCA had either been formulated in the ML-primal (input) or ML-dual (kernel) representation for both views. These formulations, coupled with the need for sparsity, could prove insufficient when one desires or is limited to an ML primal-dual representation, e.g. when one wishes to learn the correlation of words in one language that map to documents in another. We address these possible scenarios by formulating SCCA in an ML primal-dual framework in which one view is represented in the ML-primal and the other in the ML-dual (kernel defined) representation. We compare SCCA with KCCA on bilingual English-French and English-Spanish data-sets for a mate retrieval task. We show that in the mate retrieval task SCCA performs as well as KCCA when the number of original features is small and SCCA outperforms KCCA when the number of original features is large. This emphasises SCCA's ability to learn the semantic space from a small number of relevant features.\\
+ \\
+ In Section \ref{sec1} we give a brief review of CCA, and Section \ref{sec2} formulates and defines SCCA. In Section \ref{sec4} we derive our optimisation problem and show how all the pieces are assembled to give the complete algorithm. The experiments on the paired bilingual data-sets are given in Section \ref{sec5}. Section \ref{sec6} concludes this paper.
+
+
+ \section{Canonical Correlation Analysis}
+ \label{sec1}
+ We briefly review canonical correlation analysis and its ML-dual (kernel) variant to provide a smooth transition to the sparse formulation. First, the basic notation used in the paper is defined:
+ \begin{eqnarray*}
+ \mathbf{b} & - & \text{boldface lower case letters represent vectors}\\
+ s & - & \text{lower case letters represent scalars}\\
+ M & - & \text{upper case letters represent matrices}.
+ \end{eqnarray*}
+ \def\omit{
+ Consider the linear combination $x_a = \bfw_a' \mathbf{x}_a$ and $x_b = \bfw_b'\mathbf{x}_b $. Let $\mathbf{x}_a$ and $\mathbf{x}_b$ be two random variables (i.i.d assumption\footnote{The i.i.d assumption is made throughout the paper}), with zero mean (i.e. the data is centred).\\}
+ \\
+ The correlation between $\mathbf{x}_a$ and $\mathbf{x}_b$ can be computed as
+ \begin{equation}
+ \label{math:sim}
+ \max_{\bfw_a,\bfw_b}\rho = \frac{\bfw_a'C_{ab}\bfw_b}{\sqrt{\bfw_a'C_{aa}\bfw_a\,\bfw_b'C_{bb}\bfw_b}},
+ \end{equation}
+ where $C_{{aa}} =X_aX_a'$ and $C_{{bb}} = X_bX_b'$ are the within-set covariance matrices and $C_{{ab}} = X_aX_b'$ is the between-sets covariance matrix, $X_a$ is the matrix whose columns are the vectors $\bfx_{i}$, $i = 1,\ldots, \ell$ from the first representation while $X_b$ is the matrix with columns $\bfx_{i}$ from the second representation. We are able to observe that scaling $\bfw_a, \bfw_b$ does not affect the quotient in equation (\ref{math:sim}), which is therefore equivalent to maximising $\bfw_a'C_{ab}\bfw_b$ subject to $\bfw_a'C_{{aa}}\bfw_a= \bfw_b'C_{{bb}}\bfw_b
+ =1$.\\
+ \\
+ The kernelising of CCA \cite{fyfe-ica,pei-kernel} offers an alternative by first projecting the data into a higher dimensional feature space $\phib_t : \bfx = (x_{1},\ldots, x_{n}) \rightarrow \phib_t(\bfx) = (\phib_{1}(\bfx),\ldots,\phib_{N}(\bfx)) \mbox{\ \ $(N \geq n, t = a,b)$}$ before performing CCA in the new feature spaces. The kernel variant of CCA is useful when the correlation is believed to exist in some non-linear relationship. Given the kernel functions $\kappa_{a}$ and $\kappa_{b}$ let
+ $K_{\bm{a}} = X_a' X_a$ and $K_{\bm{b}} = X_b'X_b$ be the linear
+ kernel matrices corresponding to the two representations of the
+ data, where $X_a$ is now the matrix whose columns are the vectors
+ $\phib_a(\bfx_{i})$, $i = 1,\ldots, \ell$ from the first representation while $X_b$ is the matrix with columns $\phib_b(\bfx_{i})$ from the second representation. The weights $\bfw_a$ and $\bfw_b$ can be expressed as a linear combination of the training examples $\bfw_a = X_a \bm{\alpha}$ and $\bfw_b = X_b \bm{\beta}$. Substitution into the ML-primal
+ CCA equation (\ref{math:sim}) gives the optimisation
+ \begin{equation*}
+ \max_{\bm{\alpha},\bm{\beta}} \rho =
+ \frac{\bm{\alpha}' K_{\bf a} K_{\bf b}\bm{\beta}}{\sqrt{\bm{\alpha}' K_{\bf a}^2\bm{\alpha}\, \bm{\beta}' K_{\bf b}^2\bm{\beta}}},
+ \end{equation*}
+ which is equivalent to maximising $\bm{\alpha}' K_{\bf a} K_{\bf b}\bm{\beta}$ subject to $\bm{\alpha}' K_{\bf a}^{2}\bm{\alpha} = \bm{\beta}' K_{\bf
+ b}^{2}\bm{\beta} = 1$. This is the ML-dual form of the CCA optimisation problem given in equation (\ref{math:sim}), which can be cast as a generalised eigenvalue problem and for which the first $k$ generalised eigenvectors can be found efficiently. Both CCA and KCCA can be formulated as symmetric eigenproblems.\\
+ \\
+ A variety of theoretical analyses have been presented for CCA \cite{akaho01,BachJordan,Fukumizu,Hardoon04,JohnNelloKM,DRH-JST_08}. A common conclusion of some of these analyses is the need to regularise KCCA. For example, the quality of the generalisation of the associated pattern function is shown to be controlled by the sum of the squares of the weight vector norms in \cite{DRH-JST_08}. Although there are advantages in using KCCA, which have been demonstrated in various experiments across the literature, we clarify that when using a linear kernel in both views, regularised KCCA is the same as regularised CCA (since both are linear). Nonetheless, using KCCA with a linear kernel can have advantages over CCA, the most important being speed when the number of features is larger than the number of samples.\footnote{The KCCA toolbox used was from the code section of http://academic.davidroihardoon.com/ }
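+ % Editorial sketch (added for clarity; a standard restatement, not part of the original derivation): the
+ % generalised eigenvalue problem referred to above can be written out explicitly. Introducing Lagrange
+ % multipliers for the two normalisation constraints of equation (\ref{math:sim}) and setting the
+ % derivatives to zero gives
+ % \begin{equation*}
+ % \begin{pmatrix} 0 & C_{ab}\\ C_{ba} & 0 \end{pmatrix}\begin{pmatrix} \bfw_a\\ \bfw_b \end{pmatrix}
+ % = \lambda \begin{pmatrix} C_{aa} & 0\\ 0 & C_{bb} \end{pmatrix}\begin{pmatrix} \bfw_a\\ \bfw_b \end{pmatrix},
+ % \end{equation*}
+ % with $\lambda$ equal to the correlation $\rho$ at the optimum; the ML-dual problem has the analogous form
+ % with $K_{\bf a}K_{\bf b}$ and $K_{\bf a}^2, K_{\bf b}^2$ in place of $C_{ab}$ and $C_{aa}, C_{bb}$.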
+
+
+ \section{Sparse CCA}
+ \label{sec2}
+ The motivation for formulating an ML primal-dual SCCA is largely intuitive when faced with real-world problems combined with the need to understand or interpret the found solutions. Consider the following examples as potential case studies which would require ML primal-dual sparse multivariate analysis methods, such as the one proposed.
+ \begin{itemize}
+ \item Enzyme prediction; in this problem one would like to uncover which sub-sequences within each enzyme sequence are highly correlated with the possible combinations of the enzyme reactants. We would like to find a sparse ML-primal weight representation on the enzyme sequence which correlates highly with a sparse ML-dual feature vector of the reactants. This will allow a better understanding of the relationship between enzyme structure and reactions.
+ \item Bilingual analysis; when learning the semantic relationship between two languages, we may want to understand how one language maps from the word space (ML-primal) to the contextual document (ML-dual) space of another language. In both cases we do not want a complete mapping from all the words to all possible contexts, but to be able to extract an interpretable relationship from a sparse word representation in one language to a particular and specific context (or sparse combination of contexts) in the other language.
+ \item Brain analysis; here, one would be interested in finding a (ML-primal) sparse voxel\footnote{A voxel is a pixel representing the smallest three-dimensional point volume referenced in an fMRI (functional magnetic resonance imaging) image of the brain. It is usually approximately $3$mm$\times3$mm. } activation map for some (ML-dual) non-linear stimulus activation (such as musical sequences, images and various other multidimensional input). The potential ability to find only the voxels relevant to the stimuli would remove the particularly problematic issue of thresholding the full voxel activation maps that are conventionally generated.
+ \end{itemize}
+ For the scope of this paper we limit ourselves to experiments with the bilingual text problems.\\
+ \\
+ Throughout the paper we only consider the setting in which one is interested in an ML-primal representation for the first view and an ML-dual representation for the second view, although it is easily shown that the given derivations hold for the inverted case (i.e. an ML-dual representation for the first view and an ML-primal representation for the second view), which is therefore omitted.\\
+ \\
+ Consider a sample from a pair of random vectors (i.i.d assumptions hold) of the form $(\bfx_a^i,\bfx_b^i)$ each with zero mean (i.e. centred) where $i = 1,\ldots, \ell$. Let $X_a$ and $X_b$ be matrices whose columns are the corresponding training samples and let $K_b = X_b'X_b$ be the kernel matrix of the second view and $\bfw_b$ be expressed as a linear combination of the training examples $\bfw_b = X_b\bfe$ (note that $\bfe$ is a general vector and should not be confused with notation sometimes used for unit coordinate vectors). The primal-dual CCA problem can be expressed as a primal-dual Rayleigh quotient
+ \begin{eqnarray}
+ \nonumber
+ \rho & = & \max_{\bfw_a,\bfw_b}\frac{\bfw_a'X_aX_b'\bfw_b}{\sqrt{\bfw_a'X_aX_a'\bfw_a\,\bfw_b'X_bX_b'\bfw_b}}\\
+ \nonumber
+ & =& \max_{\bfw_a,\mathbf{e}}\frac{\bfw_a'X_aX_b'X_b \bfe}{\sqrt{\bfw_a'X_aX_a'\bfw_a\,\bfe'X_b'X_bX_b'X_b\bfe}}\\
+ &= & \max_{\bfw_a,\mathbf{e}}\frac{\bfw_a'X_aK_b\bfe}{\sqrt{\bfw_a'X_aX_a'\bfw_a\,\bfe'K_b^2\bfe}},
+ \label{eig:origin}
+ \end{eqnarray}
+ where we choose the ML-primal weights $\bfw_a$ of the first representation and the dual features $\bfe$ of the second representation such that the correlation $\rho$ between the two vectors is maximised. As we are able to scale $\bfw_a$ and $\bfe$ without changing the quotient, the maximisation in equation (\ref{eig:origin}) is equal to maximising $\bfw_a'X_aK_b\bfe$ subject to $\bfw_a'X_aX_a'\bfw_a = \bfe'K_b^2\bfe = 1$. For simplicity let $X = X_a$, $\bfw = \bfw_a$ and $K = K_b$. \\
+ \\
+ Having provided the initial primal-dual framework we proceed to reformulate the problem as a convex sparse least squares optimisation problem. We are able to show that maximising the correlation between the two vectors $K\bfe$ and $X'\bfw$ can be viewed as minimising the angle between them. Since the angle is invariant to rescaling, we can fix the scaling of one vector and then minimise the norm\footnote{We define $\| \cdot \|$ to be the $2-$norm.} between the two vectors
+ \begin{equation}
+ \label{eigsec}
+ \min_{\bfw,\bfe}\|X'\bfw - K\bfe\|^2
+ \end{equation}
+ subject to $\|K\bfe\|^2 = 1$. This intuition is formulated in the following theorem.
+ \begin{theorem}
+ \label{theo1}
+ Vectors $\bfw,\bfe$ are an optimal solution of equation (\ref{eig:origin}) if and only if there exist $\mu,\gamma$ such that $\mu\bfw, \gamma\bfe$ are an optimal solution of equation (\ref{eigsec}).
+ \end{theorem}
+ Theorem \ref{theo1} is well known in the statistics community and corresponds to the equivalence between one form of Alternating Conditional Expectation (ACE) and CCA \cite{Breiman,Hastie}.
+ For an exact proof see Theorem $5.1$ on page $590$ in \cite{Breiman}.\\
+ \\
+ Constraining the $2-$norm of $K\bfe$ (or $X'\bfw$) will result in a non-convex problem, i.e. we will not obtain a positive/negative-definite Hessian matrix. We are motivated by the Rayleigh quotient solution for optimising CCA, whose resulting symmetric eigenproblem does {\it not} enforce the $\|K\bfe\|^2 = 1$ constraint, i.e. the optimal solution is invariant to rescaling of the solutions. We therefore replace the constraint $\|K\bfe\|^2 = 1$ with the scaling of $\bfe$ to $\|\bfe\|_\infty = 1$. We will address the resulting convexity when we reach the final formulation.\\
+ \\
+ After finding an optimal CCA solution, we are able to re-normalise $\bfe$ so that $\|K\bfe\|^2 = 1$ holds. We emphasise that even though $K$ has been removed from the constraint, the link to kernels (kernel tricks and RKHS) is retained in the choice of kernel $K$ used for the dual view; otherwise the presented method is a sparse linear CCA\footnote{One should keep in mind that even kernel CCA is still linear CCA performed in the kernel defined feature space}. We can now focus on obtaining an optimal sparse solution for $\bfw,\bfe$.\\
+ \\
+ It is obvious that when starting with $\bfw = \bfe = {\bf 0}$ further minimising is impossible. To avoid this trivial solution and to ensure that the constraints hold in our starting condition\footnote{$\|\mathbf{e}\|_\infty = \max(|e_1|,\ldots,|e_\ell|) = 1$, therefore there must be at least one $e_i$ for some $i$ that is equal to $1$.} we set $\|\bfe\|_\infty = 1$ by fixing $e_k = 1$ for some fixed index $1 \leq k \leq \ell$ so that $\bfe = [e_1,\ldots,e_{k-1},e_{k},e_{k+1},\ldots,e_\ell]$. To further obtain a sparse solution on $\bfe$ we constrain the $1-$norm of the remaining coefficients $\|\tilde{\bfe}\|_1$, where we define $\tilde{\bfe} = [e_1,\ldots,e_{k-1},e_{k+1},\ldots,e_\ell]$. The motivation behind isolating a specific $k$ and constraining the $1-$norm of the remaining coefficients, other than ensuring a non-trivial solution, follows the intuition of wanting to find similarities between the samples given some basis for comparison. In the case of documents, this places the chosen document (indexed by $k$) in a semantic context defined by an additional (sparse) set of documents. This captures our previously stated goal of wanting to be able to extract an interpretable relationship from a sparse word representation in one language to a particular and specific context in the other language. The $\ell$ possible choices of $k$, indexed by $j$, correspond to the projection vectors $\bfe_j, \bfw_j$. We discuss the selection of $k$ and how orthogonality of the sparse projections is ensured in Section \ref{sccaalgsec}.\\
+ \\
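+ % Editorial example (added for clarity, using only the definitions above): with $\ell = 4$ and $k = 2$,
+ % the constraint fixes $\bfe = [e_1, 1, e_3, e_4]$ while the $1$-norm penalty acts only on
+ % $\tilde{\bfe} = [e_1, e_3, e_4]$, so the chosen document indexed by $k$ always remains in the
+ % representation and the remaining documents form the sparse context described above.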
+ We are also now able to constrain the $1-$norm of $\bfw$ without affecting the convexity of the problem. This gives the final optimisation as
+ \begin{equation}
+ \label{eqn:stacca1}
+ \min_{\bfw,\bfe} \| X'\bfw - K\bfe\|^2 + \mu\|\bfw\|_1 + \gamma \| \tilde{\bfe}\|_1
+ \end{equation}
+ subject to $\|\bfe\|_\infty = 1$. The expression $\|X'\bfw - K\bfe\|^2$ is quadratic in the variables $\bfw$ and $\bfe$ and is bounded from below $(\geq 0)$ and hence is convex, since it can be expressed as $\|X'\bfw - K\bfe\|^2 = C + g'\bfw + f'\bfe + [\bfw' \bfe'] H [\bfw' \bfe']'$. If $H$ had a negative eigenvalue $\lambda$ with eigenvector $v' = [v'_1 v'_2]$, taking a multiple $t$ of $v$ would give $C + t g'v_1 + t f'v_2 + t^2 \lambda$, creating arbitrarily large negative values; hence $H$ is positive semi-definite. Since the $1$-norm penalties are convex and can be handled with linear constraints, the whole optimisation is convex.\\
+ \\
+ While equation (\ref{eqn:stacca1}) is similar to the Least Absolute Shrinkage and Selection Operator (LASSO) \cite{Tibshirani94}\footnote{Basis Pursuit Denoising \cite{chen99}}, it is not a standard LASSO problem unless $\bfe$ is fixed. The problem in equation (\ref{eqn:stacca1}) could be considered as a double-barrelled LASSO where we are trying to find sparse solutions for {\it both} $\bfw,\bfe$.\nocite{Heiler}
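+ % Editorial sketch (added for clarity; an alternative view of the convexity argument above, under the same
+ % definitions of $X$, $K$, $\bfw$ and $\bfe$): the quadratic term admits a Gram-matrix factorisation,
+ % \begin{equation*}
+ % \|X'\bfw - K\bfe\|^2 = \begin{pmatrix} \bfw \\ \bfe \end{pmatrix}'
+ % \begin{pmatrix} X \\ -K \end{pmatrix}\begin{pmatrix} X' & -K \end{pmatrix}
+ % \begin{pmatrix} \bfw \\ \bfe \end{pmatrix},
+ % \end{equation*}
+ % so its Hessian is positive semi-definite and the objective of equation (\ref{eqn:stacca1}), being the sum
+ % of this term and convex $1$-norm penalties, is convex.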
+
+
+
+
+ \section{Derivation \& Algorithm}
+ \label{sec4}
+ We propose a novel method for solving the optimisation problem represented in equation (\ref{eqn:stacca1}), where the suggested algorithm minimises the gap between the primal and dual Lagrangian solutions using a greedy search on $\bfw, \bfe$. The proposed algorithm finds sparse vectors $\mathbf{w},\mathbf{e}$ by iteratively solving the ML-primal and ML-dual formulations in turn. We give the proposed algorithm as the following high-level pseudo-code; a more complete description follows later.
+ \begin{itemize}
+ \item Repeat
+ \begin{enumerate}
+ \item Use the dual Lagrangian variables to solve for the ML-primal variables
+ \item Check whether all constraints on the ML-primal variables hold
+ \item Use the ML-primal variables to solve for the dual Lagrangian variables
+ \item Check whether all dual Lagrangian variable constraints hold
+ \item Check whether step 2 holds; if not, go to step 1
+ \end{enumerate}
+ \item End
+ \end{itemize}
+ We have yet to address how to determine which elements in $\bfw,\bfe$ are to be non-zero. We will show that the derivation given in Section \ref{primald} yields a lower and an upper bound. Combining the bound with the constraints provides us with a criterion for selecting the non-zero elements for both $\bfw$ and $\bfe$: only the indices which violate the bound and the various constraints need to be updated.\\
+ \\
+ We proceed to give the derivation of our problem. The minimisation
+ \begin{equation*}
+ \min_{\bfw,\bfe} \| X'\bfw - K\bfe\|^2 + \mu\|\bfw\|_1 + \gamma \| \tilde{\bfe}\|_1
+ \end{equation*}
+ subject to $\|\bfe\|_\infty = 1$ can be written as
+ \begin{equation*}
+ \bfw'XX'\bfw + \bfe'K^2\bfe - 2\bfw'XK\bfe + \mu\|\bfw\|_1 + \gamma \| \tilde{\bfe}\|_1
+ \end{equation*}
+ subject to $\|\bfe\|_\infty = 1$, where $\mu$, $\gamma$ are fixed positive parameters.\\
+ \\
+ To simplify our mathematical notation we revert to uniformly using $\bfe$ in place of $\tilde{\bfe}$, as $k$ will be fixed in an outer loop so that the only requirement is that no update will be made for $e_k$, which can be enforced in the actual algorithm. We further emphasise that we are only interested in the positive entries of $\bfe$, which again can be easily enforced by updating any $e_i < 0$ to be $e_i = 0$\footnote{We can also easily enforce the $\|\cdot\|_\infty$ constraint by updating any $e_i > 1$ to be $e_i = 1$.}. Therefore we can rewrite the constraint $\|\bfe\|_\infty = 1$ as $0 \leq e_i \leq 1$ for $i = 1,\ldots,\ell$, with $e_k = 1$ fixed. \\
+ \\
+ We are able to obtain the corresponding Lagrangian
+ \begin{equation*}
+ \mathcal{L} = \bfw'XX'\bfw + \mathbf{e}'K^2\mathbf{e} - 2\bfw'XK\mathbf{e} + \mu \|\bfw\|_1 + \gamma \mathbf{e}'\mathbf{j} - {\bm{\beta}}'\mathbf{e} ,
+ \end{equation*}
+ subject to
+ \begin{eqnarray*}
+ \bm{\beta} & \geq & \mathbf{0},
+ \end{eqnarray*}
+ where $\bm{\beta}$ is the dual Lagrangian variable on $\bfe$, $\mu,\gamma$ are positive scale factors as discussed in Theorem \ref{theo1} and $\mathbf{j}$ is the all-ones vector. We note that as we algorithmically ensure that $\bfe \geq 0$ we are able to write $ \gamma \| \bfe\|_1= \gamma \mathbf{e}'\mathbf{j}$ as $\|\bfe\|_1 := \sum_{i=1}^\ell |e_i|$.\\
+ \\
+ The constants $\mu,\gamma$ can also be considered as the hyper-parameters (or regularisation parameters) common in the LASSO literature, controlling the trade-off between the function objective and the level of sparsity. We show that the scale parameters can be treated as a type of dual Lagrangian parameter to provide an underlying automatic determination of sparsity. This potentially sub-optimal setting still obtains very good results and is discussed in Section \ref{sec:hyper}.\\
+ \\
+ To simplify the $1-$norm derivation we express $\bfw$ by its positive and negative components\footnote{This means that $\bfw^+/\bfw^-$ will only have the positive/negative values of $\bfw$ and zero elsewhere.} such that $\bfw = \bfw^+ - \bfw^-$ subject to $\bfw^+,\bfw^- \geq 0$. We limit ourselves to positive entries in $\bfe$ as we expect to align with a positive subset of articles.\\
+ \\
+ This allows us to rewrite the Lagrangian as
+ \begin{eqnarray}
+ \label{eqnlarg}
+ \mathcal{L} & = & (\bfw^+ - \bfw^-)'XX'(\bfw^+ - \bfw^-) + \mathbf{e}'K^2\mathbf{e} \\
+ \nonumber
+ && - 2(\bfw^+ - \bfw^-)'XK\mathbf{e} -{\bm{\alpha}^-}'\bfw^- -{\bm{\alpha}^+}'\bfw^+
+ -{\bm{\beta}}'\mathbf{e}\\
+ \nonumber
+ && + \gamma (\mathbf{e}'\mathbf{j}) + \mu((\bfw^+ + \bfw^-)'\mathbf{j}).
+ \end{eqnarray}
+ The corresponding Lagrangian in equation (\ref{eqnlarg}) is subject to
+ \begin{eqnarray*}
+ \bm{\alpha}^+ &\geq & \mathbf{0}\\
+ \bm{\alpha}^- &\geq & \mathbf{0}\\
+ \bm{\beta} & \geq & \mathbf{0}.
+ \end{eqnarray*}
+ The two new dual Lagrangian variables $\bm{\alpha}^+, \bm{\alpha}^-$ enforce the positivity constraints on $\bfw^+, \bfw^-$.
+
+ \subsection{SCCA Derivation}
+ \label{primald}
+ In this section we will show that the constraints on the dual Lagrangian variables form the criterion for selecting the non-zero elements of $\bfw$ and $\bfe$. First we define the additional notation used. Given the data matrix ${X}\in \mathbb{R}^{m \times \ell}$ and kernel matrix ${K}\in\mathbb{R}^{\ell \times \ell}$ as defined in Section \ref{sec2}, we define the following vectors
+ \begin{eqnarray*}
+ \bfw^+ & = & \left[w^+_1,\ldots, w^+_m\right]\\
+ \bfw^- & = & \left[w^-_1,\ldots, w^-_m\right]\\
+ \bm{\alpha}^+ & = &\left[ \alpha^+_1,\ldots, \alpha^+_m \right]\\
+ \bm{\alpha}^- & = &\left[ \alpha^-_1,\ldots, \alpha^-_m \right]\\
+ \bfe & = & \left[e_1,\ldots,e_\ell \right]\\
+ \bm{\beta} & = & \left[\beta_1,\ldots,\beta_\ell \right].
+ \end{eqnarray*}
+ Throughout this section let $i$ be the index of the element of $\bfw$ or $\bfe$ that needs to be updated. We use the notation $(\cdot)_i$ or $[\cdot]_i$ to refer to the $i$th index within a vector and $(\cdot)_{ii}$ to refer to the $i$th element on the diagonal of a matrix. \\
+ \\
+ \\
+ Taking derivatives of equation (\ref{eqnlarg}) with respect to $\bfw^+$, $\bfw^-$, $\mathbf{e}$ and equating to zero gives
+ \begin{eqnarray}
+ \label{sccdev1}
+ \frac{\partial\mathcal{L}}{\partial \bfw^+} & = & 2 XX'(\bfw^+ - \bfw^-) - 2 XK\mathbf{e} - \bm{\alpha}^+ + \mu \mathbf{j} = \mathbf{0}\\
+ \nonumber
+ \frac{\partial\mathcal{L}}{\partial \bfw^-} & = &- 2 XX'(\bfw^+ - \bfw^-) + 2XK\mathbf{e} - \bm{\alpha}^- + \mu \mathbf{j} = \mathbf{0}\\
+ \nonumber
+ \frac{\partial\mathcal{L}}{\partial \mathbf{e}} & = &2 K^2\mathbf{e}- 2KX'\bfw - \bm{\beta} + \gamma \mathbf{j} = \mathbf{0},
+ \end{eqnarray}
+ adding the first two equations gives
+ \begin{eqnarray*}
+ \bm{\alpha}^+ & = & 2\mu \mathbf{j} -\bm{\alpha}^-\\
+ \bm{\alpha}^- & =& 2\mu \mathbf{j} -\bm{\alpha}^+,
+ \end{eqnarray*}
+ which, since $\bm{\alpha}^+,\bm{\alpha}^- \geq \mathbf{0}$, implies a lower and upper component-wise bound on $\bm{\alpha}^-,\bm{\alpha}^+$ of
+ \begin{eqnarray*}
+ \mathbf{0} &\leq& \bm{\alpha}^- \leq 2\mu\mathbf{j}\\
+ \mathbf{0} &\leq& \bm{\alpha}^+ \leq 2\mu\mathbf{j}.
+ \end{eqnarray*}
+ We use the bound on $\bm{\alpha}$ to indicate which indices of the vector $\bfw$ need to be updated, by only updating the $w_i$'s whose corresponding $\alpha_i$ violates the bound. Similarly, we only update those $e_i$ whose corresponding $\beta_i$ is smaller than $0$.\\
+ \\
+ We are able to rewrite the derivative with respect to $\bfw^+$ in terms of $\bm{\alpha}^-$
+ \begin{eqnarray*}
+ \frac{\partial\mathcal{L}}{\partial \bfw^+} & = & 2 XX'(\bfw^+ - \bfw^-) - 2XK\mathbf{e} - 2\mu \mathbf{j} + \bm{\alpha}^- + \mu \mathbf{j} \\
+ & = & 2 XX'(\bfw^+ - \bfw^-) - 2XK\mathbf{e} - \mu \mathbf{j} + \bm{\alpha}^- .
+ \end{eqnarray*}
+ We wish to compute the update rule for the selected indices of $\bfw$. Taking the second derivatives of equation (\ref{eqnlarg}) with respect to $\bfw^+$ and $\bfw^-$ gives
+ \begin{eqnarray*}
+ \frac{\partial^2\mathcal{L}}{\partial \bfw^{+2}} & = & 2 XX' \\
+ \frac{\partial^2\mathcal{L}}{\partial \bfw^{-2}} & = & 2 XX',\\
+ \end{eqnarray*}
+ so for $\mathbf{i}_i$, the unit vector with a $1$ in entry $i$, we have an exact Taylor series expansion in the step sizes $t^+$ and $t^-$ respectively for $w^+_i$ and $w^-_i$ as
+ \begin{eqnarray*}
+ \mathcal{\hat{L}}(\bfw^+ + t^+\mathbf{i}_i) & = &\mathcal{L}(\bfw^+) + \frac{\partial\mathcal{L}}{\partial w^+_i} t^+ + \frac{\partial^2\mathcal{L}}{\partial w^+_i} (t^+)^2\\
+ \mathcal{\hat {L}}(\bfw^- + t^-\mathbf{i}_i) & =& \mathcal{L}(\bfw^-) + \frac{\partial\mathcal{L}}{\partial w^-_i} t^- + \frac{\partial^2\mathcal{L}}{\partial w^-_i} (t^-)^2
+ \end{eqnarray*}
+ giving us the exact update for $w^+_i$ by setting
+ \begin{eqnarray*}
+ \frac{\partial\mathcal{\hat {L}}(\bfw^+ + t^+\mathbf{i}_i)}{\partial t^+} & = & \left(2 XX'(\bfw^+ -\bfw^-) - 2XK\mathbf{e} -\bm{\alpha}^+ + \mu \mathbf{j} \right)_i + 4 (XX')_{ii}t^+ = 0\\
+ \Rightarrow t^+ &= & \frac{1}{4 (XX')_{ii}}\left[ 2XK\mathbf{e} - 2 XX'(\bfw^+ - \bfw^-) - \bm{\alpha}^- + \mu \mathbf{j}\right]_i.
+ \end{eqnarray*}
+ Therefore the update for $w^+_i$ is $\Delta w^+_i = t^+$. We also compute the exact update for $w^-_i$ as
+ \begin{eqnarray*}
+ \frac{\partial\mathcal{\hat {L}}(\bfw^- + t^-\mathbf{i}_i)}{\partial t^-} & = & \left(- 2 XX'(\bfw^+ - \bfw^-) + 2XK\mathbf{e} - \bm{\alpha}^- + \mu \mathbf{j} \right)_i + 4 (XX')_{ii}t^- = 0\\
+ \Rightarrow t^- & =& -\frac{1}{4(XX')_{ii}}\left[ 2XK\mathbf{e} - 2 XX'(\bfw^+ - \bfw^-) - \bm{\alpha}^- + \mu \mathbf{j}\right]_i,
+ \end{eqnarray*}
+ so that the update for $w^-_i$ is $\Delta w^-_i = t^-$.
+ Recall that $\bfw = (\bfw^+ - \bfw^-)$, hence the update rule for $w_i$ is
+ \begin{eqnarray*}
+ \hat{w}_i \leftarrow w_i + (\Delta w^+_i - \Delta w^-_i).
+ \end{eqnarray*}
+ Therefore we find that the new value of $w_i$ should be
+ \begin{eqnarray*}
+ \hat{w}_i \leftarrow w_i +\frac{1}{2(XX')_{ii}}\left[ 2XK\mathbf{e} - 2 XX'\bfw - \bm{\alpha}^- + \mu \mathbf{j}\right]_i.
+ \end{eqnarray*}
+ We must also consider the update of $w_i$ when $\alpha_i$ is within the constraints and $w_i \neq 0$, i.e. previously $\alpha_i$ had violated the constraints, triggering the update of $w_i$ to be non-zero. Notice from equation (\ref{sccdev1}) that
+ \begin{equation*}
+ 2(XX')_{ii}w_i + 2 \sum_{j\neq i}(XX')_{ij}w_j = 2(XK\mathbf{e})_i - {\alpha}_i + \mu.
+ \end{equation*}
+ It is easy to observe that the only component which can change is $2(XX')_{ii}w_i$; therefore we need to update $w_i$ towards zero. Hence when $w_i > 0$ the absolute value of the update is
+ \begin{eqnarray*}
+ 2(XX')_{ii}\Delta w_i & =& 2\mu - \alpha_i\\
+ \Delta w_i &=& \frac{2\mu - \alpha_i}{2(XX')_{ii}}
+ \end{eqnarray*}
+ else when $w_i < 0$ then the update is the negation of
+ \begin{eqnarray*}
+ 2(XX')_{ii}\Delta w_i & =& 0 - \alpha_i\\
+ \Delta w_i &=& \frac{-\alpha_i}{2(XX')_{ii}}
+ \end{eqnarray*}
+ so that the update rule is $\hat{w}_i \leftarrow w_i - \Delta w_i$. In the updating of $w_i$ we ensure that $w_i,\hat{w}_i$ do not have opposite signs, i.e. we will always stop at zero before updating in any new direction.\\
+ \\
+ We continue by taking the second derivative of the Lagrangian in equation (\ref{eqnlarg}) with respect to $\mathbf{e}$, which gives
+ \begin{eqnarray*}
+ \frac{\partial^2\mathcal{L}}{\partial \mathbf{e}^2} & = & 2 K^2,
+ \end{eqnarray*}
+ so for $\mathbf{i}_i$, the unit vector with a $1$ in entry $i$, we have an exact Taylor series expansion
+ \begin{equation*}
+ \mathcal{\hat {L}}({\bfe} + t\mathbf{i}_i) = \mathcal{L}(\bfe) + \frac{\partial\mathcal{L}}{\partial{e}_i} t + \frac{\partial^2\mathcal{L}}{\partial{e}_i} (t)^2
+ \end{equation*}
+ giving us the following update rule for ${e}_i$
+ \begin{eqnarray*}
+ \frac{\partial\mathcal{\hat {L}}({\bfe} + t\mathbf{i}_i)}{\partial t} & = & (2 K^2\mathbf{e}- 2KX'\bfw - \bm{\beta} + \gamma \mathbf{j} )_i + 4 K^2_{ii}t = 0\\
+ \Rightarrow t & = & \frac{1}{4 K^2_{ii}}\left[ 2KX'\bfw - 2 K^2\mathbf{e} + \bm{\beta} - \gamma \mathbf{j}\right]_i,
+ \end{eqnarray*}
+ the update for $e_i$ is $\Delta e_i = t$. The new value of $e_i$ will be
+ \begin{equation*}
+ {\hat {e}}_i \leftarrow {e}_i + \frac{1}{4 K^2_{ii}}\left[ 2KX'\bfw - 2 K^2\mathbf{e} + \bm{\beta} - \gamma \mathbf{j}\right]_i,
+ \end{equation*}
+ again ensuring that $0 \leq \hat{e}_i \leq 1$.
+
+ \begin{algorithm}[tp]
+ \scriptsize
+ \caption{The SCCA algorithm}
+ \label{alg:scca}
+ {\bf Input}: Data matrix $\mathbf{X}\in \mathbb{R}^{m \times \ell}$, kernel matrix $\mathbf{K}\in\mathbb{R}^{\ell \times \ell}$ and the value $k$.
+ \begin{algorithmic}
+ \STATE $\%$ Initialisation:
+ \STATE $\mathbf{w} = \mathbf{0}$, $\mathbf{j} = \mathbf{1}$, $\bfe = \mathbf{0}$, $e_k = 1$
+ \STATE $\mu = \frac{1}{m}\sum_i^{m} |( 2XK\bm{e})_i|$, $\gamma = \frac{1}{\ell} \sum_i^{\ell}| (2K^2\bm{e})_i|$
+ \STATE $\bm{\alpha}^- = 2XK\mathbf{e} + \mu \mathbf{j}$
+ \STATE $I = (\bm{\alpha} < \mathbf{0})\ || \ (\bm{\alpha} > 2\mu\mathbf{j})$
+ \STATE
+ \REPEAT
+ \STATE $\%$ Update the found weight values:
+ \STATE Converge over $\mathbf{w}$ using Algorithm \ref{alg:sccaw}
+ \STATE
+ \STATE $\%$ Find the dual values that are to be updated
+ \STATE ${\bm{\beta}} = 2K^2{\mathbf{e}}-2KX'\bfw + \gamma {\bf{j}}$
+ \STATE $J = ({\bm{\beta}} < \mathbf{0})$
+ \STATE
+ \STATE $\%$ Update the found dual projection values
+ \STATE Converge over $\mathbf{e}$ using Algorithm \ref{alg:sccae}
+ \STATE
+ \STATE $\%$ Find the weight values that are to be updated
+ \STATE $\bm{\alpha}^- = 2XK\mathbf{e} - 2 XX'\mathbf{w} + \mu \mathbf{j}$
+ \STATE $I = (\bm{\alpha} < \mathbf{0})\ || \ (\bm{\alpha} > 2\mu\mathbf{j})$
+ \UNTIL{convergence}
+ \STATE
+ \STATE $\bfe = \frac{\bfe}{\|K\bfe\|}$, $\bfw = \frac{\bfw}{\|X'\bfw\|}$
+ \STATE
+ \end{algorithmic}
+ {\bf Output}: Feature directions $\mathbf{w,e}$
+ \end{algorithm}
+
+ \begin{algorithm}[tp]
+ \scriptsize
+ \caption{The SCCA algorithm - Convergence over $\bfw$}
+ \label{alg:sccaw}
+ \begin{algorithmic}
+ \REPEAT
+ \FOR{$i = 1$ to length of $I$}
+ \IF{${\alpha}_{I_i} > 2\mu$}
+ \STATE ${\alpha}_{I_i} = 2\mu$
+ \STATE $\hat{w}_{I_i} \leftarrow w_{I_i} + \frac{1}{2(XX')_{I_i,I_i}}\left[ 2(XK\mathbf{e})_{I_i} - 2 (XX'\bfw)_{I_i} -{\alpha}^-_{I_i}+ \mu \right]$
+ \ELSIF{${\alpha}_{I_i} < 0$}
+ \STATE ${\alpha}_{I_i} = 0$
+ \STATE $\hat{w}_{I_i} \leftarrow w_{I_i} + \frac{1}{2(XX')_{I_i,I_i}}\left[ 2(XK\mathbf{e})_{I_i} - 2 (XX'\bfw)_{I_i} -{\alpha}^-_{I_i} + \mu \right]$
+ \ELSE
+ \IF{$w_{I_i} > 0$}
+ \STATE $\hat{w}_{I_i} \leftarrow w_{I_i} - \frac{2\mu-{\alpha}_{I_i}}{2(XX')_{I_i,I_i}}$
+ \ELSIF{$w_{I_i} < 0$}
+ \STATE $\hat{w}_{I_i} \leftarrow w_{I_i} + \frac{{\alpha}_{I_i}}{2(XX')_{I_i,I_i}}$
+ \ENDIF
+ \ENDIF
+ \IF{$sign(w_{I_{i}}) \neq sign(\hat{w}_{I_i})$}
+ \STATE $w_{I_i} = 0$
+ \ELSE
+ \STATE $w_{I_i} = \hat{w}_{I_i}$
+ \ENDIF
+ \ENDFOR
+ \UNTIL{convergence over $\bfw$}
+ \end{algorithmic}
+ \end{algorithm}
+
+ \begin{algorithm}[tp]
+ \scriptsize
+ \caption{The SCCA algorithm - Convergence over $\bfe$}
+ \label{alg:sccae}
+ \begin{algorithmic}
+ \REPEAT
+ \FOR{$i = 1$ to length of $J$}
+ \IF{$J_i \neq k$}
+ \STATE ${{e}}_{J_i} \leftarrow {{e}}_{J_i} + \frac{1}{4 K^2_{{J_i}{J_i}}}\left[ 2(KX'\bfw)_{J_i} - 2 ( K^2\mathbf{e})_{J_i}-{\gamma}\right]$
+ \IF{${{e}}_{J_i} < 0$}
+ \STATE ${{e}}_{J_i} = 0$
+ \ELSIF{${{e}}_{J_i} > 1$}
+ \STATE ${{e}}_{J_i}= 1$
+ \ENDIF
+ \ENDIF
+ \ENDFOR
+ \UNTIL{convergence over $\mathbf{e}$}
+ \end{algorithmic}
+ \end{algorithm}
+
+ \subsection{SCCA Algorithm}
+ \label{sccaalgsec}
+ Observe that in the initial condition, when $\bfw = \mathbf{0}$, from equations (\ref{sccdev1}) we are able to treat the scale parameters $\mu, \gamma$ as dual Lagrangian variables and set them to
+ \begin{eqnarray*}
+ \mu & = & \frac{1}{m}\sum_i^{m} |( 2XK\bm{e})_i|\\
+ \gamma & =& \frac{1}{\ell} \sum_i^{\ell}| (2K^2\bm{e})_i|.
+ \end{eqnarray*}
+ We emphasise that this is to provide an underlying automatic determination of sparsity and may not be the optimal setting, although we show in Section \ref{sec:hyper} that this method works well in practice. Combining all the pieces, we give the SCCA algorithm as pseudo-code in Algorithm \ref{alg:scca}, which takes $k$ as a parameter.
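+ % Editorial remark (added for clarity; an observation about the initialisation above, not a claim from the
+ % original text): at $\bfw = \mathbf{0}$ we have $\bm{\alpha}^- = 2XK\bfe + \mu\mathbf{j}$, so the bound
+ % $0 \leq \alpha^-_i \leq 2\mu$ is violated exactly when $|(2XK\bfe)_i| > \mu$. Choosing $\mu$ as the mean
+ % of the $|(2XK\bfe)_i|$ therefore selects for updating precisely those coordinates of $\bfw$ whose initial
+ % gradient magnitude (of the smooth part of the objective) is above average.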
+ In order to choose the optimal value of $k$ we would need to run the algorithm with all values of $k$ and select the one giving the best objective value. This would be chosen as the first feature. \\
+ \\
+ To ensure orthogonality of the extracted features \cite{JohnNelloKM}, for each $\bfe_j$ and corresponding $\bfw_j$ we compute the residual matrices ${X}_j$, $j=1,\ldots,\ell$, by projecting the columns of the data onto the orthogonal complement of ${X}_j'({X}_j{X}_j'\bfw_j)$, a procedure known as deflation,
+ \begin{equation*}
+ {X}_{j+1} = X_j\left(I - \mathbf{u}_j\mathbf{p}_j'\right),
+ \end{equation*}
+ where $U$ is a matrix with columns $\mathbf{u}_j = X_jX_j'\bfw_j$ and $P$ is a matrix with columns $\mathbf{p}_j = \frac{X_jX_j'\mathbf{u}_j}{\mathbf{u}_j'X_jX_j'\mathbf{u}_j}$. The extracted projection directions can be computed (following \cite{JohnNelloKM}) as $ U(P'U)^{-1}$. Similarly we deflate for the dual view
+ \begin{equation*}
+ {K}_{j+1} = \left(I - \frac{\tau_j\tau_j'}{\tau_j'\tau_j}\right){K}_j\left(I - \frac{\tau_j\tau_j'}{\tau_j'\tau_j}\right),
+ \end{equation*}
+ where $\tau_j = K_j'(K_j'\bfe_j)$, and compute the projection directions as $B(T'K B)^{-1}T$ where $B$ is a matrix with columns $K_j\bfe_j$ and $T$ has columns $\tau_j$. The deflation procedure is illustrated in pseudocode in Algorithm \ref{completealgo}; for a detailed review of deflation we refer the reader to \cite{JohnNelloKM}.\\
+ \\
+ Checking each value of $k$ at each iteration is computationally impractical. In our experiments we adopt the very simplistic strategy of picking the values of $k$ in numerical order $k = 1,\ldots,\ell$. Clearly, there exist intermediate options of selecting a small subset of values at each stage, running the algorithm for each and selecting the best of this subset. This and other extensions of our work will be the focus of future studies.
+ \begin{algorithm}[tp]
+ \scriptsize
+ \caption{The SCCA algorithm with deflation}
+ \label{completealgo}
+ {\bf Input}: Data matrix $\mathbf{X}\in \mathbb{R}^{m \times \ell}$, kernel matrix $\mathbf{K}\in\mathbb{R}^{\ell \times \ell}$.
+ \begin{algorithmic}
+ \STATE
+ \STATE ${X}_1 = X$, ${K}_1 = K$
+ \FOR{$j = 1$ to $\ell$}
+ \STATE $k = j$
+ \STATE $[\bfe_j,\bfw_j] = $ SCCA\_Algorithm$1(X_j,K_j,k)$
+ \STATE
+ \STATE $\tau_j = K_j'(K_j'\bfe_j)$
+ \STATE $\mathbf{u}_j = X_jX_j'\bfw_j$
+ \STATE $\mathbf{p}_j = \frac{X_jX_j'\mathbf{u}_j}{\mathbf{u}_j'X_jX_j'\mathbf{u}_j}$
+ \STATE
+ \IF{$j < \ell$}
+ \STATE ${K}_{j+1} = \left(I - \frac{\tau_j\tau_j'}{\tau_j'\tau_j}\right){K}_j\left(I - \frac{\tau_j\tau_j'}{\tau_j'\tau_j}\right)$
+ \STATE ${X}_{j+1} = X_j\left(I - \mathbf{u}_j\mathbf{p}_j'\right)$
+ \ENDIF
+ \ENDFOR
+ \STATE
+ \end{algorithmic}
+ \end{algorithm}
+
+
+
+ \section{Experiments}
+ \label{sec5}
+
+ In the following experiments we use two paired English-French and English-Spanish corpora. The English-French corpus consists of $300$ samples with $2637$ English features and $2951$ French features, while the English-Spanish corpus consists of $1,000$ samples with $40,629$ English features and $57,796$ Spanish features. The number of features corresponds to the number of words in each language. Both corpora are pre-processed into a Term Frequency-Inverse Document Frequency (TFIDF) representation followed by zero-meaning (centring) and normalisation. The linear kernel was used for the dual view. The KCCA regularisation parameter value giving the best test performance on the paired corpora was found to be $0.03$. We used this value to ensure that KCCA was not at a disadvantage, since SCCA had no parameters to tune.
+
+ \subsection{Hyperparameter Validation}
+ \label{sec:hyper}
+ In the following section we demonstrate that the proposed approach for automatically determining the regularisation parameter (hyper-parameter) $\mu$ (or alternatively $\gamma$) is sufficient for our purpose. The SCCA problem
+ \begin{equation}
+ \label{eqn:stacca}
+ \min_{\bfw,\bfe} \| X'\bfw - K\bfe\|^2 + \mu\|\bfw\|_1+\gamma\|\tilde{\bfe}\|_1,
+ \end{equation}
+ subject to $\|\bfe\|_\infty = 1$, can be simplified to a general LASSO problem by removing the optimisation over $\bfe$, resulting in
+ \begin{equation*}
+ \min_{\bfw} \| X'\bfw - \mathbf{k}\|^2 + \mu\|\bfw\|_1,
+ \end{equation*}
+ where, given our paired data, ${\bf k}$ is the vector of inner products between the query and the training samples and $X$ is the matrix of samples from the second paired view. This simplified formulation is trivially solved by Algorithm \ref{alg:scca} by ignoring the loops that adapt $\bfe$. The simplification of equation (\ref{eqn:stacca}) allows us to focus on showing that the automatic choice of $\mu$ is close to optimal; the same holds for $\gamma$ and is therefore omitted.\\
+ \\
+ The hyper-parameters control the level of sparsity. Therefore, we test the level of sparsity as a function of the hyper-parameter value. We proceed by creating a new document $d^{*}$ in the paired language that best matches our query\footnote{i.e. given a query in French we want to generate a document in English that best matches the query. The generated document can then be compared to the actual paired English document.} and observe how the change in $\mu$ affects the total number of words being selected. An ``ideal'' $\mu$ would generate a new document, in the paired language, that selects the same number of words as the query's actual paired document. Recall that the data has been mean corrected (centred) and is therefore no longer sparse.\\
+ \\
+ We set $\mu$ in the range $[0.001, 1]$ with an increment of $0.001$ and use a leave-paired-document-out routine for the English-French corpus, which is repeated for all $300$ documents. Figure \ref{gif:larschange} illustrates, for a single query, the effect of the change in $\mu$ on the level of sparsity. We plot the ratio of the total number of selected words to the total number of words in the original document. An ideal choice of $\mu$ would give a ratio of $1$ (the horizontal line), i.e. create a document with exactly the same number of words as the original document; in other words, it would select a $\mu$ such that the cross lies on the plot. We observe that the method for automatically choosing $\mu$ (the vertical line) creates a new document whose word count closely approximates that of the original document.
+ \begin{figure*}[tbhp]
+ \begin{center}
+ \includegraphics[width=0.6\textwidth]{hyper.eps}
+ \end{center}
+ \caption{Document generation for the English-French corpus (visualisation for a single query): We plot the ratio of the total number of selected words to the total number of words in the original document. The horizontal line defines the ``ideal choice'', where the total number of selected words is identical to the total number of words in the original document. The vertical line represents the result using the automatic setting of the hyper-parameter. We observe that the automatic selection of $\mu$ is a good approximation for selecting the level of sparsity.}
+ \label{gif:larschange}
+ \end{figure*}
+
+ In Table \ref{tab:rat} we show that the average ratio of the total number of selected words for each document generated in the paired language is very close to the ``ideal'' level of sparsity, while a non-sparse method (as expected) generates a document with an average of $\approx 28$ times the number of words of the original document. Now that we have established the automatic setting of the hyper-parameters, we proceed to test how `good' the selected words are by means of a mate-retrieval experiment.
+ \begin{table}[htdp]
+ \caption{French-English Corpus: The ratio of the total number of selected words to the actual total number of words in the paired test document, averaged over all queries. The optimal average ratio if we always generate an `ideal' document is $1$.}
+ \begin{center}
+ \begin{tabular}{|c|c|}
+ \hline
+ & Average Selection Ratio\\
+ \hline
+ Automatic setting of $\mu$ & $1.01\pm0.54$\\
+ Non-sparse method & $28.15\pm15.71$\\
+ \hline
+ \end{tabular}
+ \end{center}
+ \label{tab:rat}
+ \end{table}
+
+
+ \subsection{Mate Retrieval}
+ Our experiment is mate-retrieval, in which a document from the test corpus of one language is considered as the query and only the mate document from the other language is considered relevant. In the following experiments the results are an average of retrieving the mate for both English and French (English and Spanish) and have been repeated $10$ times with a random train-test split.\\
+ \\
+ We compute the mate-retrieval by projecting the query document as well as the paired (other language) test documents into the learnt semantic space, where the inner product between the projected data is computed. Let $q$ be the query in one language and $K_s$ the kernel matrix of inner products between the second language's testing and training documents,
+ \begin{equation*}
+ l = \left< \frac{q'\bfw}{\|q'\bfw\|}, \frac{K_s \bfe}{\|K_s \bfe\|} \right>.
+ \end{equation*}
+ The resulting inner products $l$ are then sorted by value. We measure the success of the mate-retrieval task using average precision, which assesses where the correct mate is located within the sorted inner products $l$. Let $I_j$ be the rank (index location within the sorted list) of the retrieved mate for query $q_j$; the average precision $p$ is computed as
+ \begin{equation*}
+ p = \frac{1}{M}\sum_{j=1}^M \frac{1}{I_j},
+ \end{equation*}
+ where $M$ is the number of query documents.\\
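+ % Editorial example (added for clarity): if the mates of three queries are retrieved at ranks $1$, $2$ and
+ % $4$, then $p = \frac{1}{3}\left(1 + \frac{1}{2} + \frac{1}{4}\right) \approx 0.58$; $p = 1$ corresponds to
+ % every mate being retrieved at rank $1$.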
502
+ \\
503
+ We start by giving the results for the English-French mate-retrieval as shown in Figure \ref{fig:EnFr}. The left plot depicts the average precision ($\pm$ standard deviation) when $50$ documents are used for training and the remaining $250$ are used as test queries. The right plot in Figure \ref{fig:EnFr} gives the average precision ($\pm$ standard deviation) when $100$ documents are used for training and the remaining $200$ for testing. It is interesting to observe that even though SCCA does not learn the common semantic space using all the features (average plotted in Figure \ref{fig:EnFrp}) for either ML primal or dual views (although SCCA will use full dual features when using the full number of projections) its error is extremely similar to that of KCCA and in fact converges with it when a sufficient number of projections are used. It is important to emphasise that KCCA uses the full number of documents ($50$ and $100$) and the full number of words (an average of $2794$ for both languages) to learn the common semantic space. For example, following the left plot in Figure \ref{fig:EnFr} and the additional plots in Figure \ref{fig:EnFrp} we are able to observe that when $35$ projections are used KCCA and SCCA show a similar error. However, SCCA uses approximately $142$ words and $42$ documents to learn the semantic space, while KCCA uses $2794$ words and $50$ documents.\\
504
+ \\
505
+ The second mate-retrieval experiment uses the English-Spanish paired corpus. In each run we randomly split the $1000$ samples into $100$ training and $900$ testing paired documents. The results are plotted in Figure \ref{fig:EnEs} where we are clearly able to observe SCCA outperforming KCCA throughout. We believe this to be a good example of when too many features hinder the learnt semantic space, also explaining the difference in the results obtained from the English-French corpus as the number of features are significantly smaller in that case. The average level of SCCA sparsity is plotted in Figure \ref{fig:EnEsp}. In comparison to KCCA which uses all words ($49,212$) SCCA uses a maximum of $460$ words.\\
506
+ \\
507
+ The performance of SCCA, especially in the latter English-Spanish experiment, shows that we are indeed able to extract meaningful semantics between the two languages, using only the relevant features.
508
+
509
+ \begin{figure*}[tbhp]
510
+ \begin{center}
511
+ \includegraphics[width=0.8\textwidth]{EnFr.eps}
512
+ \end{center}
513
+ \caption{English-French: The average precision error (1-$p$) with $\pm$ standard division error bars for SCCA and KCCA for different number of projections used for the mate-retrieval task. The left figure is for $50$ training and $250$ testing documents while the right figure is for $100$ training and $200$ testing documents.}
514
+ \label{fig:EnFr}
515
+ \end{figure*}
516
+ \begin{figure*}[tbhp]
517
+ \begin{center}
518
+ \includegraphics[width=0.8\textwidth]{EnFrPlot.eps}
519
+ \end{center}
520
+ \caption{English-French: Level of Sparsity - The following figure is an extension of Figure \ref{fig:EnFr} which uses $50$ documents for training. The left figure plots the average number of words used while the right figure plots the average number of documents used with the number of projections. For reference, KCCA uses all the words (average of $2794$) and documents ($50$) for all number of projections.}
521
+ \label{fig:EnFrp}
522
+ \end{figure*}
523
+ \begin{figure*}[tbhp]
524
+ \begin{center}
525
+ \includegraphics[width=0.7\textwidth]{EnEs.eps}
526
+ \end{center}
527
+ \caption{English-Spanish: The average precision error ($1-p$) with $\pm$ standard deviation error bars for SCCA and KCCA for different numbers of projections used in the mate-retrieval task. We use $100$ documents for training and $900$ for testing.}
528
+ \label{fig:EnEs}
529
+ \end{figure*}
530
+ \begin{figure*}[tbhp]
531
+ \begin{center}
532
+ \includegraphics[width=0.9\textwidth]{EnEsPlo.eps}
533
+ \end{center}
534
+ \caption{English-Spanish: Level of Sparsity - This figure extends Figure \ref{fig:EnEs} for the setting with $100$ training documents. The left figure plots the average number of words used, while the right figure plots the average number of documents used, against an increasing number of projections. For reference, KCCA uses all the words (an average of $49,212$) and all documents ($100$) for every number of projections.}
535
+ \label{fig:EnEsp}
536
+ \end{figure*}
537
+
538
+ Despite these already impressive results, our intuition is that even better results are attainable if the hyper-parameters were tuned optimally. The question of hyper-parameter optimality is left for future research. Nevertheless, the main gain of SCCA appears to be the sparsity and interpretability of the features.
539
+
540
+
541
+ \section{Conclusions}
542
+ \label{sec6}
543
+ Despite being introduced in $1936$, CCA has proven to be an inspiration for new and continuing research. In this paper we analyse the formulation of CCA and address the issues of sparsity as well as convexity by presenting a novel SCCA method formulated as a convex least squares approach. We also provide a different perspective on solving CCA by using an ML primal-dual formulation, which focuses on the scenario in which one is interested in (or limited to) an ML primal representation for the first view while having an ML dual representation for the second view. A greedy optimisation algorithm is derived. \\
544
+ \\
545
+ The method is demonstrated on bilingual English-French and English-Spanish paired corpora for mate retrieval. The true capacity of SCCA becomes visible when the number of features becomes extremely large, as SCCA is able to learn the common semantic space using a very sparse representation of the ML primal-dual views.\\
546
+ \\
547
+ The paper's raison d'\^etre was to propose a new, efficient algorithm for solving the sparse CCA problem. We believe that, in addressing this problem, new and interesting questions have surfaced:
548
+ \begin{itemize}
549
+ \item How can the hyperparameter values $\mu,\gamma$ be computed automatically so as to achieve optimal results?
550
+ \item How do we set $k$ for each $\bfe_j$ when we wish to compute fewer than $\ell$ projections?
551
+ \item Extending SCCA to an ML primal-primal (ML dual-dual) framework.
552
+ \end{itemize}
553
+ We believe this work to be the initial stage of a new sparse framework to be explored and extended.
554
+
555
+
556
+
557
+
558
+ \section*{Acknowledgment}
559
+ David R. Hardoon is supported by the EPSRC project Le Strum, EP-D063612-1. We would like to thank Zakria Hussain and Nic Schraudolph for insightful discussions. This publication only reflects the authors' views.
560
+
561
+ \bibliographystyle{mslapa}
562
+
563
+
564
+ \begin{thebibliography}{}
565
+
566
+ \bibitem[\protect\citeauthoryear{Akaho}{Akaho}{2001}{}]{akaho01}
567
+ Akaho, S. (2001).
568
+ \newblock A kernel method for canonical correlation analysis.
569
+ \newblock In {\em International Meeting of Psychometric Society}. Osaka.
570
+
571
+ \bibitem[\protect\citeauthoryear{Bach \& Jordan}{Bach \&
572
+ Jordan}{2002}{}]{BachJordan}
573
+ Bach, F. \& Jordan, M. (2002).
574
+ \newblock Kernel independent component analysis.
575
+ \newblock {\em Journal of Machine Learning Research}, {\em 3}, 1--48.
576
+
577
+ \bibitem[\protect\citeauthoryear{Breiman \& Friedman}{Breiman \&
578
+ Friedman}{1985}{}]{Breiman}
579
+ Breiman, L. \& Friedman, L.~H. (1985).
580
+ \newblock Estimating optimal transformations for multiple regression and
581
+ correlation.
582
+ \newblock {\em Journal of the American Statistical Association}, {\em 80},
583
+ 580--598.
584
+
585
+ \bibitem[\protect\citeauthoryear{Chen, Donoho \& Saunders}{Chen
586
+ et~al.}{1999}{}]{chen99}
587
+ Chen, S.~S., Donoho, D.~L. \& Saunders, M.~A. (1999).
588
+ \newblock Atomic decomposition by basis pursuit.
589
+ \newblock {\em SIAM Journal on Scientific Computing}, {\em 20(1)}, 33--61.
590
+
591
+ \bibitem[\protect\citeauthoryear{d'Aspremont, Ghaoui, Jordan \&
592
+ Lanckriet}{d'Aspremont et~al.}{2007}{}]{Dspremont}
593
+ d'Aspremont, A., Ghaoui, L.~E., Jordan, M.~I. \& Lanckriet, G. (2007).
594
+ \newblock A direct formulation for sparse pca using semidefinite programming.
595
+ \newblock {\em SIAM Review}, {\em 49(3)}, 434--448.
596
+
597
+ \bibitem[\protect\citeauthoryear{Dhanjal, Gunn \& Shawe-Taylor}{Dhanjal
598
+ et~al.}{2006}{}]{dhanjal06sparse}
599
+ Dhanjal, C., Gunn, S.~R. \& Shawe-Taylor, J. (2006).
600
+ \newblock Sparse feature extraction using generalised partial least squares.
601
+ \newblock In {\em Proceedings of the {I}{E}{E}{E} International Workshop on
602
+ Machine Learning for Signal Processing} (pp.\ 27--32).
603
+
604
+ \bibitem[\protect\citeauthoryear{Friman, Borga, Lundberg \& Knutsson}{Friman
605
+ et~al.}{2001}{a}]{fblk01}
606
+ Friman, O., Borga, M., Lundberg, P. \& Knutsson, H. (2001a).
607
+ \newblock A correlation framework for functional {MRI} data analysis.
608
+ \newblock In {\em Proceedings of the 12th Scandinavian Conference on Image
609
+ Analysis}. Bergen, Norway.
610
+
611
+ \bibitem[\protect\citeauthoryear{Friman, Carlsson, Lundberg, Borga \&
612
+ Knutsson}{Friman et~al.}{2001}{b}]{fclbk01}
613
+ Friman, O., Carlsson, J., Lundberg, P., Borga, M. \& Knutsson, H. (2001b).
614
+ \newblock Detection of neural activity in functional {MRI} using canonical
615
+ correlation analysis.
616
+ \newblock {\em Magnetic Resonance in Medicine}, {\em 45(2)}, 323--330.
617
+
618
+ \bibitem[\protect\citeauthoryear{Fukumizu, Bach \& Gretton}{Fukumizu
619
+ et~al.}{2007}{}]{Fukumizu}
620
+ Fukumizu, K., Bach, F.~R. \& Gretton, A. (2007).
621
+ \newblock Consistency of kernel canonical correlation analysis.
622
+ \newblock {\em Journal of Machine Learning Research}, {\em 8}, 361--383.
623
+
624
+ \bibitem[\protect\citeauthoryear{Fyfe \& Lai}{Fyfe \& Lai}{2000}{a}]{fyfe-ica}
625
+ Fyfe, C. \& Lai, P. (2000a).
626
+ \newblock {ICA} using kernel canonical correlation analysis.
627
+ \newblock In {\em Proc. of Int. Workshop on Independent Component Analysis and
628
+ Blind Signal Separation}.
629
+
630
+ \bibitem[\protect\citeauthoryear{Fyfe \& Lai}{Fyfe \&
631
+ Lai}{2000}{b}]{pei-kernel}
632
+ Fyfe, C. \& Lai, P.~L. (2000b).
633
+ \newblock Kernel and nonlinear canonical correlation analysis.
634
+ \newblock In {\em IEEE-INNS-ENNS International Joint Conference on Neural
635
+ Networks}, Volume~4.
636
+
637
+ \bibitem[\protect\citeauthoryear{Hardoon, Mourao-Miranda, Brammer \&
638
+ Shawe-Taylor}{Hardoon et~al.}{2007}{}]{Hardoon07}
639
+ Hardoon, D.~R., Mourao-Miranda, J., Brammer, M. \& Shawe-Taylor, J. (2007).
640
+ \newblock Unsupervised analysis of {fMRI} data using kernel canonical
641
+ correlation.
642
+ \newblock {\em NeuroImage}, {\em 37 (4)}, 1250--1259.
643
+
644
+ \bibitem[\protect\citeauthoryear{Hardoon, Saunders, Szedmak \&
645
+ Shawe-Taylor}{Hardoon et~al.}{2006}{}]{Hardoon06a}
646
+ Hardoon, D.~R., Saunders, C., Szedmak, S. \& Shawe-Taylor, J. (2006).
647
+ \newblock A correlation approach for automatic image annotation.
648
+ \newblock In {\em Springer LNAI 4093} (pp.\ 681--692).
649
+
650
+ \bibitem[\protect\citeauthoryear{Hardoon \& Shawe-Taylor}{Hardoon \&
651
+ Shawe-Taylor}{2003}{}]{Hardoon03CBMI}
652
+ Hardoon, D.~R. \& Shawe-Taylor, J. (2003).
653
+ \newblock {KCCA} for different level precision in content-based image
654
+ retrieval.
655
+ \newblock In {\em Proceedings of Third International Workshop on Content-Based
656
+ Multimedia Indexing}. IRISA, Rennes, France.
657
+
658
+ \bibitem[\protect\citeauthoryear{Hardoon \& Shawe-Taylor}{Hardoon \&
659
+ Shawe-Taylor}{In Press}{}]{DRH-JST_08}
660
+ Hardoon, D.~R. \& Shawe-Taylor, J. (In Press).
661
+ \newblock Convergence analysis of kernel canonical correlation analysis: Theory
662
+ and practice.
663
+ \newblock {\em Machine Learning}.
664
+
665
+ \bibitem[\protect\citeauthoryear{Hardoon, Szedmak \& Shawe-Taylor}{Hardoon
666
+ et~al.}{2004}{}]{Hardoon04}
667
+ Hardoon, D.~R., Szedmak, S. \& Shawe-Taylor, J. (2004).
668
+ \newblock Canonical correlation analysis: an overview with application to
669
+ learning methods.
670
+ \newblock {\em Neural Computation}, {\em 16}, 2639--2664.
671
+
672
+ \bibitem[\protect\citeauthoryear{Hastie \& Tibshirani}{Hastie \&
673
+ Tibshirani}{1990}{}]{Hastie}
674
+ Hastie, T.~J. \& Tibshirani, R.~J. (1990).
675
+ \newblock {\em Generalized Additive Models}.
676
+ \newblock Chapman \& Hall/CRC.
677
+
678
+ \bibitem[\protect\citeauthoryear{Heiler \& Schn\"{o}rr}{Heiler \&
679
+ Schn\"{o}rr}{2006}{}]{Heiler}
680
+ Heiler, M. \& Schn\"{o}rr, C. (2006).
681
+ \newblock Learning sparse representations by non-negative matrix factorization
682
+ and sequential cone programming.
683
+ \newblock {\em The Journal of Machine Learning Research}, {\em 7}, 1385--1407.
684
+
685
+ \bibitem[\protect\citeauthoryear{Hotelling}{Hotelling}{1936}{}]{Hotelling36}
686
+ Hotelling, H. (1936).
687
+ \newblock Relations between two sets of variates.
688
+ \newblock {\em Biometrika}, {\em 28}, 312--377.
689
+
690
+ \bibitem[\protect\citeauthoryear{Kettenring}{Kettenring}{1971}{}]{jrketter}
691
+ Kettenring, J.~R. (1971).
692
+ \newblock Canonical analysis of several sets of variables.
693
+ \newblock {\em Biometrika}, {\em 58}, 433--451.
694
+
695
+ \bibitem[\protect\citeauthoryear{Moghaddam, Weiss \& Avidan}{Moghaddam
696
+ et~al.}{2006}{}]{moghaddam-spectral}
697
+ Moghaddam, B., Weiss, Y. \& Avidan, S. (2006).
698
+ \newblock Spectral bounds for sparse pca: Exact and greedy algorithms.
699
+ \newblock In {\em Neural Information Processing Systems (NIPS 06)}.
700
+
701
+ \bibitem[\protect\citeauthoryear{Shawe-Taylor \& Cristianini}{Shawe-Taylor \&
702
+ Cristianini}{2004}{}]{JohnNelloKM}
703
+ Shawe-Taylor, J. \& Cristianini, N. (2004).
704
+ \newblock {\em Kernel Methods for Pattern Analysis}.
705
+ \newblock Cambridge University Press.
706
+
707
+ \bibitem[\protect\citeauthoryear{Sriperumbudur, Torres \&
708
+ Lanckriet}{Sriperumbudur et~al.}{2007}{}]{sriperumbudurICML07}
709
+ Sriperumbudur, B.~K., Torres, D. \& Lanckriet, G. (2007).
710
+ \newblock Sparse eigen methods by d.c. programming.
711
+ \newblock In {\em Proceedings of the 24th International Conference on Machine
712
+ Learning} (pp.\ 831--838). Morgan Kaufmann, San Francisco, CA.
713
+
714
+ \bibitem[\protect\citeauthoryear{Szedmak, Bie \& Hardoon}{Szedmak
715
+ et~al.}{2007}{}]{Szedmak07}
716
+ Szedmak, S., Bie, T.~D. \& Hardoon, D.~R. (2007).
717
+ \newblock A metamorphosis of canonical correlation analysis into multivariate
718
+ maximum margin learning.
719
+ \newblock In {\em 15'th European Symposium on Artificial Neural Networks
720
+ (ESANN)}.
721
+
722
+ \bibitem[\protect\citeauthoryear{Tibshirani}{Tibshirani}{1994}{}]{Tibshirani94}
723
+ Tibshirani, R. (1994).
724
+ \newblock Regression shrinkage and selection via the lasso.
725
+ \newblock Technical report, University of Toronto.
726
+
727
+ \bibitem[\protect\citeauthoryear{Torres, Turnbull, Barrington \&
728
+ Lanckriet}{Torres et~al.}{2007}{}]{Torres}
729
+ Torres, D., Turnbull, D., Barrington, L. \& Lanckriet, G. (2007).
730
+ \newblock Identifying words that are musically meaningful.
731
+ \newblock In {\em Proceedings of the 8th International Conference on Music
732
+ Information Retrieval}.
733
+
734
+ \bibitem[\protect\citeauthoryear{Weston, Elisseeff, Scholkopf \&
735
+ Tipping}{Weston et~al.}{2003}{}]{Weston}
736
+ Weston, J., Elisseeff, A., Scholkopf, B. \& Tipping, M. (2003).
737
+ \newblock Use of the zero norm with linear models and kernel method.
738
+ \newblock {\em Journal of Machine Learning Research}, {\em 3}, 1439--1461.
739
+
740
+ \bibitem[\protect\citeauthoryear{Zou, Hastie \& Tibshirani}{Zou
741
+ et~al.}{2004}{}]{zou04sparse}
742
+ Zou, H., Hastie, T. \& Tibshirani, R. (2004).
743
+ \newblock Sparse principal component analysis.
744
+ \newblock Technical report, Statistics department, Stanford University.
745
+
746
+ \end{thebibliography}
747
+
748
+
749
+
750
+ \end{document}
papers/0909/0909.0910.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1011/1011.5270.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1206/1206.5538.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1210/1210.1207.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1309/1309.6392.tex ADDED
@@ -0,0 +1,999 @@
1
+ \documentclass[12pt]{article}
2
+
3
+
4
+
5
+
6
+ \clearpage{}
7
+
8
+ \usepackage[auth-sc,affil-sl]{authblk}
9
+ \usepackage{graphicx}
10
+ \usepackage{color}
11
+ \usepackage{amsmath}
12
+ \usepackage{dsfont}
13
+ \usepackage[round]{natbib}
14
+ \usepackage{amssymb}
15
+ \usepackage{abstract}
16
+ \usepackage{cancel}
17
+ \usepackage[margin=1in]{geometry} \usepackage{enumerate}
18
+ \usepackage{listings}
19
+ \usepackage{array}
20
+ \usepackage{algorithm}
21
+ \usepackage{float}
22
+ \usepackage{multirow}
23
+ \usepackage{algorithm}
24
+ \usepackage{algorithmicx}
25
+ \usepackage{algpseudocode}
26
+ \usepackage{subcaption}
27
+ \usepackage{etoolbox}
28
+
29
+ \newcommand{\qu}[1]{``#1''}
30
+
31
+ \newcommand{\treet}[1]{\text{\scriptsize \PHplaneTree}_{#1}}
32
+ \newcommand{\leaf}{\text{\scriptsize \PHrosette}}
33
+
34
+ \lstset{language = R, numbers = left, backgroundcolor = \color{backgcode}, title = \lstname, breaklines = true, basicstyle = \small, commentstyle = \footnotesize\color{Brown}, stringstyle = \ttfamily, tabsize = 2, fontadjust = true, showspaces = false, showstringspaces = false, texcl = true, numbers = none}
35
+
36
+ \newcounter{probnum}
37
+ \setcounter{probnum}{1}
38
+
39
+ \def\changemargin#1#2{\list{}{\rightmargin#2\leftmargin#1}\item[]}
40
+ \let\endchangemargin=\endlist
41
+
42
+ \allowdisplaybreaks
43
+
44
+ \definecolor{gray}{rgb}{0.7,0.7,0.7}
45
+ \newcommand{\ingray}[1]{\color{gray}\textbf{#1} \color{black}}
46
+ \definecolor{black}{rgb}{0,0,0}
47
+ \definecolor{white}{rgb}{1,1,1}
48
+ \definecolor{blue}{rgb}{0,0,0.7}
49
+ \newcommand{\inblue}[1]{\color{blue}\textbf{#1} \color{black}}
50
+ \definecolor{green}{rgb}{0.133,0.545,0.133}
51
+ \newcommand{\ingreen}[1]{\color{green}\textbf{#1} \color{black}}
52
+ \definecolor{yellow}{rgb}{1,0.549,0}
53
+ \newcommand{\inyellow}[1]{\color{yellow}\textbf{#1} \color{black}}
54
+ \definecolor{red}{rgb}{1,0.133,0.133}
55
+ \newcommand{\inred}[1]{\color{red}\textbf{#1} \color{black}}
56
+ \definecolor{purple}{rgb}{0.58,0,0.827}
57
+ \newcommand{\inpurple}[1]{\color{purple}\textbf{#1} \color{black}}
58
+ \definecolor{brown}{rgb}{0.55,0.27,0.07}
59
+ \newcommand{\inbrown}[1]{\color{brown}\textbf{#1} \color{black}}
60
+
61
+ \definecolor{backgcode}{rgb}{0.97,0.97,0.8}
62
+ \definecolor{Brown}{cmyk}{0,0.81,1,0.60}
63
+ \definecolor{OliveGreen}{cmyk}{0.64,0,0.95,0.40}
64
+ \definecolor{CadetBlue}{cmyk}{0.62,0.57,0.23,0}
65
+
66
+ \DeclareMathOperator*{\argmax}{arg\,max~}
67
+ \DeclareMathOperator*{\argmin}{arg\,min~}
68
+ \DeclareMathOperator*{\argsup}{arg\,sup~}
69
+ \DeclareMathOperator*{\arginf}{arg\,inf~}
70
+ \DeclareMathOperator*{\convolution}{\text{\Huge{$\ast$}}}
71
+ \newcommand{\infconv}[2]{\convolution^\infty_{#1 = 1} #2}
72
+
73
+
74
+
75
+
76
+ \newcommand{\bv}[1]{\boldsymbol{#1}}
77
+
78
+ \newcommand{\BetaDistrConst}{\dfrac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}}
79
+ \newcommand{\NormDistrConst}{\dfrac{1}{\sqrt{2\pi\sigma^2}}}
80
+
81
+ \newcommand{\tsq}{\tau^2}
82
+ \newcommand{\tsqh}{\hat{\tau}^2}
83
+ \newcommand{\sigsq}{\sigma^2}
84
+ \newcommand{\sigsqsq}{\parens{\sigma^2}^2}
85
+ \newcommand{\sigsqovern}{\dfrac{\sigsq}{n}}
86
+ \newcommand{\tausq}{\tau^2}
87
+ \newcommand{\tausqalpha}{\tau^2_\alpha}
88
+ \newcommand{\tausqbeta}{\tau^2_\beta}
89
+ \newcommand{\tausqsigma}{\tau^2_\sigma}
90
+ \newcommand{\betasq}{\beta^2}
91
+ \newcommand{\sigsqvec}{\bv{\sigma}^2}
92
+ \newcommand{\sigsqhat}{\hat{\sigma}^2}
93
+ \newcommand{\Omegahat}{\hat{\Omega}}
94
+ \newcommand{\sigsqhatmlebayes}{\sigsqhat_{\text{Bayes, MLE}}}
95
+ \newcommand{\sigsqhatmle}[1]{\sigsqhat_{#1, \text{MLE}}}
96
+ \newcommand{\bSigma}{\bv{\Sigma}}
97
+ \newcommand{\bSigmainv}{\bSigma^{-1}}
98
+ \newcommand{\thetavec}{\bv{\theta}}
99
+ \newcommand{\thetahat}{\hat{\theta}}
100
+ \newcommand{\thetahatmle}{\hat{\theta}_{\mathrm{MLE}}}
101
+ \newcommand{\thetavechatmle}{\hat{\thetavec}_{\mathrm{MLE}}}
102
+ \newcommand{\pihatmle}{\hat{\pi}_{\mathrm{MLE}}}
103
+ \newcommand{\muhat}{\hat{\mu}}
104
+ \newcommand{\musq}{\mu^2}
105
+ \newcommand{\muvec}{\bv{\mu}}
106
+ \newcommand{\pivec}{\bv{\pi}}
107
+ \newcommand{\muhatmle}{\muhat_{\text{MLE}}}
108
+ \newcommand{\lambdahat}{\hat{\lambda}}
109
+ \newcommand{\lambdahatmle}{\lambdahat_{\text{MLE}}}
110
+ \newcommand{\lambdahatmleone}{\lambdahat_{\text{MLE}, 1}}
111
+ \newcommand{\lambdahatmletwo}{\lambdahat_{\text{MLE}, 2}}
112
+ \newcommand{\etavec}{\bv{\eta}}
113
+ \newcommand{\alphavec}{\bv{\alpha}}
114
+ \newcommand{\minimaxdec}{\delta^*_{\mathrm{mm}}}
115
+ \newcommand{\ybar}{\bar{y}}
116
+ \newcommand{\xbar}{\bar{x}}
117
+ \newcommand{\Xbar}{\bar{X}}
118
+ \newcommand{\Ybar}{\bar{Y}}
119
+
120
+ \newcommand{\iid}{~{\buildrel iid \over \sim}~}
121
+ \newcommand{\inddist}{~{\buildrel ind \over \sim}~}
122
+ \newcommand{\approxdist}{~~{\buildrel approx \over \sim}~~}
123
+ \newcommand{\equalsindist}{~{\buildrel d \over =}~}
124
+ \newcommand{\lik}[1]{L\parens{#1}}
125
+ \newcommand{\loglik}[1]{\ell\parens{#1}}
126
+ \newcommand{\thetahatkminone}{\thetahat^{(k-1)}}
127
+ \newcommand{\thetahatkplusone}{\thetahat^{(k+1)}}
128
+ \newcommand{\thetahatk}{\thetahat^{(k)}}
129
+ \newcommand{\half}{\frac{1}{2}}
130
+ \newcommand{\third}{\frac{1}{3}}
131
+ \newcommand{\twothirds}{\frac{2}{3}}
132
+ \newcommand{\fourth}{\frac{1}{4}}
133
+ \newcommand{\fifth}{\frac{1}{5}}
134
+ \newcommand{\sixth}{\frac{1}{6}}
135
+
136
+ \newcommand{\A}{\bv{A}}
137
+ \newcommand{\At}{\A^T}
138
+ \newcommand{\Ainv}{\inverse{\A}}
139
+ \newcommand{\B}{\bv{B}}
140
+ \newcommand{\C}{\bv{C}}
141
+ \newcommand{\K}{\bv{K}}
142
+ \newcommand{\Kt}{\K^T}
143
+ \newcommand{\Kinv}{\inverse{K}}
144
+ \newcommand{\Kinvt}{(\Kinv)^T}
145
+ \newcommand{\M}{\bv{M}}
146
+ \newcommand{\Bt}{\B^T}
147
+ \newcommand{\Q}{\bv{Q}}
148
+ \newcommand{\E}{\bv{E}}
149
+ \newcommand{\Et}{\E^\top}
150
+ \newcommand{\Qt}{\Q^T}
151
+ \newcommand{\R}{\bv{R}}
152
+ \newcommand{\Rt}{\R^\top}
153
+ \newcommand{\Z}{\bv{Z}}
154
+ \newcommand{\X}{\bv{X}}
155
+ \renewcommand{\H}{\bv{H}}
156
+ \newcommand{\Xsub}{\X_{\text{(sub)}}}
157
+ \newcommand{\Xsubadj}{\X_{\text{(sub,adj)}}}
158
+ \newcommand{\I}{\bv{I}}
159
+ \newcommand{\J}{\bv{J}}
160
+ \newcommand{\0}{\bv{0}}
161
+ \newcommand{\1}{\bv{1}}
162
+ \newcommand{\Y}{\bv{Y}}
163
+ \newcommand{\Yt}{\Y^\top}
164
+ \newcommand{\tvec}{\bv{t}}
165
+ \newcommand{\sigsqI}{\sigsq\I}
166
+ \renewcommand{\P}{\bv{P}}
167
+ \newcommand{\Psub}{\P_{\text{(sub)}}}
168
+ \newcommand{\Pt}{\P^T}
169
+ \newcommand{\Pii}{P_{ii}}
170
+ \newcommand{\Pij}{P_{ij}}
171
+ \newcommand{\IminP}{(\I-\P)}
172
+ \newcommand{\Xt}{\bv{X}^T}
173
+ \newcommand{\XtX}{\Xt\X}
174
+ \newcommand{\XtXinv}{\parens{\Xt\X}^{-1}}
175
+ \newcommand{\XtXinvXt}{\XtXinv\Xt}
176
+ \newcommand{\XXtXinvXt}{\X\XtXinvXt}
177
+ \newcommand{\x}{\bv{x}}
178
+ \newcommand{\onevec}{\bv{1}}
179
+ \newcommand{\zerovec}{\bv{0}}
180
+ \newcommand{\onevectr}{\onevec^\top}
181
+ \newcommand{\oneton}{1, \ldots, n}
182
+ \newcommand{\yoneton}{y_1, \ldots, y_n}
183
+ \newcommand{\yonetonorder}{y_{(1)}, \ldots, y_{(n)}}
184
+ \newcommand{\Yoneton}{Y_1, \ldots, Y_n}
185
+ \newcommand{\iinoneton}{i \in \braces{\oneton}}
186
+ \newcommand{\onetom}{1, \ldots, m}
187
+ \newcommand{\jinonetom}{j \in \braces{\onetom}}
188
+ \newcommand{\xoneton}{x_1, \ldots, x_n}
189
+ \newcommand{\Xoneton}{X_1, \ldots, X_n}
190
+ \newcommand{\xt}{\x^T}
191
+ \newcommand{\y}{\bv{y}}
192
+ \newcommand{\yt}{\y^T}
193
+ \newcommand{\n}{\bv{n}}
194
+ \renewcommand{\c}{\bv{c}}
195
+ \newcommand{\ct}{\c^T}
196
+ \newcommand{\tstar}{\bv{t}^*}
197
+ \renewcommand{\u}{\bv{u}}
198
+ \renewcommand{\v}{\bv{v}}
199
+ \renewcommand{\a}{\bv{a}}
200
+ \newcommand{\s}{\bv{s}}
201
+ \newcommand{\yadj}{\y_{\text{(adj)}}}
202
+ \newcommand{\xjadj}{\x_{j\text{(adj)}}}
203
+ \newcommand{\xjadjM}{\x_{j \perp M}}
204
+ \newcommand{\yhat}{\hat{\y}}
205
+ \newcommand{\yhatsub}{\yhat_{\text{(sub)}}}
206
+ \newcommand{\yhatstarnew}{\yhatstar_{\text{new}}}
207
+ \newcommand{\z}{\bv{z}}
208
+ \newcommand{\zt}{\z^T}
209
+ \newcommand{\bb}{\bv{b}}
210
+ \newcommand{\bbt}{\bb^T}
211
+ \newcommand{\bbeta}{\bv{\beta}}
212
+ \newcommand{\beps}{\bv{\epsilon}}
213
+ \newcommand{\bepst}{\beps^T}
214
+ \newcommand{\e}{\bv{e}}
215
+ \newcommand{\Mofy}{\M(\y)}
216
+ \newcommand{\KofAlpha}{K(\alpha)}
217
+ \newcommand{\ellset}{\mathcal{L}}
218
+ \newcommand{\oneminalph}{1-\alpha}
219
+ \newcommand{\SSE}{\text{SSE}}
220
+ \newcommand{\SSEsub}{\text{SSE}_{\text{(sub)}}}
221
+ \newcommand{\MSE}{\text{MSE}}
222
+ \newcommand{\RMSE}{\text{RMSE}}
223
+ \newcommand{\SSR}{\text{SSR}}
224
+ \newcommand{\SST}{\text{SST}}
225
+ \newcommand{\JSest}{\delta_{\text{JS}}(\x)}
226
+ \newcommand{\Bayesest}{\delta_{\text{Bayes}}(\x)}
227
+ \newcommand{\EmpBayesest}{\delta_{\text{EmpBayes}}(\x)}
228
+ \newcommand{\BLUPest}{\delta_{\text{BLUP}}}
229
+ \newcommand{\MLEest}[1]{\hat{#1}_{\text{MLE}}}
230
+
231
+ \newcommand{\twovec}[2]{\bracks{\begin{array}{c} #1 \\ #2 \end{array}}}
232
+ \newcommand{\threevec}[3]{\bracks{\begin{array}{c} #1 \\ #2 \\ #3 \end{array}}}
233
+ \newcommand{\fivevec}[5]{\bracks{\begin{array}{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \end{array}}}
234
+ \newcommand{\twobytwomat}[4]{\bracks{\begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array}}}
235
+ \newcommand{\threebytwomat}[6]{\bracks{\begin{array}{cc} #1 & #2 \\ #3 & #4 \\ #5 & #6 \end{array}}}
236
+
237
+ \newcommand{\thetainthetas}{\theta \in \Theta}
238
+ \newcommand{\reals}{\mathbb{R}}
239
+ \newcommand{\complexes}{\mathbb{C}}
240
+ \newcommand{\rationals}{\mathbb{Q}}
241
+ \newcommand{\integers}{\mathbb{Z}}
242
+ \newcommand{\naturals}{\mathbb{N}}
243
+ \newcommand{\forallninN}{~~\forall n \in \naturals}
244
+ \newcommand{\forallxinN}[1]{~~\forall #1 \in \reals}
245
+ \newcommand{\matrixdims}[2]{\in \reals^{\,#1 \times #2}}
246
+ \newcommand{\inRn}[1]{\in \reals^{\,#1}}
247
+ \newcommand{\mathimplies}{\quad\Rightarrow\quad}
248
+ \newcommand{\mathequiv}{\quad\Leftrightarrow\quad}
249
+ \newcommand{\eqncomment}[1]{\quad \text{(#1)}}
250
+ \newcommand{\limitn}{\lim_{n \rightarrow \infty}}
251
+ \newcommand{\limitN}{\lim_{N \rightarrow \infty}}
252
+ \newcommand{\limitd}{\lim_{d \rightarrow \infty}}
253
+ \newcommand{\limitt}{\lim_{t \rightarrow \infty}}
254
+ \newcommand{\limitsupn}{\limsup_{n \rightarrow \infty}~}
255
+ \newcommand{\limitinfn}{\liminf_{n \rightarrow \infty}~}
256
+ \newcommand{\limitk}{\lim_{k \rightarrow \infty}}
257
+ \newcommand{\limsupn}{\limsup_{n \rightarrow \infty}}
258
+ \newcommand{\limsupk}{\limsup_{k \rightarrow \infty}}
259
+ \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
260
+ \newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}
261
+
262
+ \newcommand{\beqn}{\vspace{-0.25cm}\begin{eqnarray*}}
263
+ \newcommand{\eeqn}{\end{eqnarray*}}
264
+ \newcommand{\bneqn}{\vspace{-0.25cm}\begin{eqnarray}}
265
+ \newcommand{\eneqn}{\end{eqnarray}}
266
+
267
+ \newcommand{\beans}{\color{blue} \beqn \text{Ans:}~~~}
268
+ \newcommand{\eeans}{\eeqn \color{black}}
269
+
270
+ \newcommand{\parens}[1]{\left(#1\right)}
271
+ \newcommand{\squared}[1]{\parens{#1}^2}
272
+ \newcommand{\tothepow}[2]{\parens{#1}^{#2}}
273
+ \newcommand{\prob}[1]{\mathbb{P}\parens{#1}}
274
+ \newcommand{\cprob}[2]{\prob{#1~|~#2}}
275
+ \newcommand{\littleo}[1]{o\parens{#1}}
276
+ \newcommand{\bigo}[1]{O\parens{#1}}
277
+ \newcommand{\Lp}[1]{\mathbb{L}^{#1}}
278
+ \renewcommand{\arcsin}[1]{\text{arcsin}\parens{#1}}
279
+ \newcommand{\prodonen}[2]{\prod_{#1=1}^n #2}
280
+ \newcommand{\mysum}[4]{\sum_{#1=#2}^{#3} #4}
281
+ \newcommand{\sumonen}[2]{\sum_{#1=1}^n #2}
282
+ \newcommand{\infsum}[2]{\sum_{#1=1}^\infty #2}
283
+ \newcommand{\infprod}[2]{\prod_{#1=1}^\infty #2}
284
+ \newcommand{\infunion}[2]{\bigcup_{#1=1}^\infty #2}
285
+ \newcommand{\infinter}[2]{\bigcap_{#1=1}^\infty #2}
286
+ \newcommand{\infintegral}[2]{\int^\infty_{-\infty} #2 ~\text{d}#1}
287
+ \newcommand{\supthetas}[1]{\sup_{\thetainthetas}\braces{#1}}
288
+ \newcommand{\bracks}[1]{\left[#1\right]}
289
+ \newcommand{\braces}[1]{\left\{#1\right\}}
290
+ \newcommand{\set}[1]{\left\{#1\right\}}
291
+ \newcommand{\abss}[1]{\left|#1\right|}
292
+ \newcommand{\norm}[1]{\left|\left|#1\right|\right|}
293
+ \newcommand{\normsq}[1]{\norm{#1}^2}
294
+ \newcommand{\inverse}[1]{\parens{#1}^{-1}}
295
+ \newcommand{\rowof}[2]{\parens{#1}_{#2\cdot}}
296
+
297
+ \newcommand{\realcomp}[1]{\text{Re}\bracks{#1}}
298
+ \newcommand{\imagcomp}[1]{\text{Im}\bracks{#1}}
299
+ \newcommand{\range}[1]{\text{range}\bracks{#1}}
300
+ \newcommand{\colsp}[1]{\text{colsp}\bracks{#1}}
301
+ \newcommand{\rowsp}[1]{\text{rowsp}\bracks{#1}}
302
+ \newcommand{\tr}[1]{\text{tr}\bracks{#1}}
303
+ \newcommand{\diag}[1]{\text{diag}\bracks{#1}}
304
+ \newcommand{\rank}[1]{\text{rank}\bracks{#1}}
305
+ \newcommand{\proj}[2]{\text{Proj}_{#1}\bracks{#2}}
306
+ \newcommand{\projcolspX}[1]{\text{Proj}_{\colsp{\X}}\bracks{#1}}
307
+ \newcommand{\median}[1]{\text{median}\bracks{#1}}
308
+ \newcommand{\mean}[1]{\text{mean}\bracks{#1}}
309
+ \newcommand{\dime}[1]{\text{dim}\bracks{#1}}
310
+ \renewcommand{\det}[1]{\text{det}\bracks{#1}}
311
+ \newcommand{\expe}[1]{\mathbb{E}\bracks{#1}}
312
+ \newcommand{\cexpe}[2]{\expe{#1 ~ | ~ #2}}
313
+ \newcommand{\expeabs}[1]{\expe{\abss{#1}}}
314
+ \newcommand{\expesub}[2]{\mathbb{E}_{#1}\bracks{#2}}
315
+ \newcommand{\indic}[1]{\mathds{1}_{#1}}
316
+ \newcommand{\var}[1]{\mathbb{V}\text{ar}\bracks{#1}}
317
+ \newcommand{\varhat}[1]{\hat{\mathbb{V}\text{ar}}\bracks{#1}}
318
+ \newcommand{\cov}[2]{\mathbb{C}\text{ov}\bracks{#1, #2}}
319
+ \newcommand{\corr}[2]{\text{Corr}\bracks{#1, #2}}
320
+ \newcommand{\se}[1]{\text{SE}\bracks{#1}}
321
+ \newcommand{\seest}[1]{\hat{\text{SE}}\bracks{#1}}
322
+ \newcommand{\bias}[1]{\text{Bias}\bracks{#1}}
323
+ \newcommand{\partialop}[2]{\dfrac{\partial}{\partial #1}\bracks{#2}}
324
+ \newcommand{\secpartialop}[2]{\dfrac{\partial^2}{\partial #1^2}\bracks{#2}}
325
+ \newcommand{\mixpartialop}[3]{\dfrac{\partial^2}{\partial #1 \partial #2}\bracks{#3}}
326
+
327
+ \renewcommand{\exp}[1]{\mathrm{exp}\parens{#1}}
328
+ \renewcommand{\cos}[1]{\text{cos}\parens{#1}}
329
+ \renewcommand{\sin}[1]{\text{sin}\parens{#1}}
330
+ \newcommand{\sign}[1]{\text{sign}\parens{#1}}
331
+ \newcommand{\are}[1]{\mathrm{ARE}\parens{#1}}
332
+ \newcommand{\natlog}[1]{\ln\parens{#1}}
333
+ \newcommand{\oneover}[1]{\frac{1}{#1}}
334
+ \newcommand{\overtwo}[1]{\frac{#1}{2}}
335
+ \newcommand{\overn}[1]{\frac{#1}{n}}
336
+ \newcommand{\oneoversqrt}[1]{\oneover{\sqrt{#1}}}
337
+ \newcommand{\sqd}[1]{\parens{#1}^2}
338
+ \newcommand{\loss}[1]{\ell\parens{\theta, #1}}
339
+ \newcommand{\losstwo}[2]{\ell\parens{#1, #2}}
340
+ \newcommand{\cf}{\phi(t)}
341
+
342
+ \newcommand{\ie}{\textit{i.e.} }
343
+ \newcommand{\AKA}{\textit{AKA} }
344
+ \renewcommand{\iff}{\textit{iff}}
345
+ \newcommand{\eg}{\textit{e.g.} }
346
+ \newcommand{\st}{\textit{s.t.} }
347
+ \newcommand{\wrt}{\textit{w.r.t.} }
348
+ \newcommand{\mathst}{~~\text{\st}~~}
349
+ \newcommand{\mathand}{~~\text{and}~~}
350
+ \newcommand{\mathor}{~~\text{or}~~}
351
+ \newcommand{\ala}{\textit{a la} }
352
+ \newcommand{\ppp}{posterior predictive p-value}
353
+ \newcommand{\dd}{dataset-to-dataset}
354
+
355
+ \newcommand{\logistic}[2]{\mathrm{Logistic}\parens{#1,\,#2}}
356
+ \newcommand{\bernoulli}[1]{\mathrm{Bern}\parens{#1}}
357
+ \newcommand{\betanot}[2]{\mathrm{Beta}\parens{#1,\,#2}}
358
+ \newcommand{\stdbetanot}{\betanot{\alpha}{\beta}}
359
+ \newcommand{\multnormnot}[3]{\mathcal{N}_{#1}\parens{#2,\,#3}}
360
+ \newcommand{\normnot}[2]{\mathcal{N}\parens{#1,\,#2}}
361
+ \newcommand{\classicnormnot}{\normnot{\mu}{\sigsq}}
362
+ \newcommand{\stdnormnot}{\normnot{0}{1}}
363
+ \newcommand{\uniform}[2]{\mathrm{U}\parens{#1,\,#2}}
364
+ \newcommand{\stduniform}{\uniform{0}{1}}
365
+ \newcommand{\exponential}[1]{\mathrm{Exp}\parens{#1}}
366
+ \newcommand{\stdexponential}{\mathrm{Exp}\parens{1}}
367
+ \newcommand{\gammadist}[2]{\mathrm{Gamma}\parens{#1, #2}}
368
+ \newcommand{\poisson}[1]{\mathrm{Poisson}\parens{#1}}
369
+ \newcommand{\geometric}[1]{\mathrm{Geometric}\parens{#1}}
370
+ \newcommand{\binomial}[2]{\mathrm{Binomial}\parens{#1,\,#2}}
371
+ \newcommand{\rayleigh}[1]{\mathrm{Rayleigh}\parens{#1}}
372
+ \newcommand{\multinomial}[2]{\mathrm{Multinomial}\parens{#1,\,#2}}
373
+ \newcommand{\gammanot}[2]{\mathrm{Gamma}\parens{#1,\,#2}}
374
+ \newcommand{\cauchynot}[2]{\text{Cauchy}\parens{#1,\,#2}}
375
+ \newcommand{\invchisqnot}[1]{\text{Inv}\chisq{#1}}
376
+ \newcommand{\invscaledchisqnot}[2]{\text{ScaledInv}\ncchisq{#1}{#2}}
377
+ \newcommand{\invgammanot}[2]{\text{InvGamma}\parens{#1,\,#2}}
378
+ \newcommand{\chisq}[1]{\chi^2_{#1}}
379
+ \newcommand{\ncchisq}[2]{\chi^2_{#1}\parens{#2}}
380
+ \newcommand{\ncF}[3]{F_{#1,#2}\parens{#3}}
381
+
382
+ \newcommand{\logisticpdf}[3]{\oneover{#3}\dfrac{\exp{-\dfrac{#1 - #2}{#3}}}{\parens{1+\exp{-\dfrac{#1 - #2}{#3}}}^2}}
383
+ \newcommand{\betapdf}[3]{\dfrac{\Gamma(#2 + #3)}{\Gamma(#2)\Gamma(#3)}#1^{#2-1} (1-#1)^{#3-1}}
384
+ \newcommand{\normpdf}[3]{\frac{1}{\sqrt{2\pi#3}}\exp{-\frac{1}{2#3}(#1 - #2)^2}}
385
+ \newcommand{\normpdfvarone}[2]{\dfrac{1}{\sqrt{2\pi}}e^{-\half(#1 - #2)^2}}
386
+ \newcommand{\chisqpdf}[2]{\dfrac{1}{2^{#2/2}\Gamma(#2/2)}\; {#1}^{#2/2-1} e^{-#1/2}}
387
+ \newcommand{\invchisqpdf}[2]{\dfrac{2^{-\overtwo{#1}}}{\Gamma(#2/2)}\,{#1}^{-\overtwo{#2}-1} e^{-\oneover{2 #1}}}
388
+ \newcommand{\exponentialpdf}[2]{#2\exp{-#2#1}}
389
+ \newcommand{\poissonpdf}[2]{\dfrac{e^{-#1} #1^{#2}}{#2!}}
390
+ \newcommand{\binomialpdf}[3]{\binom{#2}{#1}#3^{#1}(1-#3)^{#2-#1}}
391
+ \newcommand{\rayleighpdf}[2]{\dfrac{#1}{#2^2}\exp{-\dfrac{#1^2}{2 #2^2}}}
392
+ \newcommand{\gammapdf}[3]{\dfrac{#3^#2}{\Gamma\parens{#2}}#1^{#2-1}\exp{-#3 #1}}
393
+ \newcommand{\cauchypdf}[3]{\oneover{\pi} \dfrac{#3}{\parens{#1-#2}^2 + #3^2}}
394
+ \newcommand{\Gammaf}[1]{\Gamma\parens{#1}}
395
+
396
+ \newcommand{\notesref}[1]{\marginpar{\color{gray}\tt #1\color{black}}}
397
+
398
+
399
+
400
+ \newcommand{\zeroonecl}{\bracks{0,1}}
401
+ \newcommand{\forallepsgrzero}{\forall \epsilon > 0~~}
402
+ \newcommand{\lessthaneps}{< \epsilon}
403
+ \newcommand{\fraccomp}[1]{\text{frac}\bracks{#1}}
404
+
405
+ \newcommand{\yrep}{y^{\text{rep}}}
406
+ \newcommand{\yrepisq}{(\yrep_i)^2}
407
+ \newcommand{\yrepvec}{\bv{y}^{\text{rep}}}
408
+
409
+
410
+ \newcommand{\SigField}{\mathcal{F}}
411
+ \newcommand{\ProbMap}{\mathcal{P}}
412
+ \newcommand{\probtrinity}{\parens{\Omega, \SigField, \ProbMap}}
413
+ \newcommand{\convp}{~{\buildrel p \over \rightarrow}~}
414
+ \newcommand{\convLp}[1]{~{\buildrel \Lp{#1} \over \rightarrow}~}
415
+ \newcommand{\nconvp}{~{\buildrel p \over \nrightarrow}~}
416
+ \newcommand{\convae}{~{\buildrel a.e. \over \longrightarrow}~}
417
+ \newcommand{\convau}{~{\buildrel a.u. \over \longrightarrow}~}
418
+ \newcommand{\nconvau}{~{\buildrel a.u. \over \nrightarrow}~}
419
+ \newcommand{\nconvae}{~{\buildrel a.e. \over \nrightarrow}~}
420
+ \newcommand{\convd}{~{\buildrel \mathcal{D} \over \rightarrow}~}
421
+ \newcommand{\nconvd}{~{\buildrel \mathcal{D} \over \nrightarrow}~}
422
+ \newcommand{\setequals}{~{\buildrel \text{set} \over =}~}
423
+ \newcommand{\withprob}{~~\text{w.p.}~~}
424
+ \newcommand{\io}{~~\text{i.o.}}
425
+
426
+ \newcommand{\Acl}{\bar{A}}
427
+ \newcommand{\ENcl}{\bar{E}_N}
428
+ \newcommand{\diam}[1]{\text{diam}\parens{#1}}
429
+
430
+ \newcommand{\taua}{\tau_a}
431
+
432
+ \newcommand{\myint}[4]{\int_{#2}^{#3} #4 \,\text{d}#1}
433
+ \newcommand{\laplacet}[1]{\mathscr{L}\bracks{#1}}
434
+ \newcommand{\laplaceinvt}[1]{\mathscr{L}^{-1}\bracks{#1}}
435
+ \renewcommand{\min}[1]{\text{min}\braces{#1}}
436
+
437
+ \newcommand{\Vbar}[1]{\bar{V}\parens{#1}}
438
+ \newcommand{\expnegrtau}{\exp{-r\tau}}
439
+ \newcommand{\pval}{p_{\text{val}}}
440
+ \newcommand{\alphaovertwo}{\overtwo{\alpha}}
441
+
442
+ \newcommand{\problem}{\vspace{0.4cm} \noindent {\large{\textsf{Problem \arabic{probnum}~}}} \addtocounter{probnum}{1}}
443
+
444
+
445
+ \newcommand{\easysubproblem}{\ingreen{\item}}
446
+ \newcommand{\intermediatesubproblem}{\inyellow{\item}}
447
+ \newcommand{\hardsubproblem}{\inred{\item}}
448
+ \newcommand{\extracreditsubproblem}{\inpurple{\item}}
449
+ \renewcommand{\labelenumi}{(\alph{enumi})}
450
+
451
+ \newcommand{\nonep}{n_{1+}}
452
+ \newcommand{\npone}{n_{+1}}
453
+ \newcommand{\npp}{n_{++}}
454
+ \newcommand{\noneone}{n_{11}}
455
+ \newcommand{\nonetwo}{n_{12}}
456
+ \newcommand{\ntwoone}{n_{21}}
457
+ \newcommand{\ntwotwo}{n_{22}}
458
+
459
+ \newcommand{\sigmahat}{\hat{\sigma}}
460
+ \newcommand{\pihat}{\hat{\pi}}
461
+
462
+
463
+ \newcommand{\probD}{\prob{D}}
464
+ \newcommand{\probDC}{\prob{D^C}}
465
+ \newcommand{\probE}{\prob{E}}
466
+ \newcommand{\probEC}{\prob{E^C}}
467
+ \newcommand{\probDE}{\prob{D,E}}
468
+ \newcommand{\probDEC}{\prob{D,E^C}}
469
+ \newcommand{\probDCE}{\prob{D^C,E}}
470
+ \newcommand{\probDCEC}{\prob{D^C,E^C}}
471
+
472
+ \newcommand{\logit}[1]{\text{logit}\parens{#1}}
473
+
474
+ \newcommand{\errorrv}{\mathcal{E}}
475
+ \newcommand{\berrorrv}{\bv{\errorrv}}
476
+ \newcommand{\DIM}{\mathcal{I}}
477
+ \newcommand{\trans}[1]{#1^\top}
478
+ \newcommand{\transp}[1]{\parens{#1}^\top}
479
+
480
+ \newcommand{\Xjmiss}{X_{j,\text{miss}}}
481
+ \newcommand{\Xjobs}{X_{j,\text{obs}}}
482
+ \newcommand{\Xminjmiss}{X_{-j,\text{miss}}}
483
+ \newcommand{\Xminjobs}{X_{-j,\text{obs}}}
484
+
485
+ \newcommand{\gammavec}{\bv{\gamma}}
486
+
487
+ \newcommand{\Xtrain}{\X_{\text{train}}}
488
+ \newcommand{\ytrain}{\y_{\text{train}}}
489
+ \newcommand{\Xtest}{\X_{\text{test}}}
490
+
491
+
492
+ \renewcommand{\r}{\bv{r}}
493
+ \newcommand{\rstar}{\bv{r}^{\star}}
494
+ \newcommand{\yhatstar}{\yhat^{\star}}
495
+ \newcommand{\gstar}{g^{\star}}
496
+ \newcommand{\hstar}{h^{\star}}\clearpage{}
497
+
498
+ \newtoggle{usejpgs}
499
+ \toggletrue{usejpgs}
500
+
501
+ \title{\bf Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation}
502
+ \author{Alex Goldstein\thanks{Electronic address: \texttt{alexg@wharton.upenn.edu}; Principal Corresponding author}~}
503
+ \author{Adam Kapelner\thanks{Electronic address: \texttt{kapelner@wharton.upenn.edu}; Corresponding author}}
504
+ \author{Justin Bleich\thanks{Electronic address: \texttt{jbleich@wharton.upenn.edu}; Corresponding author}}
505
+ \author{Emil Pitkin\thanks{Electronic address: \texttt{pitkin@wharton.upenn.edu}; Corresponding author}}
506
+ \affil{The Wharton School of the University of Pennsylvania}
507
+
508
+ \begin{document}
509
+ \maketitle
510
+
511
+
512
+ \begin{abstract}
513
+ This article presents Individual Conditional Expectation (ICE) plots, a tool for visualizing the model estimated by any supervised learning algorithm. Classical partial dependence plots (PDPs) help visualize the average partial relationship between the predicted response and one or more features. In the presence of substantial interaction effects, the partial response relationship can be heterogeneous. Thus, an average curve, such as the PDP, can obfuscate the complexity of the modeled relationship. Accordingly, ICE plots refine the partial dependence plot by graphing the functional relationship between the predicted response and the feature for \textit{individual} observations. Specifically, ICE plots highlight the variation in the fitted values across the range of a covariate, suggesting where and to what extent heterogeneities might exist. In addition to providing a plotting suite for exploratory analysis, we include a visual test for additive structure in the data generating model. Through simulated examples and real data sets, we demonstrate how ICE plots can shed light on estimated models in ways PDPs cannot. Procedures outlined are available in the \texttt{R} package \texttt{ICEbox}.
514
+ \end{abstract}
515
+
516
+ \section{Introduction}\label{sec:introduction}
517
+
518
+ The goal of this article is to present Individual Conditional Expectation (ICE) plots, a toolbox for visualizing models produced by \qu{black box} algorithms. These algorithms use training data $\{\x_i,\,y_i\}_{i=1}^{N}$ (where $\x_i = (x_{i,1}, \ldots, x_{i,p})$ is a vector of predictors and $y_i$ is the response) to construct a model $\hat{f}$ that maps the features $\x$ to fitted values $\hat{f}(\x)$. Though these algorithms can produce fitted values that enjoy low generalization error, it is often difficult to understand how the resultant $\hat{f}$ uses $\x$ to generate predictions. The ICE toolbox helps visualize this mapping.
519
+
520
+ ICE plots extend \citet{Friedmana}'s Partial Dependence Plot (PDP), which highlights the average partial relationship between a set of predictors and the predicted response. ICE plots disaggregate this average by displaying the estimated functional relationship for each observation. Plotting a curve for each observation helps identify interactions in $\hat{f}$ as well as extrapolations in predictor space.
521
+
522
+ The paper proceeds as follows. Section~\ref{sec:background} gives background on visualization in machine learning and introduces PDPs more formally. Section~\ref{sec:algorithm} describes the procedure for generating ICE plots together with the associated centered and derivative variants. In Section~\ref{sec:simulations}, simulated data examples illustrate that ICE plots can be used to identify features of $\hat{f}$ that are not visible in PDPs, or where the PDPs may even be misleading. Each example is chosen to illustrate a particular principle. Section~\ref{sec:real_data} provides examples of ICE plots on real data. In Section \ref{sec:testing} we shift the focus from the fitted $\hat{f}$ to a data generating process $f$ and use ICE plots as part of a visual test for additivity in $f$. Section~\ref{sec:discussion} concludes.
523
+
524
+ \section{Background}\label{sec:background}
525
+
526
+ \subsection{Survey of Black Box Visualization}
527
+
528
+ There is an extensive literature that attests to the superiority of black box machine learning algorithms in minimizing predictive error, both from a theoretical and an applied perspective. \citet{Breiman2001}, summarizing, states \qu{accuracy generally requires more complex prediction methods ...[and] simple and interpretable functions do not make the most accurate predictors.} Problematically, black box models offer little in the way of interpretability, unless the data is of very low dimension. When we are willing to compromise interpretability for improved predictive accuracy, any window into the black box's internals can be beneficial.
529
+
530
+
531
+
532
+ Authors have devised a variety of algorithm-specific techniques targeted at improving the interpretability of a particular statistical learning procedure's output. \citet{Rao1997} offers a technique for visualizing the decision boundary produced by bagging decision trees. Although applicable to high dimensional settings, the work primarily focuses on the low dimensional case of two covariates. \citet{Tzeng2005} develops visualizations of the layers of neural networks that expose dependencies between the inputs and model outputs and yield insight into classification uncertainty. \citet{Jakulin2005} improves the interpretability of support vector machines by using devices called \qu{nomograms}, which provide a graphical representation of the contribution of each variable to the model fit. Pre-specified interaction effects of interest can be displayed in the nomograms as well. \cite{Breiman2001a} uses randomization of out-of-bag observations to compute a variable importance metric for Random Forests (\texttt{RF}): those variables for which predictive performance degrades the most vis-a-vis the original model are considered the strongest contributors to forecasting accuracy. This method is also applicable to stochastic gradient boosting \citep{Friedman2002}. \citet{Plate2000} plots neural network predictions in a scatterplot for each variable by sampling points from covariate space. Amongst the existing literature, this work is the most similar to ICE, but it was applied only to neural networks and does not have a readily available implementation.
533
+
534
+ Other visualization proposals are model agnostic and can be applied to a host of supervised learning procedures. For instance, \citet{Strumbelj2011} consider a game-theoretic approach to assess the contributions of different features to predictions that relies on an efficient approximation of the Shapley value. \cite{Jiang2002} use quasi-regression estimation of black box functions. Here, the function is expanded into an orthonormal basis of coefficients which are approximated via Monte Carlo simulation. These estimated coefficients can then be used to determine which covariates influence the function and whether any interactions exist.
535
+
536
+ \subsection{Friedman's PDP}
537
+
538
+ Another particularly useful model agnostic tool is \citet{Friedmana}'s PDP, which this paper extends. The PDP plots the change in the average predicted value as specified feature(s) vary over their marginal distribution. Many supervised learning models applied across a number of disciplines have been better understood thanks to PDPs. \citet{Green2010a} use PDPs to understand the relationship between predictors and the conditional average treatment effect for a voter mobilization experiment, with the predictions being made by Bayesian Additive Regression Trees \citep[\texttt{BART},][]{Chipman2010a}. \citet{Berk2013} demonstrate the advantage of using \texttt{RF} and the associated PDPs to accurately model predictor-response relationships under asymmetric classification costs that often arise in criminal justice settings. In the ecological literature, \citet{Elith2008a}, who rely on stochastic gradient boosting, use PDPs to understand how different environmental factors influence the distribution of a particular freshwater eel.
539
+
540
+ To formally define the PDP, let $S\subset \{1,...,p\}$ and let $C$ be the complement set of $S$. Here $S$ and $C$ index subsets of predictors; for example, if $S=\{1,2,3\}$, then $\x_S$ refers to a $3 \times 1$ vector containing the values of the first three coordinates of $\x$. Then the partial dependence function of $f$ on $\x_S$ is given by
541
+
542
+ \bneqn\label{eq:true_pdp}
543
+ f_S = \expesub{\x_C}{f(\x_S,\x_C)} = \int f(\x_S,\x_C) \mathrm{dP}\parens{\x_C}.
544
+ \eneqn
545
+
546
+ Each subset of predictors $S$ has its own partial dependence function $f_S$, which gives the average value of $f$ when $\x_S$ is fixed and $\x_C$ varies over its marginal distribution $\mathrm{dP}\parens{\x_C}$. As neither the true $f$ nor $\mathrm{dP}\parens{\x_C}$ are known, we estimate Equation \ref{eq:true_pdp} by computing
547
+
548
+ \bneqn\label{est_pdp}
549
+ \hat{f}_S = \frac{1}{N}\sum\limits_{i=1}^N \hat{f}(\x_S,\x_{Ci})
550
+ \eneqn
551
+
552
+ \noindent where $\{\x_{C1},...,\x_{CN}\}$ represent the different values of $\x_C$ that are observed in the training data. Note that the approximation here is twofold: we estimate the true model with $\hat{f}$, the output of a statistical learning algorithm, and we estimate the integral over $\x_C$ by averaging over the $N$ $\x_C$ values observed in the training set.
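+
+ To make the estimator concrete, the following minimal \texttt{R} sketch (ours, not part of any package) computes the estimated partial dependence for a single predictor. It assumes only a fitted model object \texttt{fit} with a working \texttt{predict} method and a training data frame \texttt{X}; the names \texttt{pdp\_estimate}, \texttt{fit} and \texttt{X} are illustrative.
+
+ \begin{lstlisting}
+ # Estimated partial dependence of the fitted model on one predictor:
+ # for each grid value, fix the predictor at that value for every row
+ # and average the resulting predictions over all observations.
+ pdp_estimate <- function(fit, X, predictor,
+                          grid = sort(unique(X[[predictor]]))) {
+   sapply(grid, function(value) {
+     X_fixed <- X
+     X_fixed[[predictor]] <- value
+     mean(predict(fit, newdata = X_fixed))
+   })
+ }
+ \end{lstlisting}
+
+ Plotting the grid values against the returned vector, joined by lines, yields the PDP described above.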
553
+
554
+ This is a visualization tool in the following sense: if $\hat{f}_S$ is evaluated at the $\x_S$ observed in the data, a set of $N$ ordered pairs will result: $\{(\x_{S\ell}, \hat{f}_{S\ell})\}_{\ell=1}^{N}$, where $\hat{f}_{S\ell}$ refers to the estimated partial dependence function evaluated at the $\ell$th coordinate of $\x_S$, denoted $\x_{S\ell}$. Then for one or two dimensional $\x_S$, \citet{Friedmana} proposes plotting the $N$ $\x_{S\ell}$'s versus their associated $\hat{f}_{S\ell}$'s, conventionally joined by lines. The resulting graphic, which is called a partial dependence plot, displays the average value of $\hat{f}$ as a function of $\x_S$. For the remainder of the paper we consider a single predictor of interest at a time ($|S|=1$) and write $x_S$ without boldface accordingly.
555
+
556
+ As an extended example, consider the following data generating process with a simple interaction:
557
+
558
+ \bneqn\label{eq:criss_cross_model0}
559
+ && Y = 0.2 X_{1} - 5X_{2} + 10X_{2} \indic{X_{3} \geq 0} + \errorrv, \\ \nonumber
560
+ && \errorrv \iid \normnot{0}{1}, \quad X_{1}, X_{2}, X_{3} \iid \uniform{-1}{1}.
561
+ \eneqn
562
+
563
+ We generate 1,000 observations from this model and fit a stochastic gradient boosting model (\texttt{SGB}) via the \texttt{R} package \texttt{gbm} \citep{Ridgeway2013}, where the number of trees is chosen via cross-validation and the interaction depth is set to 3. We now consider the association between predicted $Y$ values and $X_2$ ($S = X_2$). In Figure \ref{fig:criss_cross_x2vy} we plot $X_2$ versus $Y$ in our sample. Figure \ref{fig:criss_cross_x2_pdp} displays the fitted model's partial dependence plot for predictor $X_2$. The PDP suggests that, on average, $X_2$ is not meaningfully associated with the predicted $Y$. In light of Figure \ref{fig:criss_cross_x2vy}, this conclusion is plainly wrong. Clearly $X_2$ is associated with $Y$; it is simply that the averaging inherent in the PDP shields this discovery from view.
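+
+ The simulation and fit can be sketched in \texttt{R} as follows. The \texttt{gbm} call uses the package's standard interface; the shrinkage value and the candidate number of trees shown are illustrative choices of ours rather than settings reported here.
+
+ \begin{lstlisting}
+ library(gbm)
+
+ set.seed(1)
+ n  <- 1000
+ x1 <- runif(n, -1, 1)
+ x2 <- runif(n, -1, 1)
+ x3 <- runif(n, -1, 1)
+ y  <- 0.2 * x1 - 5 * x2 + 10 * x2 * (x3 >= 0) + rnorm(n)
+ sim_df <- data.frame(y, x1, x2, x3)
+
+ # Stochastic gradient boosting with interaction depth 3; the number
+ # of trees is selected by cross-validation, as in the text.
+ sgb <- gbm(y ~ ., data = sim_df, distribution = "gaussian",
+            n.trees = 3000, interaction.depth = 3,
+            shrinkage = 0.01, cv.folds = 5)
+ best_iter <- gbm.perf(sgb, method = "cv")
+ \end{lstlisting}
+
+ Predictions at the selected number of trees are then obtained with \texttt{predict(sgb, newdata, n.trees = best\_iter)}.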
564
+
565
+ \begin{figure}[htp]
566
+ \centering
567
+ \begin{subfigure}[b]{0.48\textwidth}
568
+ \centering
569
+ \includegraphics[width=2.35in]{criss_cross_x2_vy.\iftoggle{usejpgs}{jpg}{pdf}}
570
+ \caption{Scatterplot of $Y$ versus $X_2$}
571
+ \label{fig:criss_cross_x2vy}
572
+ \end{subfigure}~~
573
+ \begin{subfigure}[b]{0.48\textwidth}
574
+ \centering
575
+ \includegraphics[width=2.35in]{criss_cross_x2_gbm_pdp.pdf}
576
+ \caption{PDP}
577
+ \label{fig:criss_cross_x2_pdp}
578
+ \end{subfigure}
579
+ \caption{Scatterplot and PDP of $X_2$ versus $Y$ for a sample of size 1000 from the process described in Equation \ref{eq:criss_cross_model0}. In this example $\hat{f}$ is fit using \texttt{SGB}. The PDP incorrectly suggests that there is no meaningful relationship between $X_2$ and the predicted $Y$.}
580
+ \label{fig:no_inter_example}
581
+ \end{figure}
582
+
583
+ In fact, the original work introducing PDPs argues that the PDP can be a useful summary for the chosen subset of variables if their dependence on the remaining features is not too strong. When the dependence is strong, however -- that is, when interactions are present -- the PDP can be misleading. Nor is the PDP particularly effective at revealing extrapolations in $\mathcal{X}$-space. ICE plots are intended to address these issues.
584
+
585
+ \section{The ICE Toolbox}\label{sec:algorithm}
586
+
587
+ \subsection{The ICE Procedure}\label{subsec:vanilla_plot}
588
+
589
+ Visually, ICE plots disaggregate the output of classical PDPs. Rather than plot the target covariates' \textit{average} partial effect on the predicted response, we instead plot the $N$ estimated conditional expectation curves: each reflects the predicted response as a function of covariate $x_S$, conditional on an observed $\x_C$.
590
+
591
+ Consider the observations $\braces{\parens{x_{Si},\,\x_{Ci}}}_{i=1}^N$ and the estimated response function $\hat{f}$. For each of the $N$ observed and fixed values of $\x_C$, a curve $\hat{f}^{(i)}_S$ is plotted against the observed values of $x_S$. Therefore, at each $x$-coordinate, $x_S$ is fixed while $\x_C$ varies across the $N$ observations. Each curve defines the conditional relationship between $x_S$ and $\hat{f}$ at a fixed value of $\x_C$. Thus, the ICE algorithm gives the user insight into the variety of conditional relationships estimated by the black box.
592
+
593
+ The ICE algorithm is given in Algorithm \ref{algo:ice} in Appendix \ref{app:algorithms}. Note that the PDP curve is the average of the $N$ ICE curves and can thus be viewed as a form of post-processing. Although in this paper we focus on the case where $|S|=1$, the pseudocode is general. All plots in this paper are produced using the \texttt{R} package \texttt{ICEbox}, available on CRAN.
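+
+ For readers who prefer code to pseudocode, a minimal base \texttt{R} sketch of the same idea is given below. It is not the \texttt{ICEbox} implementation; it assumes a fitted model \texttt{fit} with a generic \texttt{predict} method and a training data frame \texttt{X}, and the function name \texttt{ice\_curves} is ours.
+
+ \begin{lstlisting}
+ # ICE curves for a single predictor of interest: one curve per
+ # observation, obtained by holding that observation's remaining
+ # covariates fixed while the predictor of interest sweeps a grid
+ # (here, its sorted observed values).
+ ice_curves <- function(fit, X, predictor,
+                        grid = sort(unique(X[[predictor]]))) {
+   curves <- matrix(NA_real_, nrow = nrow(X), ncol = length(grid))
+   for (k in seq_along(grid)) {
+     X_fixed <- X
+     X_fixed[[predictor]] <- grid[k]
+     curves[, k] <- predict(fit, newdata = X_fixed)
+   }
+   list(grid = grid, curves = curves)
+ }
+ \end{lstlisting}
+
+ Row $i$ of the returned matrix is the curve $\hat{f}^{(i)}_S$ evaluated on the grid; plotting the transposed matrix against the grid with \texttt{matplot} gives the ICE plot, and taking column means recovers the PDP, consistent with the post-processing view above.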
594
+
595
+ Returning to the simulated data described by Equation \ref{eq:criss_cross_model0}, Figure \ref{fig:criss_cross_x2_ice} shows the ICE plot for the \texttt{SGB} when $S=X_2$. In contrast to the PDP in Figure \ref{fig:criss_cross_x2_pdp}, the ICE plot makes it clear that the fitted values \emph{are} related to $X_2$. Specifically, the \texttt{SGB}'s predicted values are approximately linearly increasing or decreasing in $X_2$ depending upon which region of $\mathcal{X}$ an observation is in.
596
+
597
+ \begin{figure}[htp]
598
+ \centering
599
+ \includegraphics[width=2.5in]{criss_cross_x2_ice.\iftoggle{usejpgs}{jpg}{pdf}}
600
+ \caption{\texttt{SGB} ICE plot for $X_2$ from 1000 realizations of the data generating process described by Equation \ref{eq:criss_cross_model0}. We see that the \texttt{SGB}'s fitted values are either approximately linearly increasing or decreasing in $X_2$.}
601
+ \label{fig:criss_cross_x2_ice}
602
+ \end{figure}
603
+
604
+ Now consider the well known Boston Housing Data (BHD). The goal in this dataset is to predict a census tract's median home price using features of the census tract itself. It is important to note that the median home prices for the tracts are truncated at 50, and hence one may observe potential ceiling effects when analyzing the data. We use Random Forests (\texttt{RF}) implemented in \texttt{R} \citep{Liaw2002} to fit $\hat{f}$. The ICE plot in Figure \ref{fig:bh_ice_example} examines the association between the average age of homes in a census tract and the corresponding median home value for that tract ($S=\texttt{age}$). The PDP is largely flat, perhaps displaying a slight decrease in predicted median home price as \texttt{age} increases. The ICE plot shows those observations for which increasing \texttt{age} is actually associated with higher predicted values, thereby describing how individual behavior departs from the average behavior.
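+
+ For orientation, the kind of fit underlying these figures can be sketched as follows, assuming the Boston housing data as shipped in the \texttt{MASS} package (where \texttt{medv} is the tract-level median home value and \texttt{age} and \texttt{rm} are among the predictors); that copy of the data may differ slightly in preprocessing from the one used here.
+
+ \begin{lstlisting}
+ library(randomForest)  # the RF implementation cited in the text
+ library(MASS)          # assumed source of the Boston housing data
+
+ set.seed(1)
+ rf_fit <- randomForest(medv ~ ., data = Boston)  # default settings
+ \end{lstlisting}
+
+ ICE curves for \texttt{age} can then be built from \texttt{rf\_fit} with the sketch given earlier in this section, holding the remaining columns fixed at their observed values.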
605
+
606
+ \begin{figure}[htp]
607
+ \centering
608
+ \includegraphics[width=2.5in]{bhd_age_ice_rf.\iftoggle{usejpgs}{jpg}{pdf}}
609
+ \caption{\texttt{RF} ICE plot for BHD for predictor \texttt{age}. The highlighted thick line is the PDP. For each curve, the location of its observed \texttt{age} is marked by a point. For some observations, higher \texttt{age} is associated with higher predicted values. The upper set of tick marks on the horizontal axis indicates the observed deciles of \texttt{age}.}
610
+ \label{fig:bh_ice_example}
611
+ \end{figure}
612
+
613
+ \subsection{The Centered ICE Plot}\label{subsec:centered_plot}
614
+
615
+ When the curves have a wide range of intercepts and are consequently \qu{stacked} on each other, heterogeneity in the model can be difficult to discern. In Figure \ref{fig:bh_ice_example}, for example, both the variation in effects across curves and their cumulative effects are veiled. In such cases the \qu{centered ICE} plot (the \qu{c-ICE}), which removes level effects, is useful.
616
+
617
+ c-ICE works as follows. Choose a location $x^*$ in the range of $x_S$ and join or \qu{pinch} all prediction lines at that point. We have found that choosing $x^*$ as the minimum or the maximum observed value results in the most interpretable plots. For each curve $\hat{f}^{(i)}$ in the ICE plot, the corresponding c-ICE curve is given by
618
+
619
+ \bneqn \label{eq:c-ice}
620
+ \hat{f}_{\mathrm{cent}}^{(i)} = \hat{f}^{(i)} - \onevec \hat{f}(x^*,\x_{Ci}),
621
+ \eneqn
622
+
623
+ \noindent where the unadorned $\hat{f}$ denotes the fitted model and $\onevec$ is a vector of 1's of the appropriate dimension. Hence the point $(x^*,\hat{f}(x^*,\x_{Ci}))$ acts as a ``base case'' for each curve. If $x^*$ is the minimum value of $x_S$, for example, this ensures that all curves originate at 0, thus removing the differences in level due to the different $\x_{Ci}$'s. At the maximum $x_S$ value, each centered curve's level reflects the cumulative effect of $x_S$ on $\hat{f}$ relative to the base case. The result is a plot that better isolates the combined effect of $x_S$ on $\hat{f}$, holding $\x_C$ fixed.
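+
+ Given a matrix whose $i$th row holds $\hat{f}^{(i)}$ evaluated on a common grid of $x_S$ values, as produced for instance by the sketch in Section \ref{subsec:vanilla_plot}, centering is a one-line operation. The sketch below, with illustrative names, pinches the curves at the grid point closest to a chosen $x^*$, by default the minimum.
+
+ \begin{lstlisting}
+ # Centered ICE: subtract each curve's value at the pinch point so
+ # that every curve is zero there and level differences are removed.
+ center_ice <- function(curves, grid, x_star = min(grid)) {
+   anchor <- which.min(abs(grid - x_star))
+   sweep(curves, 1, curves[, anchor], "-")
+ }
+ \end{lstlisting}
+
+ Dividing the centered values by the observed range of $y$ gives the fractional scale displayed on the right vertical axis of Figure \ref{fig:bh_c-ice_example}.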
624
+
625
+ Figure \ref{fig:bh_c-ice_example} shows a c-ICE plot for the predictor \texttt{age} of the BHD for the same \texttt{RF} model as examined previously. From the c-ICE plot we can now see clearly that the cumulative effect of \texttt{age} on predicted median value increases for some cases, and decreases for others. Such divergences of the centered curves suggest the existence of interactions between $x_S$ and $\x_C$ in the model. Also, the magnitude of the effect, as a fraction of the range of $y$, can be seen in the vertical axis displayed on the right of the graph.
626
+
627
+ \begin{figure}[htp]
628
+ \centering
629
+ \includegraphics[width=2.5in]{bhd_age_c-ice_rf.\iftoggle{usejpgs}{jpg}{pdf}}
630
+ \caption{c-ICE plot for \texttt{age} with $x^*$ set to the minimum value of \texttt{age}. The right vertical axis displays changes in $\hat{f}$ over the baseline as a fraction of $y$'s observed range. In this example, interactions between \texttt{age} and other predictors create cumulative differences in fitted values of up to about 14\% of the range of $y$.}
631
+ \label{fig:bh_c-ice_example}
632
+ \end{figure}
633
+
634
+ \subsection{The Derivative ICE Plot}\label{subsec:derivative_plot}
635
+
636
+ To further explore the presence of interaction effects, we develop plots of the partial derivative of $\hat{f}$ with respect to $x_S$. To illustrate, consider the scenario in which $x_S$ does not interact with the other predictors in the fitted model. This implies $\hat{f}$ can be written as
637
+
638
+ \bneqn\label{eq:additive_model}
639
+ \hat{f}(\x) = \hat{f}(x_S, \x_{C}) = g(x_S) + h(\x_{C}), \quad \text{so that} \quad \frac{\partial \hat{f}(\x)}{\partial{x_S}} = g'(x_S),
640
+ \eneqn
641
+
642
+ \noindent meaning the relationship between $x_S$ and $\hat{f}$ does not depend on $\x_{C}$. Thus the ICE plot for $x_S$ would display a set of $N$ curves that share a single common shape but differ by level shifts according to the values of $\x_{C}$.
643
+
644
+ As it can be difficult to visually assess derivatives from ICE plots, it is useful to plot an estimate of the partial derivative directly. The details of this procedure are given in Algorithm \ref{algo:nppb} in Appendix \ref{app:algorithms}. We call this a \qu{derivative ICE} plot, or \qu{d-ICE.} When no interactions are present in the fitted model, all curves in the d-ICE plot are equivalent, and the plot shows a single line. When interactions do exist, the derivative lines will be heterogeneous.
645
+
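A rough \texttt{numpy} sketch of the idea follows; it is a simplification of Algorithm \ref{algo:nppb}, which first smooths each curve with the supersmoother, whereas here we simply difference numerically on a strictly increasing grid.

\begin{verbatim}
import numpy as np

def d_ice(ice_curves, grid):
    """Numerical derivative of each ICE curve plus the pointwise
    standard deviation used to flag regions of interaction.

    Assumes `grid` is strictly increasing (e.g. an evenly spaced
    grid); smoothing each curve beforehand gives less noisy
    derivative estimates."""
    deriv = np.gradient(ice_curves, grid, axis=1)  # d f-hat / d x_S
    sd = deriv.std(axis=0)        # heterogeneity at each x_S value
    return deriv, sd
\end{verbatim}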
646
+ As an example, consider the d-ICE plot for the \texttt{RF} model in Figure \ref{fig:bh_d-ice_example}. The plot suggests that when \texttt{age} is below approximately 60, $g' \approx 0$ for all observed values of $\x_C$. In contrast, when \texttt{age} is above 60 there are observations for which $g' > 0$ and others for which $g' < 0$, suggesting an interaction between \texttt{age} and the other predictors. Also, the standard deviation of the partial derivatives at each point, plotted in the lower panel, serves as a useful summary to highlight regions of heterogeneity in the estimated derivatives (i.e., potential evidence of interactions in the fitted model).
647
+
648
+ \begin{figure}[htp]
649
+ \centering
650
+ \includegraphics[width=2.7in]{bhd_age_d-ice_rf.\iftoggle{usejpgs}{jpg}{pdf}}
651
+ \caption{d-ICE plot for \texttt{age} in the BHD. The left vertical axis' scale gives the partial derivative of the fitted model. Below the d-ICE plot we plot the standard deviation of the derivative estimates at each value of \texttt{age}. The scale for this standard deviation plot is on the bottom of the right vertical axis.}
652
+ \label{fig:bh_d-ice_example}
653
+ \end{figure}
654
+
655
+
656
+ \subsection{Visualizing a Second Feature}\label{subsec:second_feature}
657
+
658
+ Color allows overloading of ICE, c-ICE and d-ICE plots with information regarding a second predictor of interest $x_k$. Specifically, one can assess how the second predictor influences the relationship between $x_S$ and $\hat{f}$. If $x_k$ is categorical, we assign colors to its levels and plot each prediction line $\hat{f}^{(i)}$ in the color of $x_{ik}$'s level. If $x_k$ is continuous, we vary the color shade from light (low $x_k$) to dark (high $x_k$).
659
+
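The color overloading is straightforward to implement. The \texttt{matplotlib} sketch below is an illustration only, with grayscale shading standing in for a color palette, and assumes a continuous second predictor $x_k$.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def plot_ice_colored(grid, ice_curves, x_k):
    """Shade each ICE curve by a second predictor x_k: light gray for
    low x_k, black for high x_k. For a categorical x_k, map its
    levels to a discrete palette instead."""
    shade = (x_k - x_k.min()) / (x_k.max() - x_k.min() + 1e-12)
    for curve, s in zip(ice_curves, shade):
        plt.plot(grid, curve, color=str(0.8 * (1 - s)), lw=0.5)
    plt.xlabel("$x_S$")
    plt.ylabel("fitted value")
\end{verbatim}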
660
+ We replot the c-ICE from Figure \ref{fig:bh_c-ice_example} with lines colored by a newly constructed predictor, $x=1(\texttt{rm} > \mathrm{median}(\texttt{rm}))$. Lines are colored red if the average number of rooms in a census tract is greater than the median number of rooms across all census tracts and are colored blue otherwise. Figure \ref{fig:bh_c-ice_colored_example} suggests that for census tracts with a larger average number of rooms, the predicted median home value is positively associated with \texttt{age}, and for census tracts with a smaller average number of rooms, the association is negative.
661
+
662
+ \begin{figure}[htp]
663
+ \centering
664
+ \includegraphics[width=2.5in]{bhd_age_c-ice_colored_by_rm_rf.\iftoggle{usejpgs}{jpg}{pdf}}
665
+ \caption{The c-ICE plot for \texttt{age} of Figure \ref{fig:bh_c-ice_example} in the BHD. Red lines correspond to observations with \texttt{rm} greater than the median \texttt{rm}; blue lines correspond to observations at or below the median.}
666
+ \label{fig:bh_c-ice_colored_example}
667
+ \end{figure}
668
+
669
+
670
+ \section{Simulations}\label{sec:simulations}
671
+
672
+ Each of the following examples is designed to emphasize a particular model characteristic that the ICE toolbox can detect. The examples are purposely stylized so that each scenario is demonstrated with minimal interference from issues one typically encounters in actual data, such as noise and model misspecification.
673
+
674
+ \subsection{Additivity Assessment}\label{subsec:sim_additivity}
675
+
676
+ We begin by showing that ICE plots can be used as a diagnostic in evaluating the extent to which a fitted model $\hat{f}$ fits an additive model.
677
+
678
+ Consider again the prediction task in which $\hat{f}(\x) = g(x_S) + h(\x_{C})$. For arbitrary vectors $\x_{Ci}$ and $\x_{Cj}$, $\hat{f}(x_S, \x_{Ci}) - \hat{f}(x_S, \x_{Cj}) = h(\x_{Ci}) - h(\x_{Cj})$ for all values of $x_S$. The term $h(\x_{Ci}) - h(\x_{Cj})$ represents the shift in level due to the difference between $\x_{Ci}$ and $\x_{Cj}$ and is independent of the value of $x_S$. Thus the ICE plot for $x_S$ will display a set of $N$ curves that share a common shape but differ by level shifts according to the unique values of $\x_{C}$.
679
+
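This parallelism is easy to check numerically. The sketch below is an informal diagnostic (not part of the formal test of Section \ref{sec:testing}): it centers every ICE curve at its first grid point and reports the largest cross-curve spread, which is zero for an exactly additive fit and grows with the strength of fitted interactions.

\begin{verbatim}
import numpy as np

def parallelism_gap(ice_curves):
    """ice_curves: (N, G) array of ICE curves on a common grid.
    Returns 0 when the curves are exactly parallel (additive fit);
    larger values indicate fitted interactions."""
    centered = ice_curves - ice_curves[:, [0]]  # pinch at first grid point
    return centered.std(axis=0).max()           # worst-case spread
\end{verbatim}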
680
+ As an illustration, consider the following additive data generating model
681
+
682
+ \beqn
683
+ Y = X_{1}^2 + X_{2} + \errorrv, \quad X_{1}, X_{2} \iid \uniform{-1}{1}, \quad \errorrv \iid \normnot{0}{1}.
684
+ \eeqn
685
+
686
+ We simulate 1000 independent $(\X_i, Y_i)$ pairs according to the above and fit a generalized additive model \citep[\texttt{GAM},][]{Hastie1986} via the \texttt{R} package \texttt{gam} \citep{Hastie2013}. As we have specified it, the \texttt{GAM} assumes
687
+
688
+ \beqn
689
+ f(\X)=f_1(X_1)+f_2(X_2)+f_3(X_1 X_2)
690
+ \eeqn
691
+
692
+ \noindent where $f_1$, $f_2$ and $f_3$ are unknown functions estimated internally by the procedure using smoothing splines. Because $f_3$ appears in the model specification but not in the data generating process, any interaction effects that \texttt{GAM} fits are spurious.\footnote{If we were to eliminate $f_3$ from the \texttt{GAM} then we would know a priori that $\hat{f}$ would not display interaction effects.} Here, ICE plots inform us of the degree to which interactions were fit. Were there no interaction in $\hat{f}$ between $X_1$ and $X_2$, the ICE plots for $X_1$ would display a set of curves equivalent in shape but differing in level.
693
+
694
+ Figure \ref{fig:no_inter_ice} displays the ICE plots for $X_1$ and indicates that this is indeed the case: all curves display a similar parabolic relationship between $\hat{f}$ and $X_1$, shifted by a constant, and independent of the value of $X_2$. Accordingly, the associated d-ICE plot in Figure \ref{fig:no_inter_d-ice} displays little variation between curves. The ICE suite makes it apparent that $f_3$ (correctly) contributes relatively little to the \texttt{GAM} model fit. Note that additive structure cannot be observed from the PDP alone in this example (or any other).
695
+
696
+ \begin{figure}[htp]
697
+ \centering
698
+ \begin{subfigure}[b]{0.48\textwidth}
699
+ \centering
700
+ \includegraphics[width=2.35in]{no_inter_example.\iftoggle{usejpgs}{jpg}{pdf}}
701
+ \caption{ICE}
702
+ \label{fig:no_inter_ice}
703
+ \end{subfigure}~~
704
+ \begin{subfigure}[b]{0.48\textwidth}
705
+ \centering
706
+ \includegraphics[width=2.35in]{no_inter_example_d-ice.\iftoggle{usejpgs}{jpg}{pdf}}
707
+ \caption{d-ICE}
708
+ \label{fig:no_inter_d-ice}
709
+ \end{subfigure}
710
+ \caption{ICE and d-ICE plots for $S=X_1$ when $\hat{f}$ is a \texttt{GAM} with possible interaction effects between $X_1$ and $X_2$. So as to keep the plot uncluttered we plot only a fraction of all 1000 curves. In the ICE plots the dots indicate the actual location of $X_1$ for each curve.}
711
+ \label{fig:no_inter_example}
712
+ \end{figure}
713
+
714
+ \subsection{Finding Interactions and Regions of Interaction}\label{subsec:sim_interactions}
715
+
716
+ As noted in \citet{Friedmana}, the PDP is most instructive when there are no interactions between $x_S$ and the other features. In the presence of interaction effects, the averaging procedure in the PDP can obscure any heterogeneity in $\hat{f}$. Let us return to the simple interaction model
717
+
718
+ \bneqn\label{eq:criss_cross_model}
719
+ && Y = 0.2 X_{1} - 5X_{2} + 10X_{2} \indic{X_{3} \geq 0} + \errorrv, \\ \nonumber
720
+ && \errorrv \iid \normnot{0}{1}, \quad X_{1}, X_{2}, X_{3} \iid \uniform{-1}{1}
721
+ \eneqn
722
+
723
+ \noindent to examine the relationship between \texttt{SGB}'s $\hat{f}$ and $X_3$. Figure \ref{fig:criss_cross_sim_amdps_gbm} displays an ICE plot for $X_3$. Similar to the PDP we saw in Section \ref{sec:introduction}, the plot suggests that averaged over $X_1$ and $X_2$, $\hat{f}$ is not associated with $X_3$. By following the non-parallel ICE curves, however, it is clear that $X_3$ modulates the fitted value through interactions with $X_1$ and $X_2$.
724
+
725
+ Where in the range of $X_3$ do these interactions occur? The d-ICE plot of Figure \ref{fig:criss_cross_sim_amdps_gbm_d} shows that interactions are in a neighborhood around $X_3 \approx 0$. This is expected; in the model given by Equation \ref{eq:criss_cross_model}, being above or below $X_3 = 0$ changes the response level. The plot suggests that the fitted model's interactions are concentrated in $X_3 \in \bracks{-0.025,\,0.025}$ which we call the \qu{region of interaction} (ROI).
726
+
727
+ Generally, ROIs are identified by noting where the derivative lines are variable. In our example, the lines have highly variable derivatives (both positive and negative) in $\bracks{-0.025,\,0.025}$. The more heterogeneity in these derivative lines, the larger the effect of the interaction between $x_S$ and $\x_C$ on the model fit. ROIs can be seen most easily by plotting the standard deviation of the derivative lines at each $x_S$ value. In this example, the standard deviation function is plotted in the bottom pane of Figure \ref{fig:criss_cross_sim_amdps_gbm_d} and demonstrates that fitted interactions peak at $X_3 \approx 0$.
728
+
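The full example is easy to reproduce. The sketch below simulates Equation \ref{eq:criss_cross_model}, fits \texttt{scikit-learn}'s gradient boosting as a convenient stand-in for the \texttt{SGB} fit used here, and locates the interaction peak from the spread of the numerical derivatives; the seed, grid, and tuning are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(-1, 1, size=(n, 3))
y = (0.2 * X[:, 0] - 5 * X[:, 1]
     + 10 * X[:, 1] * (X[:, 2] >= 0) + rng.normal(0, 1, n))
model = GradientBoostingRegressor().fit(X, y)

# ICE curves for X_3 (column 2): hold (x_1, x_2) at each observation's
# values and sweep x_3 over a grid.
grid = np.linspace(-1, 1, 101)
curves = np.empty((n, grid.size))
for i in range(n):
    X_tmp = np.tile(X[i], (grid.size, 1))
    X_tmp[:, 2] = grid
    curves[i] = model.predict(X_tmp)

pdp = curves.mean(axis=0)                       # nearly flat on average
sd_deriv = np.gradient(curves, grid, axis=1).std(axis=0)
print("PDP range:", float(pdp.max() - pdp.min()))
print("interaction peak near X_3 =", float(grid[sd_deriv.argmax()]))
\end{verbatim}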
729
+ \begin{figure}[htp]
730
+ \centering
731
+ \begin{subfigure}[b]{0.48\textwidth}
732
+ \centering
733
+ \includegraphics[width=2.35in]{criss_cross_sim_gbm.\iftoggle{usejpgs}{jpg}{pdf}}
734
+ \caption{ICE (10 curves)}
735
+ \label{fig:criss_cross_sim_amdps_gbm}
736
+ \end{subfigure}~~
737
+ \begin{subfigure}[b]{0.48\textwidth}
738
+ \centering
739
+ \includegraphics[width=2.35in]{criss_cross_sim_gbm_d.\iftoggle{usejpgs}{jpg}{pdf}}
740
+ \caption{d-ICE}
741
+ \label{fig:criss_cross_sim_amdps_gbm_d}
742
+ \end{subfigure}
743
+ \caption{ICE plots for an \texttt{SGB} fit to the simple interaction model of Equation \ref{eq:criss_cross_model}.}
744
+ \label{fig:criss_cross_sim_amdps}
745
+ \end{figure}
746
+
747
+ \subsection{Extrapolation Detection}\label{subsec:sim_extrap}
748
+
749
+ As the number of predictors $p$ increases, the sample vectors $\x_1,\ldots \x_N$ are increasingly sparse in the feature space $\mathcal{X}$. A consequence of this curse of dimensionality is that for many $\x \in \mathcal{X}$, $\hat{f}(\x)$ represents an extrapolation rather than an interpolation (see \citealp{Hastie2009} for a more complete discussion).
750
+
751
+ Extrapolation may be of particular concern when using a black-box algorithm to generate a prediction at a new observation $\x_{\text{new}}$. Not only may $\hat{f}(\x_{\text{new}})$ be an extrapolation of the $(\x,y)$ relationship observed in the training data, but the black-box nature of $\hat{f}$ precludes us from gaining any insight into what the extrapolation might look like. Fortunately, ICE plots can cast light on these extrapolations.
752
+
753
+ Recall that each curve in the ICE plot includes the fitted value $\hat{f}(x_{Si}, \x_{Ci})$ where $x_{Si}$ is actually observed in the training data for the $i$th observation. The other points on this curve represent extrapolations in $\mathcal{X}$. Marking each curve in the ICE plot at the observed point helps us assess the presence and nature of $\hat{f}$'s hypothesized extrapolations in $\mathcal{X}$.
754
+
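A minimal \texttt{matplotlib} sketch of this device (assuming the curves were evaluated on a common grid) is given below.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def plot_ice_marked(grid, ice_curves, x_obs, color="gray"):
    """Draw ICE curves and mark each curve at its observed x_S value;
    everything away from the marked point is an extrapolation in x_S."""
    for curve, x_i in zip(ice_curves, x_obs):
        plt.plot(grid, curve, color=color, lw=0.5)
        j = np.argmin(np.abs(grid - x_i))   # grid point nearest x_Si
        plt.plot(grid[j], curve[j], "o", color=color, ms=3)
    plt.xlabel("$x_S$")
    plt.ylabel("fitted value")
\end{verbatim}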
755
+ Consider the following model:
756
+
757
+ \bneqn \label{eq:extrap}
758
+ && Y = 10X_1^2 + \indic{X_{2} \geq 0} + \errorrv, \\
759
+ && \errorrv \iid \normnot{0}{.1^2}, \quad \twovec{X_1}{X_2} \sim \begin{cases}
760
+ \uniform{-1}{0}, ~~\uniform{-1}{0} & \withprob \third \\
761
+ \uniform{0}{1}, ~~\uniform{-1}{0} & \withprob \third \\ \nonumber
762
+ \uniform{-1}{0}, ~~\uniform{0}{1} & \withprob \third \\ \nonumber
763
+ \uniform{0}{1}, ~~\uniform{0}{1} & \withprob 0.
764
+ \end{cases}
765
+ \eneqn
766
+
767
+ Notice $\prob{X_1>0,\,X_2>0}=0$, leaving the quadrant $[0, 1] \times [0, 1]$ empty. We simulate 1000 observations and fit a \texttt{RF} model to the data. The ICE plot for $x_1$ is displayed in Figure \ref{fig:extrapsim a} with the points corresponding to the 1000 observed $(x_1,\,x_2)$ values marked by dots. We highlight observations with $x_2 < 0$ in red and those with $x_2 \geq 0$ in blue. The two subsets are plotted separately in Figures \ref{fig:extrapsim b} and \ref{fig:extrapsim c}.
768
+
769
+ The absence of points on the blue curves where both $x_1 > 0$ and $x_2 > 0$ reflects the fact that $\prob{X_1 > 0,\,X_2 > 0} = 0$. From Figure \ref{fig:extrapsim c}, we see that in this region of $\mathcal{X}$, $\hat{f}$ increases roughly in proportion to $x_1^2$ even though no data exist there. Ostensibly the \texttt{RF} model has extrapolated the polynomial relationship from the observed $\mathcal{X}$-space to the region where both $x_1 > 0$ and $x_2 > 0$.
770
+
771
+ Whether it is desirable for $\hat{f}$ to display such behavior in unknown regions of $\mathcal{X}$ is dependent on the character of the extrapolations in conjunction with the application at hand. Moreover, different algorithms will likely give different extrapolations. Examining the ICE plots can reveal the nature of these extrapolations and guide the user to a suitable choice.
772
+
773
+ \begin{figure}[htp]
774
+ \centering
775
+ \begin{subfigure}[b]{0.32\textwidth}
776
+ \centering
777
+ \includegraphics[width=1.9in]{extrap_sim_full_red.\iftoggle{usejpgs}{jpg}{pdf}}
778
+ \caption{All observations}
779
+ \label{fig:extrapsim a}
780
+ \end{subfigure}~~
781
+ \begin{subfigure}[b]{0.32\textwidth}
782
+ \centering
783
+ \includegraphics[width=1.9in]{extrap_sim_noextrap_red.\iftoggle{usejpgs}{jpg}{pdf}}
784
+ \caption{Observations with $x_2<0$}
785
+ \label{fig:extrapsim b}
786
+ \end{subfigure}
787
+ \begin{subfigure}[b]{0.32\textwidth}
788
+ \centering
789
+ \includegraphics[width=1.9in]{extrap_sim_extrap_blue.\iftoggle{usejpgs}{jpg}{pdf}}
790
+ \caption{Observations with $x_2 \geq 0$}
791
+ \label{fig:extrapsim c}
792
+ \end{subfigure}
793
+ \caption{ICE plots for $S=x_1$ of a \texttt{RF} model fit to Equation \ref{eq:extrap}. The left plot shows the ICE plot for the entire dataset, with curves for observations with $x_2 < 0$ colored red and those with $x_2 \geq 0$ colored blue. The middle plot shows only the red curves and the right only the blue. Recall that there is no training data in the quadrant $[0, 1] \times [0, 1]$, and so Figure \ref{fig:extrapsim c} contains no points for observed values when $x_1 > 0$ (when both $x_1$ and $x_2$ are positive). Nevertheless, from Figure \ref{fig:extrapsim c}'s ICE curves it is apparent that the fitted values are increasing in $x_1$ for values above $0$. Here, the ICE plot elucidates the existence and nature of the \texttt{RF}'s extrapolation outside the observed $\mathcal{X}$-space.}
794
+ \label{fig:extrapsim}
795
+ \end{figure}
796
+
797
+ \section{Real Data}\label{sec:real_data}
798
+
799
+ We now demonstrate the ICE toolbox on three real data examples. We emphasize features of $\hat{f}$ that might otherwise have been overlooked.
800
+
801
+ \subsection{Depression Clinical Trial}\label{subsec:depression}
802
+
803
+ The first dataset comes from a depression clinical trial \citep{DeRubeis2013}. The response variable is the Hamilton Depression Rating Scale (a common composite score of symptoms of depression where lower scores correspond to being less depressed) after 15 weeks of treatment. The treatments are placebo, cognitive therapy (a type of one-on-one counseling), and paroxetine (an anti-depressant medication). The study also collected 37 covariates which are demographic (e.g. age, gender, income) or related to the medical history of the subject (e.g. prior medications and whether the subject was previously treated). For this illustration, we drop the placebo subjects to focus on the 156 subjects who received either of the two active treatments.
804
+
805
+ The goal of the analysis in \citet{DeRubeis2013} is to understand how different subjects respond to different treatments, conditional on their \textit{personal} covariates. The difference between the two active treatments, assuming the classic linear (and additive) model for treatment, was found to be statistically insignificant. If the clinician believes that the treatment effect is heterogeneous and the relationship between the covariates and response is complex, then flexible nonparametric models could be an attractive exploratory tool.
806
+
807
+ Using the ICE toolbox, one can visualize the impact of the treatment variable on an $\hat{f}$ given by a black box algorithm. Note that extrapolations in the treatment indicator (i.e. predicting at 0 for an observed 1 or vice versa) correspond to counterfactuals in a clinical setting, allowing the researcher to see how the same patient might have responded to a different treatment.
808
+
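For a binary treatment, each subject's ICE \qu{curve} consists of just two fitted values, one per arm, and their difference is a model-based individual treatment effect. A minimal sketch follows; it assumes a \texttt{numpy} design matrix and a fitted model exposing a \texttt{predict} method, and is not the \texttt{BART} implementation used in this section.

\begin{verbatim}
import numpy as np

def treatment_ice(model, X, t_col):
    """Predict for every subject under both treatment codings; the
    paired predictions form each subject's two-point ICE curve and
    their difference is the model's individual treatment effect."""
    X0, X1 = X.copy(), X.copy()
    X0[:, t_col] = 0   # cognitive therapy
    X1[:, t_col] = 1   # paroxetine
    return model.predict(X1) - model.predict(X0)
\end{verbatim}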
809
+ We first modeled the response as a function of the 37 covariates as well as treatment to obtain the best fit of the functional relationship using the black-box algorithm \texttt{BART} (implemented by \citealp{Kapelner2013}) and obtained an in-sample $R^2 \approx 0.40$.
810
+
811
+ Figure \ref{fig:depression} displays an ICE plot of the binary treatment variable, with cognitive therapy coded as ``0'' and paroxetine coded as ``1'', colored by marital status (blue if married and red if unmarried). The plot shows a flat PDP, demonstrating no relationship between the predicted response and treatment when averaging over the effects of other covariates. However, the crossing of ICE curves indicates the presence of interactions in $\hat{f}$, which is confirmed by the c-ICE plot in Figure \ref{fig:depression_c}. After centering, it becomes clear that the flat PDP obscures a complex relationship: the model predicts treatment differences of between $-3$ and $+3$ points on the Hamilton scale, which is a highly clinically significant range (and almost 20\% of the observed response's range). Further, we can see that \texttt{BART} fits an interaction between treatment and marital status: married subjects are generally predicted to do better on cognitive therapy and unmarried subjects are predicted to do better with paroxetine.
812
+
813
+ \begin{figure}[htp]
814
+ \centering
815
+ \begin{subfigure}[b]{0.48\textwidth}
816
+ \centering
817
+ \includegraphics[width=2.5in]{cog_therapy_married_red.\iftoggle{usejpgs}{jpg}{pdf}}
818
+ \caption{ICE}
819
+ \label{fig:depression}
820
+ \end{subfigure}~~
821
+ \begin{subfigure}[b]{0.48\textwidth}
822
+ \centering
823
+ \includegraphics[width=2.5in]{cog_therapy_married_red_c.\iftoggle{usejpgs}{jpg}{pdf}}
824
+ \caption{c-ICE}
825
+ \label{fig:depression_c}
826
+ \end{subfigure}
827
+ \caption{ICE plots of a \texttt{BART} model for the effect of treatment on depression score after 15 weeks. Married subjects are colored in blue and unmarried subjects are colored in red.}
828
+ \label{fig:depression_ices}
829
+ \end{figure}
830
+
831
+ \subsection{White Wine}\label{subsec:white_wine}
832
+
833
+ The second dataset concerns 5,000 white wines produced in the \textit{vinho verde} region of Portugal, obtained from the UCI repository \citep{Bache2013}. The response variable is a wine quality metric, taken to be the median preference score of three blind tasters on a scale of 1-10, treated as continuous. The 11 covariates are physicochemical metrics that are commonly collected for wine quality control, such as citric acid content, sulphates, etc. The model is fit with a neural network (\texttt{NN}) using the R package \texttt{nnet} \citep{Venables2002}. We fit a \texttt{NN} with 3 hidden units and a small parameter value for weight decay\footnote{Note that \texttt{NN} models are highly sensitive to the number of hidden units and the weight decay parameter. We therefore offer the following results as merely representative of the type of plots which \texttt{NN} models can generate.} and achieved an in-sample $R^2$ of approximately 0.37.
834
+
835
+ \begin{figure}[htp]
836
+ \centering
837
+ \begin{subfigure}[b]{0.48\textwidth}
838
+ \centering
839
+ \includegraphics[width=2.35in]{white_wine_ph_by_median_alcohol_c_nnets.\iftoggle{usejpgs}{jpg}{pdf}}
840
+ \caption{c-ICE for \texttt{NN}}
841
+ \label{fig:white_wine_c_nn}
842
+ \end{subfigure}~~
843
+ \begin{subfigure}[b]{0.48\textwidth}
844
+ \centering
845
+ \includegraphics[width=2.35in]{white_wine_ph_by_median_alcohol_d_nnets.\iftoggle{usejpgs}{jpg}{pdf}}
846
+ \caption{d-ICE for \texttt{NN}}
847
+ \label{fig:white_wine_d_nn}
848
+ \end{subfigure}
849
+ \caption{ICE plots of \texttt{NN} model for wine ratings versus \texttt{pH} of white wine colored by whether the alcohol content is high (blue) or low (red). To prevent cluttering, only a fraction of the 5,000 observations are plotted.}
850
+ \label{fig:white_wine}
851
+ \end{figure}
852
+
853
+ We find the covariate \texttt{pH} to be the most illustrative. The c-ICE plot is displayed in Figure \ref{fig:white_wine_c_nn}. Wines with high alcohol content are colored blue and wines with low alcohol content are colored red. Note that the PDP shows a linear trend, indicating that on average, higher \texttt{pH} is associated with higher fitted preference scores. While this is the general trend for wines with higher alcohol content, the ICE plots reveal that interaction effects are present in $\hat{f}$. For many white wines with low alcohol content, the illustration suggests a nonlinear and cumulatively \textit{negative} association. For these wines, the predicted preference score is actually negatively associated with \texttt{pH} for low values of \texttt{pH} and then begins to increase --- a severe departure from what the PDP suggests. However, the area of increase contains no data points, signifying that the increase is merely an extrapolation likely driven by the positive trend of the high alcohol wines. Overall, the ICE plots indicate that for more alcoholic wines, the predicted score is increasing in \texttt{pH} while the opposite is true for wines with low alcohol content. The difference in cumulative effect is also meaningful: as \texttt{pH} moves from its minimum to its maximum observed value, predicted white wine scores change by roughly 40\% of the range of the response variable.
854
+
855
+ Examining the derivative plot of Figure \ref{fig:white_wine_d_nn} confirms the observations made above. The \texttt{NN} model suggests interactions exist for lower values of \texttt{pH} in particular. Wines with high alcohol content have mostly positive derivatives while those with low alcohol content have mostly negative derivatives. As \texttt{pH} increases, the standard deviation of the derivatives decreases, suggesting that interactions are less prevalent at higher levels of \texttt{pH}.
856
+
857
+ \subsection{Diabetes Classification in Pima Indians}\label{subsec:pima}
858
+
859
+ The last dataset consists of 332 Pima Indians \citep{Smith1988} obtained from the \texttt{R} library \texttt{MASS}. Of the 332 subjects, 109 were diagnosed with diabetes; this diagnosis is the binary response variable and is fit using seven predictors (body metrics such as blood pressure, glucose concentration, etc.). We model the data using a \texttt{RF} and achieve an out-of-bag misclassification rate of 22\%.
860
+
861
+ \begin{figure}[htp]
862
+ \centering
863
+ \begin{subfigure}[b]{0.48\textwidth}
864
+ \centering
865
+ \includegraphics[width=2.35in]{pima_skin_by_age_c.\iftoggle{usejpgs}{jpg}{pdf}}
866
+ \caption{c-ICE}
867
+ \label{fig:pima_skin_age_c}
868
+ \end{subfigure}~~
869
+ \begin{subfigure}[b]{0.48\textwidth}
870
+ \centering
871
+ \includegraphics[width=2.35in]{pima_skin_by_age_d.\iftoggle{usejpgs}{jpg}{pdf}}
872
+ \caption{d-ICE}
873
+ \label{fig:pima_skin_age_d}
874
+ \end{subfigure}
875
+ \caption{ICE plots of a \texttt{RF} model for estimated centered logit of the probability of contracting diabetes versus \texttt{skin} colored by subject \texttt{age}.}
876
+ \label{fig:pima_skin_age}
877
+ \end{figure}
878
+
879
+ Once again, ICE plots offer the practitioner a more comprehensive view of the output of the black box. For example, the covariate \texttt{skin} (triceps skin fold thickness) is plotted as a c-ICE in Figure \ref{fig:pima_skin_age_c}. The PDP clearly shows that the predicted centered log-odds of contracting diabetes increases with \texttt{skin}. This is expected given that \texttt{skin} is a proxy for obesity, a major risk factor for diabetes. However, the ICE plot illustrates a more elaborate model fit. Many subjects with high \texttt{skin} have a flat risk of diabetes according to $\hat{f}$; others with comparable thickness exhibit a much larger centered log-odds increase.\footnote{The curves at the top of the figure mainly correspond to younger people. Their estimated effect of high thickness is seen to be an extrapolation.} Figure \ref{fig:pima_skin_age_d} shows that the \texttt{RF} model fits interactions across the range of \texttt{skin}, with the largest heterogeneity in effect occurring when \texttt{skin} is slightly above 30. This can be seen in the standard deviation of the derivative in the bottom pane of Figure \ref{fig:pima_skin_age_d}.
880
+
881
+
882
+
883
+
884
+ \section{A Visual Test for Additivity}\label{sec:testing}
885
+
886
+ Thus far we have used the ICE toolbox to explore the output of black box models. We have explored whether $\hat{f}$ has additive structure or if interactions exist, and also examined $\hat{f}$'s extrapolations in $\mathcal{X}$-space. To better visualize interactions, we plotted individual curves in colors according to the value of a second predictor $x_k$. We have \emph{not} asked whether these findings are reflective of phenomena in any underlying model.
887
+
888
+ When heterogeneity in ICE plots is observed, the researcher can adopt two mindsets. When one considers $\hat{f}$ to be the fitted model used for subsequent predictions, the heterogeneity is of interest because it determines future fitted values. This is the mindset we have considered thus far. Separately, it might be interesting to ascertain whether interactions between $x_S$ and $\x_C$ exist in the data generating model, denoted $f$. This question exists for other discoveries made using ICE plots, but we focus here on interactions.
889
+
890
+ The problem of assessing the statistical validity of discoveries made by examining plots is addressed in \citet{Buja2009} and \citet{Wickham2010}. The central idea in these papers is to insert the observed plot randomly into a lineup of null plots generated from data sampled under a null distribution. If the single real plot is correctly identified amongst 19 null plots, for example, then ``the discovery can be assigned a $p$-value of $0.05$'' \citep{Buja2009}. A benefit of this approach is that the procedure is valid despite the fact that we have not specified the form of the alternative distribution --- the simple instruction ``find the plot that appears different'' is sufficient.
891
+
892
+ \subsection{Procedure}
893
+ We adapt this framework to the specific problem of using ICE plots to evaluate additivity in a statistically rigorous manner. For the exposition in this section, suppose that the response $y$ is continuous, the covariates $\x$ are fixed, and $y=f(\x) + \errorrv$. Further assume $\expe{\errorrv}=0$ and
894
+
895
+ \bneqn\label{eq:additive}
896
+ f(\x) = g(x_S) + h(\x_C),
897
+ \eneqn
898
+
899
+ \noindent meaning the true $\x$-conditional expectation of $y$ is additive in functions of $x_S$ and $\x_C$. Let $F$ be the distribution of $\hat{f}$ when Equation \ref{eq:additive} holds and $f$ is additive. We wish to test $H_0$: $\hat{f} \sim F$ versus $H_a$: $H_0$ is false.
900
+
901
+ Recall that ICE plots displaying non-parallel curves suggest that $\hat{f}$ is not additive in functions of $x_S$ and $\x_C$. Thus if we can correctly identify a plot displaying such features amongst $K-1$ null plots generated under $F$, the discovery is valid at $\alpha = 1/K$.
902
+
903
+ We sample from $F$ by using backfitting \citep{Breiman1985} to generate $\gstar$ and $\hstar$, estimates of $g$ and $h$, and then bootstrapping the residuals. Both $\gstar$ and $\hstar$ can be obtained via any supervised learning procedure. The general procedure for $|S|=1$ is as follows; a minimal code sketch of steps 1 and 2 appears after the list.
904
+
905
+ \begin{enumerate}[1]
906
+ \item Using backfitting, obtain $\gstar$ and $\hstar$. Then compute a vector of fitted values $\yhatstar = \gstar(x_S) + \hstar(\x_C)$ and a vector of residuals $\rstar := y - \yhatstar$.
907
+ \item Let $\bv{r}_b$ be a random resampling of $\rstar$. If heteroscedasticity is of concern, one can keep $\rstar$'s absolute values fixed and let $\bv{r}_b$ be a permutation of $\rstar$'s signs. Define $\bv{y}_b:=\yhatstar+\bv{r}_b$. Note that $\cexpe{\bv{y}_b}{\x}$ is additive in $\gstar(x_S)$ and $\hstar(\x_{C})$.
908
+ \item Fit $\bv{y}_b$ to $\X$ using the same learning algorithm that generated the original ICE (c-ICE or d-ICE) plot to produce $\hat{f}_{b}$. This yields a potentially non-additive approximation to null data generated using an additive model.
909
+ \item Display an ICE (or c-ICE or d-ICE) plot for $\hat{f}_b$. Deviations from additivity observed in this plot must be due to sources other than interactions between $x_S$ and $\x_{C}$ in the underlying data.
910
+ \item Repeat steps (2) - (4) $K - 1$ times, then randomly insert the true plot amongst these $K - 1$ null plots.
911
+ \item If the viewer can correctly identify the true plot amongst all $K$ plots, the discovery is valid for level $\alpha = 1 / K$. Note that the discovery is conditional on the procedures for generating $\gstar$ and $\hstar$.
912
+ \end{enumerate}
913
+
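The following sketch implements steps 1 and 2 for the sign-flip variant. It accepts any pair of regression learners with \texttt{fit}/\texttt{predict} methods in place of the specific choices of $\gstar$ and $\hstar$ discussed below, and the number of backfitting sweeps is an arbitrary illustrative choice.

\begin{verbatim}
import numpy as np

def null_response(g, h, x_s, X_c, y, n_sweeps=10, rng=None):
    """Backfit an additive approximation g*(x_S) + h*(x_C) to y, then
    return a null response y_b = yhat* + r_b, where r_b keeps the
    residuals' absolute values and randomly flips their signs."""
    rng = rng if rng is not None else np.random.default_rng()
    g_fit = np.zeros(len(y))
    h_fit = np.zeros(len(y))
    for _ in range(n_sweeps):                    # backfitting sweeps
        g.fit(x_s.reshape(-1, 1), y - h_fit)
        g_fit = g.predict(x_s.reshape(-1, 1))
        h.fit(X_c, y - g_fit)
        h_fit = h.predict(X_c)
    y_hat = g_fit + h_fit
    r = y - y_hat
    return y_hat + rng.choice([-1.0, 1.0], size=len(r)) * np.abs(r)
\end{verbatim}

Refitting the original learner to $(\X, \bv{y}_b)$ and drawing its ICE plot yields one null panel (steps 3 and 4); repeating this $K-1$ times and randomly inserting the real plot completes the lineup.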
914
+ \subsection{Examples}\label{subsec:visualization_additivity_test}
915
+
916
+ An application of this visual test where $g$ is taken to be the \qu{supersmoother} \citep{Friedman1984} and $h$ is a \texttt{BART} model is illustrated using the depression data of Section \ref{subsec:depression}. We sample $\bv{r}_b$ by permuting signs. The data analyst might be curious if the ICE plot is consistent with the treatment being additive in the model. We employ the additivity lineup test in Figure \ref{fig:depression_treatment_additivity_lineup_test} using 20 images. We reject the null hypothesis of additivity of the treatment effect at $\alpha = 1 / 20 = 0.05$ since the true plot (row 2, column 2) is clearly identifiable. This procedure can be a useful test in clinical settings when the treatment effect is commonly considered linear and additive and can alert the practitioner that interactions should be investigated.
917
+
918
+ \begin{figure}[htp]
919
+ \centering
920
+ \includegraphics[width=2.5in]{additivity_lineup_2.\iftoggle{usejpgs}{jpg}{pdf}} \caption{Additivity lineup test for the predictor \texttt{treatment} in the depression clinical trial dataset of Section \ref{subsec:depression}.}
921
+ \label{fig:depression_treatment_additivity_lineup_test}
922
+ \end{figure}
923
+
924
+ Another application of this visual test where $g$ is taken to be the supersmoother and $h$ is a \texttt{NN} model is illustrated using the wine data of Section \ref{subsec:white_wine}. Here again we sample $\bv{r}_b$ by permuting signs. The data analyst may want to know if the fitted model is suggestive of interactions between \texttt{pH} and the remaining features in the underlying model. We employ the additivity lineup test in Figure \ref{fig:wine_additivity_lineup_test}, again using 20 images.
925
+
926
+ Looking closely one sees that the first and third plots in the last row have the largest range of cumulative effects and exhibit more curvature in individual curves than most of the other plots, making them the most extreme violations of the null. Readers that singled out the first plot in the last row would have a valid discovery at $\alpha = .05$, but clearly the evidence of non-additivity is much weaker here than in the previous example. Whereas Figure \ref{fig:depression_treatment_additivity_lineup_test} suggests the real plot is identifiable amongst more than 20 images, it would be easy to confuse Figure \ref{fig:wine_additivity_lineup_test}'s true plot with the one in row 4, column 3. Hence there is only modest evidence that \texttt{pH}'s impact on $\hat{f}$ is different from what a \texttt{NN} might generate if there were no interactions between \texttt{pH} and the other predictors.
927
+
928
+ \begin{figure}[htp]
929
+ \centering
930
+ \includegraphics[width=2.5in]{wine_nnet_lineup_red.\iftoggle{usejpgs}{jpg}{pdf}}
931
+ \caption{Additivity lineup test for the predictor \texttt{pH} in the white wine dataset of Section \ref{subsec:white_wine}.}
932
+ \label{fig:wine_additivity_lineup_test}
933
+ \end{figure}
934
+
935
+ \section{Discussion}\label{sec:discussion}
936
+
937
+ We developed a suite of tools for visualizing the fitted values generated by an arbitrary supervised learning procedure. Our work extends the classical partial dependence plot (PDP), which has rightfully become a very popular visualization tool for black-box machine learning output. The partial functional relationship, however, often varies conditionally on the values of the other variables. The PDP offers only the average of these relationships, so the individual conditional relationships are masked from the researcher. These individual conditional relationships can now be visualized, giving researchers additional insight into how a given black-box learning algorithm makes use of covariates to generate predictions.
938
+
939
+ The ICE plot, our primary innovation, plots an entire distribution of individual conditional expectation functions for a variable $x_S$. Through simulations and real data examples, we illustrated much of what can be learned about the estimated model $\hat{f}$ with the help of ICE. For instance, when the remaining features $\x_C$ do not influence the association between $x_S$ and $\hat{f}$, all ICE curves lie on top of one another. When $\hat{f}$ is additive in functions of $\x_C$ and $x_S$, the curves lie parallel to each other. And when the partial effect of $x_S$ on $\hat{f}$ is influenced by $\x_{C}$, the curves will differ from each other in shape. Additionally, by marking each curve at the $x_S$ value observed in the training data, one can better understand $\hat{f}$'s extrapolations. Sometimes these properties are more easily distinguished in the complementary \qu{centered ICE} (c-ICE) and \qu{derivative ICE} (d-ICE) plots. In sum, the suite of ICE plots provides a tool for visualizing an arbitrary fitted model's map between predictors and predicted values.
940
+
941
+ The ICE suite has a number of possible uses that were not explored in this work. While we illustrate ICE plots using the same data as was used to fit $\hat{f}$, out-of-sample ICE plots could also be valuable. For instance, ICE plots generated from random vectors in $\reals^p$ can be used to explore other parts of $\mathcal{X}$ space, an idea advocated by \citet{Plate2000}. Further, for a single out-of-sample observation, plotting an ICE curve for each predictor can illustrate the sensitivity of the fitted value to changes in each predictor for this particular observation, which is the goal of the \qu{contribution plots} of \citet{Strumbelj2011}. Additionally, investigating ICE plots from $\hat{f}$'s produced by multiple statistical learning algorithms can help the researcher compare models. Exploring other functionality offered by the \texttt{ICEbox} package, such as the ability to cluster ICE curves, is similarly left for subsequent research.
942
+
943
+ The tools summarized thus far pertain to \emph{exploratory} analysis. Many times the ICE toolbox provides evidence of interactions, but how does this evidence compare to what these plots would have looked like if no interactions existed? Section \ref{sec:testing} proposed a \emph{testing} methodology. By generating additive models from a null distribution and introducing the actual ICE plot into the lineup, interaction effects can be distinguished from noise, providing a test at a known level of significance. Future work will extend the testing methodology to other null hypotheses of interest.
944
+
945
+ \section*{Supplementary Materials}\label{sec:replication} The procedures outlined in Section \ref{sec:algorithm} are implemented in the \texttt{R} package \texttt{ICEbox} available on \texttt{CRAN}. Simulated results, tables, and figures specific to this paper can be replicated via the script included in the supplementary materials.
946
+ The depression data of Section \ref{subsec:depression} cannot be released due to privacy concerns.
947
+
948
+ \section*{Acknowledgements}
949
+
950
+ We thank Richard Berk for insightful comments on multiple drafts and suggesting color overloading. We thank Andreas Buja for helping conceive the testing methodology. We thank Abba Krieger for his helpful suggestions. We also wish to thank Zachary Cohen for the depression data of Section \ref{subsec:depression} and helpful comments. Alex Goldstein acknowledges support from the Simons Foundation Autism Research Initiative. Adam Kapelner acknowledges support from the National Science Foundation's Graduate Research Fellowship Program.
951
+
952
+ \bibliographystyle{apalike}\bibliography{ice_paper}
953
+
954
+ \appendix
955
+
956
+ \section{Algorithms}\label{app:algorithms}
957
+
958
+ \begin{algorithm}[htp]
959
+ \small
960
+ \caption{\small ICE algorithm: Given $\X$, the $N \times p$ feature matrix, $\hat{f}$,
961
+ the fitted model, $S \subset \braces{1, \ldots, p}$, the subset of predictors for which to
962
+ compute partial dependence, return $\hat{f}^{(1)}_S, \ldots, \hat{f}^{(N)}_S$, the estimated partial dependence curves for constant values of $\x_C$.}
963
+ \begin{algorithmic}[1]
964
+ \Function{ICE}{$\X$, $\hat{f}$, $S$}
965
+ \For{$i \gets 1 \ldots N$}
966
+ \State $\hat{f}^{(i)}_S \gets \bv{0}_{N \times 1}$
967
+ \State $\x_C \gets \X[i, C]$ \Comment{fix $\x_C$ at the $i$th observation's $C$ columns}
968
+ \For{$\ell \gets 1 \ldots N$}
969
+ \State $\x_S \gets \X[\ell, S]$ \Comment{vary $\x_S$}
970
+ \State $\hat{f}^{(i)}_{S \ell} \gets \hat{f}(\bracks{\x_S,~\x_C})$
971
+ \Comment{the $i$th curve's $\ell$th coordinate}
972
+ \EndFor
973
+ \EndFor
974
+ \State \Return $[\hat{f}^{(1)}_{S}, \ldots, \hat{f}^{(N)}_{S}]$
975
+ \EndFunction
976
+ \end{algorithmic}
977
+ \label{algo:ice}
978
+ \end{algorithm}
979
+
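For readers who prefer code to pseudocode, a direct Python translation of Algorithm \ref{algo:ice} for a single predictor is given below; it is a sketch assuming a \texttt{numpy} feature matrix and a fitted model exposing \texttt{predict}, not the \texttt{ICEbox} implementation.

\begin{verbatim}
import numpy as np

def ice(model, X, s):
    """Row i of the returned (N, N) matrix is observation i's ICE
    curve, evaluated at the N observed values of predictor s while
    the remaining columns are held at observation i's values."""
    N = X.shape[0]
    curves = np.empty((N, N))
    for i in range(N):
        X_i = np.tile(X[i], (N, 1))   # fix x_C at observation i
        X_i[:, s] = X[:, s]           # vary x_S over observed values
        curves[i] = model.predict(X_i)
    return curves
\end{verbatim}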
980
+ \begin{algorithm}[htp]
981
+ \small
982
+ \caption{\small d-ICE algorithm: Given $\X$, the $N \times p$ feature matrix; $\hat{f}^{(1)}_S, \ldots, \hat{f}^{(N)}_S$, the estimated partial dependence functions for subset $S$ in the ICE plot; $D$, a function that computes the numerical derivative; returns $d\hat{f}^{(1)}_S, \ldots, d\hat{f}^{(N)}_S$, the derivatives of the estimated partial dependence. In our implementation $D$ first smooths the ICE plot using the ``supersmoother'' and subsequently estimates the derivative from the smoothed ICE plot.}
983
+ \begin{algorithmic}[1]
984
+ \Function{d-ICE}{$\X, \hat{f}^{(1)}_S, \ldots, \hat{f}^{(N)}_S, D$}
985
+ \For{$i \gets 1 \ldots N$}
986
+ \State $d\hat{f}^{(i)}_S \gets \bv{0}_{N \times 1}$
987
+ \State $\x_C \gets \X[i, C]$ \Comment{row of the $i$th observation, columns corresponding to $C$}
988
+ \For{$\ell \gets 1 \ldots N$}
989
+ \State $\x_S \gets \X[\ell, S]$
990
+ \State $d\hat{f}^{(i)}_{S\ell} \gets D\bracks{\hat{f}^{(i)}(\x_S, \x_C)}$ \Comment{numerical partial derivative at $\hat{f}^{(i)}(\x_S, \x_C)$ w.r.t. $\x_S$}
991
+ \EndFor
992
+ \EndFor
993
+ \State \Return $[d\hat{f}^{(1)}_{S}, \ldots, d\hat{f}^{(N)}_{S}]$
994
+ \EndFunction
995
+ \end{algorithmic}
996
+ \label{algo:nppb}
997
+ \end{algorithm}
998
+
999
+ \end{document}
papers/1311/1311.2524.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1311/1311.2901.tex ADDED
@@ -0,0 +1,993 @@
1
+
2
+
3
+ \documentclass{article}
4
+
5
+ \usepackage{graphicx} \usepackage{subfigure}
6
+
7
+ \usepackage{natbib,amsmath}
8
+
9
+ \usepackage{algorithm}
10
+ \usepackage{algorithmic}
11
+
12
+ \usepackage{hyperref}
13
+
14
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
15
+
16
+ \usepackage[accepted]{icml2013}
17
+
18
+
19
+ \def\x{{\mathbf x}}
20
+ \def\h{{\mathbf h}}
21
+ \def\b{{\mathbf b}}
22
+ \def\t{{\mathbf t}}
23
+ \def\L{{\cal L}}
24
+
25
+ \def\BE{\vspace{-0.0mm}\begin{equation}}
26
+ \def\EE{\vspace{-0.0mm}\end{equation}}
27
+ \def\BEA{\vspace{-0.0mm}\begin{eqnarray}}
28
+ \def\EEA{\vspace{-0.0mm}\end{eqnarray}}
29
+
30
+ \newcommand{\alg}[1]{Algorithm \ref{alg:#1}}
31
+ \newcommand{\eqn}[1]{Eqn.~\ref{eqn:#1}}
32
+ \newcommand{\fig}[1]{Fig.~\ref{fig:#1}}
33
+ \newcommand{\tab}[1]{Table~\ref{tab:#1}}
34
+ \newcommand{\secc}[1]{Section~\ref{sec:#1}}
35
+ \def\etal{{\textit{et~al.~}}}
36
+ \def\units{{\text{units of }}}
37
+
38
+
39
+ \icmltitlerunning{Visualizing and Understanding Convolutional Networks}
40
+
41
+ \begin{document}
42
+
43
+ \twocolumn[
44
+ \icmltitle{Visualizing and Understanding Convolutional Networks}
45
+
46
+ \icmlauthor{Matthew D. Zeiler}{zeiler@cs.nyu.edu}
47
+ \icmladdress{Dept. of Computer Science, Courant Institute,
48
+ New York University}
49
+ \icmlauthor{Rob Fergus}{fergus@cs.nyu.edu}
50
+ \icmladdress{Dept. of Computer Science, Courant Institute,
51
+ New York University}
52
+
53
+ \icmlkeywords{boring formatting information, machine learning, ICML}
54
+
55
+ \vskip 0.3in
56
+ ]
57
+
58
+ \begin{abstract}
59
+ Large Convolutional Network models have recently demonstrated
60
+ impressive classification performance on the ImageNet benchmark
61
+ \cite{Kriz12}. However there is no clear understanding of
62
+ why they perform so well, or how they might be improved. In this
63
+ paper we address both issues. We introduce a novel visualization
64
+ technique that gives insight into the function of intermediate feature
65
+ layers and the operation of the classifier. Used in a diagnostic role, these
66
+ visualizations allow us to find model architectures that outperform Krizhevsky
67
+ \etal on the ImageNet classification benchmark. We also perform an
68
+ ablation study to discover the performance contribution from different
69
+ model layers. We show our ImageNet model
70
+ generalizes well to other datasets: when the softmax classifier is
71
+ retrained, it convincingly beats the current state-of-the-art
72
+ results on Caltech-101 and Caltech-256 datasets.
73
+ \end{abstract}
74
+
75
+ \vspace{-4mm}
76
+ \section{Introduction}
77
+ \vspace{-2mm}
78
+ Since their introduction by \cite{Lecun1989} in the early 1990's, Convolutional
79
+ Networks (convnets) have demonstrated excellent performance at
80
+ tasks such as hand-written digit classification and face
81
+ detection. In the last year, several papers have shown that they can also deliver
82
+ outstanding performance on more challenging visual classification tasks. \cite{Ciresan12} demonstrate state-of-the-art performance on NORB and
83
+ CIFAR-10 datasets. Most notably, \cite{Kriz12} show
84
+ record beating performance on the ImageNet 2012 classification
85
+ benchmark, with their convnet model achieving an error rate of 16.4\%,
86
+ compared to the 2nd place result of 26.1\%. Several factors are
87
+ responsible for this renewed interest in convnet models: (i) the
88
+ availability of much larger training sets, with millions of labeled
89
+ examples; (ii) powerful GPU implementations, making the training of
90
+ very large models practical and (iii) better model regularization
91
+ strategies, such as Dropout \cite{Hinton12}.
92
+
93
+ Despite this encouraging progress, there is still little insight into
94
+ the internal operation and behavior of these complex models, or how
95
+ they achieve such good performance. From a scientific standpoint,
96
+ this is deeply unsatisfactory. Without clear understanding of how and
97
+ why they work, the development of better models is reduced to
98
+ trial-and-error. In this paper we introduce a visualization technique
99
+ that reveals the input stimuli that excite individual feature maps at any
100
+ layer in the model. It also allows us to observe the evolution of
101
+ features during training and to diagnose potential problems with the
102
+ model. The visualization technique we propose uses a multi-layered
103
+ Deconvolutional Network (deconvnet), as proposed by \cite{Zeiler11},
104
+ to project the feature activations back to the input pixel space. We also perform a
105
+ sensitivity analysis of the classifier output by occluding portions of
106
+ the input image, revealing which parts of the scene are important for
107
+ classification.
108
+
109
+ Using these tools, we start with the architecture of \cite{Kriz12} and
110
+ explore different architectures, discovering ones that outperform
111
+ their results on ImageNet. We then explore the generalization ability
112
+ of the model to other datasets, just retraining the softmax classifier
113
+ on top. As such, this is a form of supervised pre-training, which
114
+ contrasts with the unsupervised pre-training methods popularized by
115
+ \cite{Hinton2006a} and others \cite{Bengio2007,Vincent2008}. The
116
+ generalization ability of convnet features is also explored in
117
+ concurrent work by \cite{Donahue13}.
118
+
119
+
120
+
121
+ \vspace{-0mm}
122
+ \subsection{Related Work}
123
+ \vspace{-0mm} Visualizing features to gain intuition about the network
124
+ is common practice, but mostly limited to the 1st layer where
125
+ projections to pixel space are possible. In higher layers this is not
126
+ the case, and there are limited methods for interpreting
127
+ activity. \cite{Erhan09} find the optimal stimulus for each unit by
128
+ performing gradient descent in image space to maximize the unit's
129
+ activation. This requires a careful initialization and does not give
130
+ any information about the unit's invariances. Motivated by the
131
+ latter's short-coming, \cite{Le10} (extending an idea by
132
+ \cite{Berkes06}) show how the Hessian of a given unit may be computed
133
+ numerically around the optimal response, giving some insight into
134
+ invariances. The problem is that for higher layers, the invariances
135
+ are extremely complex and so are poorly captured by a simple quadratic
136
+ approximation. Our approach, by contrast, provides a non-parametric
137
+ view of invariance, showing which patterns from the training set
138
+ activate the feature map. \cite{Donahue13} show
139
+ visualizations that identify patches within a dataset that are
140
+ responsible for strong activations at higher layers in the model. Our
141
+ visualizations differ in that they are not just crops of input images,
142
+ but rather top-down projections that reveal structures within each
143
+ patch that stimulate a particular feature map.
144
+
145
+ \section{Approach}
146
+
147
+
148
+ We use standard fully supervised convnet models throughout
149
+ the paper, as defined by \cite{Lecun1989} and \cite{Kriz12}. These models map a color 2D input image $x_i$,
150
+ via a series of layers, to a
151
+ probability vector $\hat{y_i}$ over the $C$ different classes. Each
152
+ layer consists of (i) convolution of the previous layer output (or, in
153
+ the case of the 1st layer, the input image) with a set of learned
154
+ filters; (ii) passing the responses through a rectified linear
155
+ function ({\em $relu(x)=\max(x,0)$}); (iii)
156
+ [optionally] max pooling over local neighborhoods and (iv) [optionally]
157
+ a local contrast operation that normalizes the responses across
158
+ feature maps. For more details of these operations, see \cite{Kriz12} and \cite{Jarrett2009}. The top few
159
+ layers of the network are conventional fully-connected networks and
160
+ the final layer is a softmax classifier. \fig{arch} shows
161
+ the model used in many of our experiments.
162
+
163
+ We train these models using a large set of $N$ labeled images
164
+ $\{x,y\}$, where label $y_i$ is a discrete variable indicating the
165
+ true class. A cross-entropy loss function, suitable for image
166
+ classification, is used to compare $\hat{y_i}$ and $y_i$. The
167
+ parameters of the network (filters in the convolutional layers, weight
168
+ matrices in the fully-connected layers and biases) are trained by
169
+ back-propagating the derivative of the loss with respect to the
170
+ parameters throughout the network, and updating the parameters via
171
+ stochastic gradient descent. Full details of training are given in
172
+ \secc{training}.
173
+
174
+ \subsection{Visualization with a Deconvnet}
175
+ Understanding the operation of a convnet requires interpreting the
176
+ feature activity in intermediate layers. We present a novel way to
177
+ {\em map these activities back to the input pixel space}, showing what
178
+ input pattern originally caused a given activation in the feature
179
+ maps. We perform this mapping with a Deconvolutional Network
180
+ (deconvnet) \cite{Zeiler11}. A deconvnet can be thought of as a
181
+ convnet model that uses the same components (filtering, pooling) but
182
+ in reverse, so instead of mapping pixels to features it does the
183
+ opposite. In \cite{Zeiler11}, deconvnets were proposed as a way of
184
+ performing unsupervised learning. Here, they are not used in any
185
+ learning capacity, just as a probe of an already trained convnet.
186
+
187
+ To examine a convnet, a deconvnet is attached to each of its layers,
188
+ as illustrated in \fig{deconv}(top), providing a continuous
189
+ path back to image pixels. To start, an input image is
190
+ presented to the convnet and features computed throughout the
191
+ layers. To examine a given convnet activation, we set all other activations in
192
+ the layer to zero and pass the feature maps as input to the attached
193
+ deconvnet layer. Then we successively (i) unpool, (ii) rectify and
194
+ (iii) filter to reconstruct the activity in the layer beneath that
195
+ gave rise to the chosen activation. This is then repeated until input
196
+ pixel space is reached.
197
+
198
+ \noindent {\bf Unpooling:} In the convnet, the max pooling operation
199
+ is non-invertible, however we can obtain an approximate inverse by
200
+ recording the locations of the maxima within each pooling region in a
201
+ set of {\em switch} variables. In the deconvnet, the unpooling
202
+ operation uses these switches to place the reconstructions from the
203
+ layer above into appropriate locations, preserving the structure of
204
+ the stimulus. See \fig{deconv}(bottom) for an illustration of the procedure.
205
+
206
+ \noindent {\bf Rectification:} The convnet uses {\em relu}
207
+ non-linearities, which rectify the feature maps thus ensuring the
208
+ feature maps are always positive. To obtain valid feature
209
+ reconstructions at each layer (which also should be positive), we pass
210
+ the reconstructed signal through a {\em relu} non-linearity.
211
+
212
+ \noindent {\bf Filtering:} The convnet uses learned filters to
213
+ convolve the feature maps from the previous layer. To invert this, the
214
+ deconvnet uses transposed versions of the same filters, but applied to the rectified maps,
215
+ not the output of the layer beneath. In
216
+ practice this means flipping each filter vertically and horizontally.
217
+
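As a concrete illustration of the switch mechanism, the sketch below pools a single 2D feature map over non-overlapping $k \times k$ regions while recording switches, and then unpools with them. It is a simplification of the pooling actually used in the model, which operates on stacks of feature maps.

\begin{verbatim}
import numpy as np

def max_pool_with_switches(a, k=2):
    """Non-overlapping k x k max pooling of a 2D map `a` (H and W
    divisible by k). Records the argmax ('switch') of every region."""
    H, W = a.shape
    pooled = np.zeros((H // k, W // k))
    switches = np.zeros((H // k, W // k), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            patch = a[i * k:(i + 1) * k, j * k:(j + 1) * k]
            switches[i, j] = patch.argmax()   # flattened in-patch index
            pooled[i, j] = patch.max()
    return pooled, switches

def unpool(pooled, switches, k=2):
    """Deconvnet unpooling: place each pooled value back at its
    recorded switch location; other positions stay zero."""
    H, W = pooled.shape
    out = np.zeros((H * k, W * k))
    for i in range(H):
        for j in range(W):
            di, dj = divmod(switches[i, j], k)
            out[i * k + di, j * k + dj] = pooled[i, j]
    return out
\end{verbatim}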
218
+
219
+ Projecting down from higher layers uses the switch settings generated
220
+ by the max pooling in the convnet on the way up. As these switch
221
+ settings are peculiar to a given input image, the reconstruction
222
+ obtained from a single activation thus resembles a small piece of the
223
+ original input image, with structures weighted according to their
224
+ contribution toward the feature activation. Since the model is
225
+ trained discriminatively, they implicitly show which parts of the
226
+ input image are discriminative. Note that these projections are {\em
227
+ not} samples from the model, since there is no generative process involved.
228
+
229
+ \begin{figure}[h!]
230
+ \vspace{-3mm}
231
+ \begin{center}
232
+ \includegraphics[width=3.3in]{conv_deconv.pdf}
233
+ \includegraphics[width=3.3in]{unpool.pdf}
234
+ \end{center}
235
+ \vspace*{-0.3cm}
236
+ \caption{Top: A deconvnet layer (left) attached to a convnet layer
237
+ (right). The deconvnet will reconstruct an approximate version of
238
+ the convnet features from the layer beneath. Bottom:
239
+ An illustration of the unpooling operation in the deconvnet, using {\em switches}
240
+ which record the location of the local max in each pooling region
241
+ (colored zones) during pooling in the convnet. }
242
+ \label{fig:deconv}
243
+ \vspace*{-0.3cm}
244
+ \end{figure}
245
+
246
+
247
+
248
+
249
+ \section{Training Details} \label{sec:training}
250
+ We now describe the large convnet model that will be visualized in
251
+ \secc{vis}. The architecture, shown in \fig{arch}, is similar to that
252
+ used by \cite{Kriz12} for ImageNet
253
+ classification. One difference is that the sparse connections used in
254
+ Krizhevsky's layers 3,4,5 (due to the model being split across 2
255
+ GPUs) are replaced with dense connections in our model. Other
256
+ important differences relating to layers 1 and 2 were made following
257
+ inspection of the visualizations in \fig{compareAlex}, as described in
258
+ \secc{selection}.
259
+
260
+ The model was trained on the ImageNet 2012 training set (1.3 million
261
+ images, spread over 1000 different classes). Each RGB image was
262
+ preprocessed by resizing the smallest dimension to 256, cropping the
263
+ center 256x256 region, subtracting the per-pixel mean (across all
264
+ images) and then using 10 different sub-crops of size 224x224
265
+ (corners $+$ center with(out) horizontal flips). Stochastic gradient
266
+ descent with a mini-batch size of 128 was used to update the
267
+ parameters, starting with a learning rate of $10^{-2}$, in conjunction
268
+ with a momentum term of $0.9$. We anneal the
269
+ learning rate throughout training manually when the validation error
270
+ plateaus. Dropout \cite{Hinton12} is used in the
271
+ fully connected layers (6 and 7) with a rate of 0.5. All weights are initialized to $10^{-2}$ and
272
+ biases are set to 0.
273
+
274
+ Visualization of the first layer filters during training reveals that
275
+ a few of them dominate, as shown in \fig{compareAlex}(a). To combat
276
+ this, we renormalize each filter in the convolutional layers whose RMS
277
+ value exceeds a fixed radius of $10^{-1}$ to this fixed radius. This
278
+ is crucial, especially in the first layer of the model, where the
279
+ input images are roughly in the [-128,128] range. As in
280
+ \cite{Kriz12}, we produce multiple different crops and flips of each
281
+ training example to boost training set size. We stopped training after
282
+ 70 epochs, which took around 12 days on a single
283
+ GTX580 GPU, using an implementation based on \cite{Kriz12}.
284
+
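+ This renormalisation step is simple to state in code (a minimal sketch,
+ assuming the filters of a layer are stored as an array whose first axis
+ indexes filters):
+ \begin{verbatim}
+ import numpy as np
+
+ def renormalize_filters(filters, radius=1e-1):
+     # Rescale any filter whose RMS value exceeds `radius` back to exactly
+     # that radius; filters below the radius are left untouched.
+     filters = filters.copy()
+     for i, f in enumerate(filters):
+         rms = np.sqrt(np.mean(f ** 2))
+         if rms > radius:
+             filters[i] = f * (radius / rms)
+     return filters
+ \end{verbatim}
+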
285
+
286
+
287
+ \section{Convnet Visualization}
288
+ \label{sec:vis}
289
+ Using the model described in \secc{training}, we now use the deconvnet
290
+ to visualize the feature activations on the ImageNet validation set.
291
+
292
+ \noindent {\bf Feature Visualization:} \fig{top9feat} shows feature visualizations from our
293
+ model once training is complete. However, instead of showing the
294
+ single strongest activation for a given feature map, we show the top 9
295
+ activations. Projecting each separately down to pixel space reveals
296
+ the different structures that excite a given feature map, hence
297
+ showing its invariance to input deformations. Alongside these
298
+ visualizations we show the corresponding image patches. These have
299
+ greater variation than visualizations as the latter solely focus on
300
+ the discriminant structure within each patch. For example, in layer 5,
301
+ row 1, col 2, the patches appear to have little in common, but the
302
+ visualizations reveal that this particular feature map focuses on the
303
+ grass in the background, not the foreground objects.
304
+
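+ The bookkeeping behind such a figure can be sketched as follows (a
+ simplified outline; \verb!feature_maps(image)! is a placeholder for a
+ forward pass returning one 2-D map per feature channel, and each of the
+ returned activations would then be projected down separately with the
+ deconvnet):
+ \begin{verbatim}
+ import heapq
+
+ def top_k_activations(images, feature_maps, channel, k=9):
+     # Track the k images (and spatial locations) that most strongly
+     # activate one feature map across a dataset.
+     best = []   # min-heap of (activation, image_index, (row, col))
+     for idx, img in enumerate(images):
+         fmap = feature_maps(img)[channel]
+         r, c = divmod(int(fmap.argmax()), fmap.shape[1])
+         item = (float(fmap.max()), idx, (r, c))
+         if len(best) < k:
+             heapq.heappush(best, item)
+         else:
+             heapq.heappushpop(best, item)
+     return sorted(best, reverse=True)
+ \end{verbatim}
+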
305
+ \onecolumn
306
+
307
+ \begin{figure*}[h!]
308
+ \begin{center}
309
+ \includegraphics[width=6.0in]{rob3_new6.pdf}
310
+ \end{center}
311
+ \vspace*{-0.3cm}
312
+ \caption{Visualization of features in a fully trained model. For
313
+ layers 2-5 we show the top 9 activations in a random subset of
314
+ feature maps across the validation data, projected down to pixel
315
+ space using our deconvolutional network approach. Our
316
+ reconstructions are {\em not} samples from the model: they are
317
+ reconstructed patterns from the validation set that cause high activations in a
318
+ given feature map. For each feature map we also show the
319
+ corresponding image patches. Note: (i) the strong grouping
320
+ within each feature map, (ii) greater invariance at higher layers
321
+ and (iii) exaggeration of discriminative parts of the image,
322
+ e.g.~eyes and noses of dogs (layer 4, row 1, col 1). Best viewed in electronic form. }
323
+ \label{fig:top9feat}
324
+ \vspace*{-0.3cm}
325
+ \end{figure*}
326
+
327
+ \twocolumn
328
+
329
+ The projections from each layer show the hierarchical nature of
330
+ the features in the network. Layer
331
+ 2 responds to corners and other edge/color conjunctions. Layer 3 has
332
+ more complex invariances, capturing similar textures (e.g.~mesh
333
+ patterns (Row 1, Col 1); text (R2,C4)). Layer 4 shows significant
334
+ variation, but is more class-specific: dog faces
335
+ (R1,C1); bird's legs (R4,C2). Layer 5 shows entire objects
336
+ with significant pose variation, e.g.~keyboards (R1,C11) and dogs (R4).
337
+
338
+
339
+
340
+ \begin{figure*}[t!]
341
+ \vspace*{-0.2cm}
342
+ \begin{center}
343
+ \includegraphics[width=7in]{arch_old.pdf}
344
+ \end{center}
345
+ \vspace*{-0.4cm}
346
+ \caption{Architecture of our 8 layer convnet model. A 224 by 224 crop of an
347
+ image (with 3 color planes) is presented as the input. This is
348
+ convolved with 96 different 1st layer filters (red), each of size 7 by 7,
349
+ using a stride of 2 in both x and y. The resulting feature maps are
350
+ then: (i) passed through a rectified linear function (not shown),
351
+ (ii) pooled (max
352
+ within 3x3 regions, using stride 2) and (iii) contrast normalized
353
+ across feature maps to give 96 different 55 by 55
354
+ element feature maps. Similar operations are repeated in layers
355
+ 2,3,4,5. The last two layers are fully connected, taking features
356
+ from the top convolutional layer as input in vector form (6 $\cdot$ 6 $\cdot$
357
+ 256 = 9216 dimensions). The final layer is a $C$-way softmax
358
+ function, $C$ being the number of classes. All filters and feature maps are square in shape.}
359
+ \label{fig:arch}
360
+ \vspace*{-0.3cm}
361
+ \end{figure*}
362
+
363
+
364
+ \noindent {\bf Feature Evolution during Training:} \fig{evolve}
365
+ visualizes the progression during training of the strongest activation
366
+ (across all training examples) within a given feature map projected
367
+ back to pixel space. Sudden jumps in appearance result from a change
368
+ in the image from which the strongest activation originates. The lower
369
+ layers of the model can be seen to converge within a few
370
+ epochs. However, the upper layers only develop after a
371
+ considerable number of epochs (40-50), demonstrating the need to let
372
+ the models train until fully converged.
373
+
374
+
375
+
376
+
377
+ \noindent {\bf Feature Invariance:} \fig{invariance} shows 5 sample images being translated, rotated and
378
+ scaled by varying degrees while looking at the changes in the feature
379
+ vectors from the top and bottom layers of the model, relative to the
380
+ untransformed feature. Small
381
+ transformations have a dramatic effect in the first layer of the model,
382
+ but a lesser impact at the top feature layer, being quasi-linear for
383
+ translation \& scaling. The network output is stable to translations
384
+ and scalings. In general, the output is not invariant to rotation,
385
+ except for objects with rotational symmetry (e.g.~entertainment center).
386
+
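+ The distances plotted in \fig{invariance} can be computed along these lines
+ (a schematic sketch; \verb!features(image, layer)! stands in for a forward
+ pass up to the given layer, and \verb!transform! for the translation,
+ rotation or scaling being applied):
+ \begin{verbatim}
+ import numpy as np
+
+ def invariance_curve(image, transform, amounts, features, layer):
+     # Euclidean distance between the feature vector of the original image
+     # and that of each transformed version, for one layer of the model.
+     base = features(image, layer).ravel()
+     return [np.linalg.norm(features(transform(image, a), layer).ravel()
+                            - base)
+             for a in amounts]
+ \end{verbatim}
+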
387
+
388
+
389
+
390
+
391
+ \begin{figure*}[t!]
392
+ \begin{center}
393
+ \includegraphics[width=7in]{rob2_small.pdf}
394
+ \end{center}
395
+ \vspace*{-0.3cm}
396
+ \caption{Evolution of a randomly chosen subset of model features through training. Each layer's
397
+ features are displayed in a different block. Within each
398
+ block, we show a randomly chosen subset of features at epochs [1,2,5,10,20,30,40,64]. The
399
+ visualization shows the strongest activation (across all
400
+ training examples) for a given feature map, projected down to pixel
401
+ space using our deconvnet approach. Color contrast is artificially
402
+ enhanced and the figure is best viewed in
403
+ electronic form.}
404
+ \label{fig:evolve}
405
+ \vspace*{-0.3cm}
406
+ \end{figure*}
407
+
408
+ \begin{figure*}[t!]
409
+ \begin{center}
410
+ \includegraphics[width=6.5in]{invariance_small.pdf}
411
+ \end{center}
412
+ \vspace*{-0.6cm}
413
+ \caption{Analysis of vertical translation, scale, and rotation invariance within the
414
+ model (rows a-c respectively). Col 1: 5 example images undergoing
415
+ the transformations. Col 2 \& 3: Euclidean distance between feature
416
+ vectors from the original and transformed images in layers 1 and 7
417
+ respectively. Col 4: the probability of the true label for each
418
+ image, as the image is transformed. }
419
+ \label{fig:invariance}
420
+ \vspace*{-0.3cm}
421
+ \end{figure*}
422
+
423
+ \subsection{Architecture Selection} \label{sec:selection} While visualization
424
+ of a trained model gives insight into its operation, it can also
425
+ assist with selecting good architectures in the first place. By
426
+ visualizing the first and second layers of Krizhevsky \etal's
427
+ architecture (\fig{compareAlex}(b) \& (d)), various problems are
428
+ apparent. The first layer filters are a mix of extremely high and low
429
+ frequency information, with little coverage of the mid
430
+ frequencies. Additionally, the 2nd layer visualization shows aliasing
431
+ artifacts caused by the large stride 4 used in the 1st layer
432
+ convolutions. To remedy these problems, we (i) reduced the 1st layer
433
+ filter size from 11x11 to 7x7 and (ii) made the stride of the
434
+ convolution 2, rather than 4. This new architecture retains much more
435
+ information in the 1st and 2nd layer features, as shown in
436
+ \fig{compareAlex}(c) \& (e). More importantly, it also improves the classification
437
+ performance as shown in \secc{modelsizes}.
438
+
439
+
440
+
441
+
442
+ \begin{figure*}[t!]
443
+ \begin{center}
444
+ \includegraphics[width=6.3in]{rob4_small.pdf}
445
+ \end{center}
446
+ \vspace*{-0.3cm}
447
+ \caption{(a): 1st layer features without feature scale clipping. Note
448
+ that one feature dominates. (b): 1st layer features from
449
+ \cite{Kriz12}. (c): Our 1st layer features. The smaller stride (2 vs
450
+ 4) and filter size (7x7 vs 11x11) results in more distinctive
451
+ features and fewer ``dead'' features. (d): Visualizations of 2nd
452
+ layer features from \cite{Kriz12}. (e):
453
+ Visualizations of our 2nd layer features. These are cleaner, with none of the
454
+ aliasing artifacts that are visible in (d). }
455
+ \label{fig:compareAlex}
456
+ \vspace*{-0.3cm}
457
+ \end{figure*}
458
+
459
+ \subsection{Occlusion Sensitivity}
460
+ With image classification approaches, a natural question is whether the
461
+ model is truly identifying the location of the object in the image, or
462
+ just using the surrounding context. \fig{block_expt} attempts to
463
+ answer this question by systematically occluding different portions of
464
+ the input image with a grey square, and monitoring the output of the
465
+ classifier. The examples clearly show the model is localizing the
466
+ objects within the scene, as the probability of the correct class
467
+ drops significantly when the object is occluded. \fig{block_expt} also
468
+ shows visualizations from the strongest feature map of the top
469
+ convolution layer, in addition to
470
+ activity in this map (summed over spatial locations) as a function of occluder position. When the
471
+ occluder covers the image region that appears in the visualization, we
472
+ see a strong drop in activity in the feature map. This shows that the
473
+ visualization genuinely corresponds to the image structure that
474
+ stimulates that feature map, hence validating the other visualizations
475
+ shown in \fig{evolve} and \fig{top9feat}.
476
+
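+ The occlusion sweep itself is straightforward (a sketch; the patch size,
+ stride and grey value are illustrative choices, and \verb!predict_prob! is
+ a placeholder for a forward pass returning class probabilities):
+ \begin{verbatim}
+ import numpy as np
+
+ def occlusion_sweep(image, true_class, predict_prob,
+                     patch=50, stride=10, grey=0.0):
+     # Slide a grey square over the image and record, for every position,
+     # the probability assigned to the true class. Low values mark regions
+     # the classifier relies on.
+     H, W, _ = image.shape
+     heat = []
+     for top in range(0, H - patch + 1, stride):
+         row = []
+         for left in range(0, W - patch + 1, stride):
+             occluded = image.copy()
+             occluded[top:top + patch, left:left + patch, :] = grey
+             row.append(predict_prob(occluded)[true_class])
+         heat.append(row)
+     return np.array(heat)
+ \end{verbatim}
+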
477
+ \begin{figure*}[t!]
478
+ \begin{center}
479
+ \includegraphics[width=7.0in]{rob1_3.pdf}
480
+ \end{center}
481
+ \vspace*{-0.3cm}
482
+ \caption{Three test examples where we systematically cover up different portions
483
+ of the scene with a gray square (1st column) and see how the top
484
+ (layer 5) feature maps ((b) \& (c)) and classifier output ((d) \& (e)) changes. (b): for each
485
+ position of the gray square, we record the total activation in one
486
+ layer 5 feature map (the one with the strongest response in the
487
+ unoccluded image). (c): a visualization of this feature map projected
488
+ down into the input image (black square), along with visualizations of
489
+ this map from other images. The first row example shows the strongest feature
490
+ to be the dog's face. When this is covered up, the activity in the
491
+ feature map decreases (blue area in (b)). (d): a map of correct class
492
+ probability, as a function of the position of the gray
493
+ square. E.g.~when the dog's face is obscured, the probability for
494
+ ``pomeranian'' drops significantly. (e): the most probable label as a
495
+ function of occluder position. E.g.~in the 1st row, for most locations
496
+ it is ``pomeranian'', but if the dog's face is obscured but not the
497
+ ball, then it predicts ``tennis ball''. In the 2nd example, text on
498
+ the car is the strongest feature in layer 5, but the classifier is
499
+ most sensitive to the wheel. The 3rd example contains multiple
500
+ objects. The strongest feature in layer 5 picks out the faces, but the
501
+ classifier is sensitive to the dog (blue region in (d)),
502
+ since it uses multiple feature maps.}
503
+ \label{fig:block_expt}
504
+ \vspace*{-0.3cm}
505
+ \end{figure*}
506
+
507
+ \subsection{Correspondence Analysis}
508
+ \vspace{-2mm}
509
+ Deep models differ from many existing recognition approaches in that
510
+ there is no explicit mechanism for establishing correspondence between
511
+ specific object parts in different images (e.g. faces have a particular
512
+ spatial configuration of the eyes and nose). However, an intriguing possibility is that deep models might
513
+ be {\em implicitly} computing them. To explore this, we take 5
514
+ randomly drawn dog images with frontal pose and systematically mask out
515
+ the same part of the face in each image (e.g.~all left eyes, see \fig{corr_ims}). For each
516
+ image $i$, we then compute: $\epsilon^l_i = x^l_i - \tilde{x}^l_i$,
517
+ where $x^l_i$ and $\tilde{x}^l_i$ are the feature vectors at layer $l$
518
+ for the original and occluded images respectively. We then measure the
519
+ consistency of this difference vector $\epsilon$ between all related image
520
+ pairs $(i,j)$: $\Delta_l = \sum_{i,j=1, i \neq j}^{5} \mathcal{H}(
521
+ \text{sign}(\epsilon^l_i),\text{sign}(\epsilon^l_j))$, where
522
+ $\mathcal{H}$ is Hamming distance. A lower value indicates
523
+ greater consistency in the change resulting from the masking
524
+ operation, hence tighter correspondence between the same object parts
525
+ in different images (i.e.~blocking the left eye changes the
526
+ feature representation in a consistent way). In \tab{corr} we compare the $\Delta$ score for
527
+ three parts of the face (left eye, right eye and nose) to random parts
528
+ of the object, using features from layers $l=5$ and $l=7$. For the layer $5$
529
+ features, the lower score for these parts, relative to random object regions,
530
+ shows that the model does establish some degree of correspondence.
531
+
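+ The consistency measure follows directly from the definition (a small NumPy
+ sketch; \verb!feats[i]! and \verb!feats_occluded[i]! stand for the layer-$l$
+ feature vectors $x^l_i$ and $\tilde{x}^l_i$, and constant factors, such as
+ counting each pair once rather than twice, do not affect the comparison):
+ \begin{verbatim}
+ import numpy as np
+ from itertools import combinations
+
+ def correspondence_score(feats, feats_occluded):
+     # Sum over image pairs (i, j) of the (normalised) Hamming distance
+     # between sign(eps_i) and sign(eps_j), with eps_i = x_i - x_tilde_i.
+     eps_signs = [np.sign(f - fo) for f, fo in zip(feats, feats_occluded)]
+     total = 0.0
+     for i, j in combinations(range(len(eps_signs)), 2):
+         total += np.mean(eps_signs[i] != eps_signs[j])
+     return total
+ \end{verbatim}
+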
532
+ \begin{figure}[t!]
533
+ \vspace{-0mm}
534
+ \begin{center}
535
+ \includegraphics[width=3.2in]{corr_ims.pdf}
536
+ \end{center}
537
+ \vspace*{-0.4cm}
538
+ \caption{Images used for correspondence experiments. Col 1: Original
539
+ image. Col 2,3,4: Occlusion of the right eye, left eye, and nose
540
+ respectively. Other columns show examples of random occlusions.}
541
+ \label{fig:corr_ims}
542
+ \vspace*{-0.0cm}
543
+ \end{figure}
544
+
545
+
546
+ \begin{table}[!h]
547
+ \vspace*{-0mm}
548
+ \begin{center}
549
+ \small
550
+ \begin{tabular}{|l||c|c|}
551
+ \hline
552
+ & Mean Feature & Mean Feature \\ & Sign Change & Sign Change \\ Occlusion Location & Layer 5 & Layer 7 \\
553
+ \hline
554
+ Right Eye & $0.067 \pm 0.007 $ & $0.069 \pm 0.015 $\\ \hline
555
+ Left Eye & $0.069 \pm 0.007 $ & $0.068 \pm 0.013 $\\ \hline
556
+ Nose & $0.079 \pm 0.017 $ & $0.069 \pm 0.011 $\\ \hline \hline
557
+ Random & $0.107 \pm 0.017 $ & $0.073 \pm 0.014 $\\ \hline
558
+
559
+ \end{tabular}
560
+ \vspace*{-0mm}
561
+ \caption{Measure of correspondence for different object parts in 5
562
+ different dog images. The lower scores for the eyes and nose
563
+ (compared to random object parts) show
564
+ the model implicitly establishing some form of correspondence
565
+ of parts at layer 5 in the model. At layer 7, the scores are more
566
+ similar, perhaps due to upper layers trying to discriminate between the
567
+ different breeds of dog. }
568
+ \label{tab:corr}
569
+ \vspace*{-5mm}
570
+ \end{center}
571
+ \end{table}
572
+
573
+ \section{Experiments}
574
+
575
+ \subsection{ImageNet 2012}
576
+ \label{ImageNet}
577
+ This dataset consists of 1.3M/50k/100k training/validation/test
578
+ examples, spread over 1000 categories. \tab{ImageNet} shows our results
579
+ on this dataset.
580
+
581
+ Using the exact architecture specified in \cite{Kriz12}, we attempt to replicate their result on the validation set. We achieve an
582
+ error rate within $0.1\%$ of their reported value on the
583
+ ImageNet 2012 validation set.
584
+
585
+
586
+
587
+ Next we analyze the performance of our model with the architectural
588
+ changes outlined in \secc{selection} ($7\times7$ filters in layer 1 and
589
+ stride $2$ convolutions in layers 1 \& 2). This model, shown in
590
+ \fig{arch}, significantly outperforms the architecture of
591
+ \cite{Kriz12}, beating their single model result by $1.7\%$ (test top-5). When we combine
592
+ multiple models, we obtain a test error of {\bf $14.8\%$, the best
593
+ published performance on this dataset}\footnote{This performance has
594
+ been surpassed in the recent Imagenet 2013 competition (\url{http://www.image-net.org/challenges/LSVRC/2013/results.php}).} (despite only using the 2012 training
595
+ set). We note that this error is almost half that of the top non-convnet
596
+ entry in the ImageNet 2012 classification challenge, which obtained
597
+ $26.2\%$ error \cite{ISI}.
598
+
599
+ \begin{table}[h!]
600
+ \scriptsize
601
+ \vspace*{-3mm}
602
+ \begin{center}
603
+ \begin{tabular}{|l|l|l|l|}
604
+ \hline
605
+ & Val & Val & Test \\
606
+ Error \% & Top-1 & Top-5 & Top-5 \\
607
+ \hline \hline
608
+ \cite{ISI} & - & - & $26.2$ \\ \hline
609
+ \cite{Kriz12}, 1 convnet & $40.7$ & $18.2$ & $--$ \\
610
+ \cite{Kriz12}, 5 convnets & $38.1$ & $16.4$ & $16.4$ \\
611
+ \cite{Kriz12}$^*$, 1 convnet & $39.0$ & $16.6$ & $--$ \\
612
+ \cite{Kriz12}$^*$, 7 convnets & $36.7$ & $15.4$ & $15.3$ \\
613
+ \hline \hline Our replication of & & & \\
614
+ \cite{Kriz12}, 1 convnet & $40.5$ & $18.1$ & $--$ \\
615
+ \hline 1 convnet as per \fig{arch} & $38.4$ & $16.5$ & $--$ \\
616
+ \hline 5 convnets as per \fig{arch} -- (a) & $36.7$ & $15.3$ & $15.3$ \\
617
+ \hline 1 convnet as per \fig{arch} but with & & & \\ layers 3,4,5: 512,1024,512
618
+ maps -- (b)& $37.5 $ & $16.0$ & $16.1$ \\
619
+ \hline 6 convnets, (a) \& (b) combined & $\bf{36.0} $ & $\bf{14.7}$ & $\bf{14.8}$ \\
620
+ \hline
621
+ \end{tabular}
622
+ \vspace*{-2mm}
623
+ \caption{ImageNet 2012 classification error rates. The $*$ indicates
624
+ models that were trained on both ImageNet 2011 and 2012 training sets.}
625
+ \label{tab:ImageNet}
626
+ \vspace*{-3mm}
627
+ \end{center}
628
+ \end{table}
629
+
630
+
631
+
632
+
633
+
634
+ \noindent {\bf Varying ImageNet Model Sizes:}
635
+ \label{sec:modelsizes} In \tab{modelSizes}, we first explore the
636
+ architecture of \cite{Kriz12} by adjusting the size of layers, or
637
+ removing them entirely. In each case, the model is trained from
638
+ scratch with the revised architecture. Removing the fully connected
639
+ layers (6,7) only gives a slight increase in error. This is surprising, given that they
640
+ contain the majority of model parameters. Removing two of
641
+ the middle convolutional layers also makes a relatively small
642
+ difference to the error rate. However, removing both the middle
643
+ convolution layers and the fully connected layers yields a model with
644
+ only 4 layers whose performance is dramatically worse. This would
645
+ suggest that the overall depth of the model is important for
646
+ obtaining good performance. In \tab{modelSizes}, we modify our
647
+ model, shown in \fig{arch}. Changing the size of the fully connected
648
+ layers makes little difference to performance (same for model of
649
+ \cite{Kriz12}). However, increasing the size of the middle convolution layers
650
+ does give a useful gain in performance. However, increasing these while also
651
+ enlarging the fully connected layers results in over-fitting.
652
+
653
+ \begin{table}[h!]
654
+ \scriptsize
655
+ \vspace*{0mm}
656
+ \begin{center}
657
+ \begin{tabular}{|l|l|l|l|}
658
+ \hline
659
+ & Train & Val & Val \\
660
+ Error \% & Top-1 & Top-1 & Top-5 \\ \hline
661
+ \hline Our replication of & & & \\
662
+ \cite{Kriz12}, 1 convnet & $35.1$ & $40.5$ & $18.1$ \\
663
+ \hline Removed layers 3,4 & $41.8 $ & $45.4 $ & $22.1 $ \\
664
+ \hline Removed layer 7 & $27.4 $ & $40.0$ & $18.4 $ \\
665
+ \hline Removed layers 6,7 & $27.4 $ & $44.8 $ & $22.4 $ \\
666
+ \hline
667
+ Removed layer 3,4,6,7 & $71.1$ & $71.3$ & $50.1$ \\
668
+ \hline Adjust layers 6,7: 2048 units & $40.3 $ & $41.7 $ & $18.8 $ \\
669
+ \hline Adjust layers 6,7: 8192 units & $26.8 $ & $40.0$ & $18.1$ \\
670
+ \hline \hline \hline Our Model (as per \fig{arch}) & $33.1 $ & $38.4 $ & $16.5 $ \\
671
+ \hline Adjust layers 6,7: 2048 units & $38.2 $ & $40.2 $ & $17.6 $ \\
672
+ \hline Adjust layers 6,7: 8192 units & $22.0 $ & $38.8 $ & $17.0 $ \\
673
+ \hline Adjust layers 3,4,5: 512,1024,512 maps & $18.8 $ & $\bf{37.5} $ & $\bf{16.0} $ \\
674
+ \hline Adjust layers 6,7: 8192 units and & & & \\ Layers 3,4,5: 512,1024,512 maps & $\bf{10.0} $ & $38.3 $ & $16.9 $ \\
675
+ \hline
676
+ \end{tabular}
677
+ \vspace*{-2mm}
678
+ \caption{ImageNet 2012 classification error rates with various
679
+ architectural changes to the model of \cite{Kriz12} and our model
680
+ (see \fig{arch}). }
681
+ \label{tab:modelSizes}
682
+ \vspace*{-8mm}
683
+ \end{center}
684
+ \end{table}
685
+
686
+ \subsection{Feature Generalization}
687
+ \vspace*{-2mm}
688
+ The experiments above show the importance of the convolutional part of
689
+ our ImageNet model in obtaining state-of-the-art performance.
690
+ This is supported by the visualizations of \fig{top9feat} which show
691
+ the complex invariances learned in the convolutional layers. We now
692
+ explore the ability of these feature extraction layers to generalize
693
+ to other datasets, namely Caltech-101 \cite{caltech101}, Caltech-256
694
+ \cite{caltech256} and PASCAL
695
+ VOC 2012. To do this, we keep layers 1-7 of our ImageNet-trained model
696
+ fixed and train a new softmax classifier on top (for the appropriate
697
+ number of classes) using the training images of the new dataset. Since
698
+ the softmax contains relatively few parameters, it can be trained
699
+ quickly from a relatively small number of examples, as is the case for
700
+ certain datasets.
701
+
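+ In current terminology this is a linear probe on frozen features; a minimal
+ sketch (scikit-learn's logistic regression is used here purely as a
+ stand-in softmax classifier, and \verb!extract_features! denotes the frozen
+ layer-7 forward pass returning one feature vector per image):
+ \begin{verbatim}
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+
+ def fit_softmax_on_frozen_features(train_images, train_labels,
+                                    extract_features):
+     # Layers 1-7 stay fixed; only a new classifier is trained for the
+     # target dataset's classes.
+     X = np.stack([extract_features(img) for img in train_images])
+     clf = LogisticRegression(max_iter=1000)
+     return clf.fit(X, train_labels)
+ \end{verbatim}
+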
702
+ The classifiers used by our model (a softmax) and other approaches
703
+ (typically a linear SVM) are of similar complexity, thus the
704
+ experiments compare our feature representation, learned from ImageNet,
705
+ with the hand-crafted features used by other methods. It is important
706
+ to note that {\em both} our feature representation and the
707
+ hand-crafted features are designed using images beyond the Caltech and
708
+ PASCAL training sets. For example, the hyper-parameters in HOG
709
+ descriptors were determined through systematic experiments on a
710
+ pedestrian dataset \cite{Dalal05}.
711
+ We also try a second strategy of training a model from scratch,
712
+ i.e.~resetting layers 1-7 to random values and training them, as well as
713
+ the softmax, on the training images of the dataset.
714
+
715
+ One complication is that some of the Caltech datasets contain images that are
716
+ also in the ImageNet training data. Using normalized correlation, we
717
+ identified these few ``overlap'' images\footnote{ For
718
+ Caltech-101, we found 44 images in common (out of 9,144 total images),
719
+ with a maximum overlap of 10 for any given class. For Caltech-256, we
720
+ found 243 images in common (out of 30,607 total images), with a
721
+ maximum overlap of 18 for any given class.} and removed them from our
722
+ Imagenet training set and then retrained our Imagenet models, thus avoiding the
723
+ possibility of train/test contamination.
724
+
725
+
726
+
727
+
728
+
729
+
730
+
731
+
732
+ \vspace{3mm}
733
+ \noindent {\bf Caltech-101:} We follow the procedure of
734
+ \cite{caltech101} and randomly select 15 or 30 images per class for
735
+ training and test on up to 50 images per class, reporting the average
736
+ of the per-class accuracies in \tab{caltech101}, using 5 train/test
737
+ folds. Training took 17 minutes for 30 images/class. The
738
+ pre-trained model beats the best reported result for 30 images/class
739
+ from \cite{Bo13} by 2.2\%. The convnet model trained from
740
+ scratch, however, does terribly, only achieving 46.5\%.
741
+
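+ For reference, the reported number is the mean of per-class accuracies,
+ averaged over the train/test folds; a small sketch of that metric:
+ \begin{verbatim}
+ import numpy as np
+
+ def mean_per_class_accuracy(y_true, y_pred):
+     # Average the accuracy computed separately within each class, so that
+     # classes with more test images do not dominate the score.
+     y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
+     classes = np.unique(y_true)
+     return np.mean([np.mean(y_pred[y_true == c] == c) for c in classes])
+ \end{verbatim}
+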
742
+
743
+
744
+
745
+
746
+
747
+
748
+
749
+
750
+ \begin{table}[h!]
751
+ \small
752
+ \vspace*{-3mm}
753
+ \begin{center}
754
+ \begin{tabular}{|l|l|l|}
755
+ \hline
756
+ & Acc \% & Acc \% \\
757
+ \# Train& 15/class & 30/class \\
758
+ \hline
759
+ \cite{Bo13} & $-$ & $81.4 \pm 0.33$ \\
760
+ \cite{Yang09} & $73.2$ & $84.3 $ \\
761
+ \hline \hline \hline Non-pretrained convnet & $22.8 \pm 1.5$ & $46.5 \pm 1.7$ \\
762
+ \hline ImageNet-pretrained convnet & $\bf{83.8 \pm 0.5}$ & $\bf{86.5 \pm 0.5}$ \\
763
+
764
+ \hline
765
+ \end{tabular}
766
+ \vspace*{-3mm}
767
+ \caption{Caltech-101 classification accuracy for our convnet models,
768
+ against two leading alternate approaches.}
769
+ \label{tab:caltech101}
770
+ \vspace*{-3mm}
771
+ \end{center}
772
+ \end{table}
773
+
774
+ \noindent {\bf Caltech-256:} We follow the procedure of
775
+ \cite{caltech256}, selecting 15, 30, 45, or 60 training images per
776
+ class, reporting the average of the per-class accuracies in
777
+ \tab{caltech256}. Our ImageNet-pretrained model beats the current
778
+ state-of-the-art results obtained by Bo \etal \cite{Bo13} by a
779
+ significant margin: 74.2\% vs 55.2\% for 60 training
780
+ images/class. However, as with Caltech-101, the model trained from
781
+ scratch does poorly. In \fig{256plot}, we explore the ``one-shot
782
+ learning'' \cite{caltech101} regime. With our pre-trained model, just
783
+ 6 Caltech-256 training images are needed to beat the leading method
784
+ using 10 times as many images. This shows the power of the ImageNet
785
+ feature extractor.
786
+ \begin{table}[h!]
787
+ \small
788
+ \vspace*{0mm}
789
+ \begin{center}
790
+ \tabcolsep=0.07cm
791
+ \begin{tabular}{|l|l|l|l|l|}
792
+ \hline
793
+ & Acc \% & Acc \% & Acc \% & Acc \%\\
794
+ \# Train & 15/class & 30/class & 45/class & 60/class \\
795
+ \hline
796
+ \cite{Sohn11} & $35.1$ & $42.1 $ & $45.7$ & $47.9$ \\
797
+ \cite{Bo13} & $40.5 \pm 0.4$ & $48.0 \pm 0.2$ & $51.9 \pm 0.2$ & $55.2 \pm 0.3$ \\
798
+ \hline \hline
799
+
800
+ \hline Non-pretr. & $9.0 \pm 1.4$ & $22.5 \pm 0.7$ & $31.2 \pm 0.5$ & $38.8 \pm 1.4$ \\
801
+ \hline ImageNet-pretr. & $\bf{65.7 \pm 0.2}$ & $\bf{70.6 \pm 0.2}$ & $\bf{72.7 \pm 0.4}$ & $\bf{74.2 \pm 0.3}$ \\
802
+ \hline
803
+ \end{tabular}
804
+ \vspace*{-3mm}
805
+ \caption{Caltech 256 classification accuracies.}
806
+ \label{tab:caltech256}
807
+ \vspace*{-4mm}
808
+ \end{center}
809
+ \end{table}
810
+
811
+ \begin{figure}[t!]
812
+ \begin{center}
813
+ \includegraphics[width=2.5in]{Caltech256Plot.pdf}
814
+ \end{center}
815
+ \vspace*{-0.3cm}
816
+ \caption{Caltech-256 classification performance as the number of
817
+ training images per class is varied. Using only 6 training examples
818
+ per class with our pre-trained feature extractor, we surpass the best
819
+ result reported by \cite{Bo13}. }
820
+ \label{fig:256plot}
821
+ \vspace*{-0.4cm}
822
+ \end{figure}
823
+
824
+
825
+
826
+
827
+
828
+
829
+
830
+
831
+
832
+
833
+ \vspace{2mm}
834
+ \noindent {\bf PASCAL 2012:} We used the standard training and validation
835
+ images to train a 20-way softmax on top of the ImageNet-pretrained
836
+ convnet. This is not ideal, as PASCAL images can contain multiple
837
+ objects and our model just provides a single exclusive prediction for each
838
+ image. \tab{pascal} shows the results on the test set. The PASCAL
839
+ and ImageNet images are quite different in nature, the former being
840
+ full scenes unlike the latter. This may explain our
841
+ mean performance being $3.2\%$ lower than the leading \cite{nus}
842
+ result; however, we do beat them on 5 classes, sometimes by large margins.
843
+
844
+
845
+
846
+
847
+ \begin{table}[!h]
848
+ \tiny
849
+ \begin{center}
850
+ \begin{tabular}{|l|l|l|l||l|l|l|l|}
851
+ \hline
852
+ Acc \% & [A] & [B] & Ours & Acc \% & [A] & [B] & Ours \\ \hline
853
+ \hline
854
+ Airplane& 92.0 & {\bf 97.3} & 96.0 &Dining tab &63.2 & {\bf 77.8}&67.7 \\ \hline
855
+ Bicycle & 74.2 & {\bf 84.2} & 77.1 &Dog& 68.9 & 83.0 & {\bf 87.8} \\ \hline
856
+ Bird& 73.0 & 80.8 & {\bf 88.4}& Horse & 78.2 & {\bf 87.5} & 86.0 \\ \hline
857
+ Boat & 77.5 & 85.3 & {\bf 85.5}&Motorbike & 81.0 & {\bf 90.1} & 85.1 \\ \hline
858
+ Bottle& 54.3 & {\bf 60.8} & 55.8&Person & 91.6 & {\bf 95.0} & 90.9 \\ \hline
859
+ Bus& 85.2 & {\bf 89.9} & 85.8& Potted pl & 55.9 & {\bf 57.8} & 52.2\\ \hline
860
+ Car&81.9&{\bf 86.8} & 78.6&Sheep & 69.4 & 79.2 & {\bf 83.6} \\ \hline
861
+ Cat&76.4 &89.3 & {\bf 91.2}& Sofa & 65.4 & {\bf 73.4} & 61.1 \\ \hline
862
+ Chair&65.2&{\bf 75.4}& 65.0&Train & 86.7 & {\bf 94.5} & 91.8 \\ \hline
863
+ Cow&63.2&{\bf 77.8}& 74.4& Tv & 77.4 & {\bf 80.7} & 76.1 \\ \hline \hline
864
+ Mean & 74.3 & {\bf 82.2} & 79.0 & \# won & 0 & {\bf 15} &5 \\ \hline
865
+ \end{tabular}
866
+ \vspace*{-3mm}
867
+ \caption{PASCAL 2012 classification results, comparing our
868
+ Imagenet-pretrained convnet against the leading two methods ([A]=
869
+ \cite{cvc} and [B] = \cite{nus}). }
870
+ \label{tab:pascal}
871
+ \vspace*{-4mm}
872
+ \end{center}
873
+ \end{table}
874
+
875
+
876
+
877
+
878
+
879
+ \subsection{Feature Analysis} \label{sec:pretrain} We explore how
880
+ discriminative the features in each layer of our Imagenet-pretrained
881
+ model are. We do this by varying the number of layers retained from
882
+ the ImageNet model and placing either a linear SVM or softmax classifier
883
+ on top. \tab{supretrain} shows results on Caltech-101 and
884
+ Caltech-256. For both datasets, a steady improvement can be seen as we
885
+ ascend the model, with best results being obtained by using all
886
+ layers. This supports the premise that as the feature hierarchies
887
+ become deeper, they learn increasingly powerful features.
888
+
889
+
890
+
891
+
892
+ \begin{table}[h!]
893
+ \small
894
+ \vspace*{-2mm}
895
+ \begin{center}
896
+ \tabcolsep=0.11cm
897
+ \begin{tabular}{|l|l|l|}
898
+ \hline
899
+
900
+
901
+ & Cal-101 & Cal-256 \\
902
+ & (30/class) & (60/class) \\
903
+ \hline SVM (1) & $44.8 \pm 0.7$ & $24.6 \pm 0.4$ \\
904
+ \hline SVM (2) & $66.2 \pm 0.5$ & $39.6 \pm 0.3$ \\
905
+ \hline SVM (3) & $72.3 \pm 0.4$ & $46.0 \pm 0.3$ \\
906
+ \hline SVM (4) & $76.6 \pm 0.4$ & $51.3 \pm 0.1$ \\
907
+ \hline SVM (5) & $\bf{86.2 \pm 0.8}$ & $65.6 \pm 0.3$ \\
908
+ \hline SVM (7) & $\bf{85.5 \pm 0.4}$ & $\bf{71.7 \pm 0.2}$ \\
909
+ \hline Softmax (5) & $82.9 \pm 0.4$ & $65.7 \pm 0.5$ \\
910
+ \hline Softmax (7) & $\bf{85.4 \pm 0.4}$ & $\bf{72.6 \pm 0.1}$ \\
911
+
912
+
913
+
914
+
915
+
916
+ \hline
917
+ \end{tabular}
918
+ \vspace*{0mm}
919
+ \caption{Analysis of the discriminative information contained in each
920
+ layer of feature maps within our ImageNet-pretrained convnet. We
921
+ train either a linear SVM or softmax on features from different
922
+ layers (as indicated in brackets) from the convnet. Higher layers generally
923
+ produce more discriminative features.}
924
+ \label{tab:supretrain}
925
+ \vspace*{-8mm}
926
+ \end{center}
927
+ \end{table}
928
+
929
+
930
+
931
+
932
+
933
+
934
+
935
+
936
+
937
+
938
+ \section{Discussion}
939
+ We explored large convolutional neural network models, trained for
940
+ image classification, in a number of ways. First, we presented a novel
941
+ way to visualize the activity within the model. This reveals the
942
+ features to be far from random, uninterpretable patterns. Rather, they
943
+ show many intuitively desirable properties such as compositionality,
944
+ increasing invariance and class discrimination as we ascend the
945
+ layers. We also showed how these visualizations can be used to debug
946
+ problems with the model to obtain better results, for example
947
+ improving on Krizhevsky \etal's \cite{Kriz12} impressive ImageNet 2012
948
+ result. We then demonstrated through a series of occlusion experiments that
949
+ the model, while trained for classification, is highly sensitive to
950
+ local structure in the image and is not just using broad scene
951
+ context. An ablation study on the model revealed that having a minimum
952
+ depth to the network, rather than any individual section, is vital to
953
+ the model's performance.
954
+
955
+ Finally, we showed how the ImageNet trained model can generalize
956
+ well to other datasets. For Caltech-101 and Caltech-256,
957
+ the datasets are similar enough that we can beat the best reported
958
+ results, in the latter case by a significant margin.
959
+ This result brings into question the utility of benchmarks with small
960
+ (i.e.~$<10^4$) training sets. Our convnet model generalized less well
961
+ to the PASCAL data, perhaps suffering from dataset bias
962
+ \cite{Torralba11}, although it was still within $3.2\%$ of the best reported
963
+ result, despite no tuning for the task. For example, our performance
964
+ might improve if a different loss function was used that permitted
965
+ multiple objects per image. This would naturally enable the networks
966
+ to tackle object detection as well.
967
+
968
+
969
+
970
+
971
+
972
+
973
+
974
+
975
+ \section*{Acknowledgments}
976
+ The authors are very grateful for support by NSF grant IIS-1116923, Microsoft Research
977
+ and a Sloan Fellowship.
978
+
979
+
980
+
981
+
982
+
983
+
984
+
985
+
986
+
987
+
988
+
989
+
990
+ \bibliography{demystify}
991
+ \bibliographystyle{icml2013}
992
+
993
+ \end{document}
papers/1312/1312.1445.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1312/1312.6034.tex ADDED
@@ -0,0 +1,307 @@
1
+ \pdfoutput=1
2
+
3
+ \documentclass{article}
4
+ \usepackage{nips13submit_e,times}
5
+ \usepackage{graphicx}
6
+ \usepackage{xcolor}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{xspace}
10
+ \usepackage{multirow}
11
+ \usepackage[numbers,sort]{natbib}
12
+ \usepackage[
13
+ pagebackref=true,
14
+ breaklinks=true,
15
+ colorlinks,
16
+ bookmarks=false]{hyperref}
17
+ \usepackage[compact]{titlesec}
18
+
19
+ \def\ie{\emph{i.e.\!}\xspace}
20
+ \def\eg{\emph{e.g.\!}\xspace}
21
+ \def\Eg{\emph{E.g.\!}\xspace}
22
+ \def\etal{\emph{et al.\!}\xspace}
23
+ \DeclareMathOperator{\Ncal}{\mathcal{N}}
24
+ \DeclareMathOperator{\Rcal}{\mathcal{R}}
25
+ \newcommand{\figref}[1]{Fig.~\ref{#1}}
26
+ \newcommand{\tblref}[1]{Table~\ref{#1}}
27
+ \newcommand{\sref}[1]{Sect.~\ref{#1}}
28
+ \newcommand{\bx}{\mathbf{x}}
29
+ \newcommand{\bw}{\mathbf{w}}
30
+ \newcommand{\red}[1]{{\bf\color{red} #1}}
31
+ \newcommand{\sgn}{\operatorname{sgn}}
32
+
33
+
34
+ \DeclareMathOperator{\xvec}{\mathrm{\textbf{x}}}
35
+ \DeclareMathOperator{\pvec}{\mathrm{\textbf{p}}}
36
+ \DeclareMathOperator{\qvec}{\mathrm{\textbf{q}}}
37
+ \DeclareMathOperator{\indic}{\mathbf{1}}
38
+
39
+ \titlespacing{\section}{0pt}{0pt}{0pt}
40
+ \titlespacing{\subsection}{0pt}{0pt}{0pt}
41
+
42
+ \title{Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps}
43
+
44
+ \author{
45
+ Karen Simonyan \\
46
+ \And
47
+ Andrea Vedaldi \\
48
+ \And
49
+ Andrew Zisserman
50
+ }
51
+
52
+ \nipsfinalcopy
53
+
54
+ \setlength{\textfloatsep}{5pt plus 1.0pt minus 2.0pt}
55
+ \setlength{\floatsep}{2pt plus 1.0pt minus 2.0pt}
56
+
57
+ \begin{document}
58
+
59
+ \maketitle
60
+
61
+ \vspace{-3.5em}
62
+ \begin{center}
63
+ Visual Geometry Group, University of Oxford\\
64
+ \verb!{karen,vedaldi,az}@robots.ox.ac.uk!
65
+ \end{center}
66
+ \vspace{1em}
67
+
68
+ \begin{abstract}
69
+ This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets).
70
+ We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image.
71
+ The first one generates an image, which maximises the class score~\cite{Erhan09}, thus visualising the notion of the class, captured by a ConvNet.
72
+ The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised
73
+ object segmentation using classification ConvNets.
74
+ Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks~\cite{Zeiler13}.
75
+ \end{abstract}
76
+
77
+ \section{Introduction}
78
+ With the deep Convolutional Networks (ConvNets)~\cite{LeCun98} now
79
+ being the architecture of choice for large-scale image
80
+ recognition~\cite{Krizhevsky12,Ciresan12}, the problem of
81
+ understanding the aspects of visual appearance, captured inside a deep
82
+ model, has become particularly relevant and is the subject of this paper.
83
+
84
+ In previous work, Erhan~\etal~\cite{Erhan09} visualised deep models by
85
+ finding an input image which maximises the neuron activity of
86
+ interest by carrying out an optimisation using gradient ascent
87
+ in the image space. The method was used to visualise the hidden
88
+ feature layers of unsupervised deep architectures, such as the Deep
89
+ Belief Network (DBN)~\cite{Hinton06}, and it was later employed by
90
+ \mbox{Le~\etal~\cite{Le12}} to visualise the class models, captured by
91
+ a deep unsupervised auto-encoder.
92
+ Recently, the problem of ConvNet visualisation was addressed by
93
+ Zeiler~\etal~\cite{Zeiler13}. For convolutional layer visualisation,
94
+ they proposed the Deconvolutional Network (DeconvNet) architecture,
95
+ which aims to approximately reconstruct the input of each layer from
96
+ its output.
97
+
98
+ In this paper, we address the visualisation of deep image
99
+ classification ConvNets, trained on the large-scale ImageNet challenge
100
+ dataset~\cite{Berg10a}. To this end, we make the following three
101
+ contributions. First, we demonstrate that understandable
102
+ visualisations of ConvNet classification models can be obtained using
103
+ the numerical optimisation of the input image~\cite{Erhan09}
104
+ (\sref{sec:class_model}). Note, in our case, unlike~\cite{Erhan09},
105
+ the net is trained in a supervised manner, so we know which neuron in
106
+ the final fully-connected classification layer should be maximised to
107
+ visualise the class of interest (in the unsupervised case,
108
+ \cite{Le12} had to use a separate annotated image set
109
+ to find out the neuron responsible for a particular
110
+ class).
111
+ To the best of our knowledge, we are the
112
+ first to apply the method of~\cite{Erhan09} to the visualisation of
113
+ \mbox{ImageNet} classification ConvNets~\cite{Krizhevsky12}. Second, we
114
+ propose a method for computing the spatial support of a given class in
115
+ a given image (image-specific class saliency map) using a single
116
+ back-propagation pass through a classification ConvNet
117
+ (\sref{sec:class_saliency}). As discussed in~\sref{sec:graph_cut},
118
+ such saliency maps can be used for weakly supervised object
119
+ localisation. Finally, we show in~\sref{sec:comp_deconv} that the
120
+ gradient-based visualisation methods generalise the deconvolutional
121
+ network reconstruction procedure~\cite{Zeiler13}.
122
+
123
+ \paragraph{ConvNet implementation details.}
124
+ Our visualisation experiments were carried out using a single deep ConvNet, trained on the ILSVRC-2013 dataset~\cite{Berg10a}, which includes 1.2M training images, labelled into 1000 classes.
125
+ Our ConvNet is similar to that of~\cite{Krizhevsky12} and is implemented using their \verb!cuda-convnet! toolbox\footnote{\url{http://code.google.com/p/cuda-convnet/}}, although our net is less wide,
126
+ and we used additional image jittering, based on zeroing-out random parts of an image.
127
+ Our weight layer configuration is: conv64-conv256-conv256-conv256-conv256-full4096-full4096-full1000,
128
+ where convN denotes a convolutional layer with N filters, fullM -- a fully-connected layer with M outputs.
129
+ On ILSVRC-2013 validation set, the network achieves the top-1/top-5 classification error of $39.7\%/17.7\%$, which is slightly better than
130
+ $40.7\%$/$18.2\%$, reported in~\cite{Krizhevsky12} for a single ConvNet.
131
+
132
+ \begin{figure}[hp]
133
+ \centering
134
+ \includegraphics[width=\textwidth]{class_model}
135
+ \caption{
136
+ \textbf{Numerically computed images, illustrating the class appearance models, learnt by a ConvNet, trained on ILSVRC-2013.}
137
+ Note how different aspects of class appearance are captured in a single image.
138
+ Better viewed in colour.
139
+ }
140
+ \label{fig:class_model}
141
+ \end{figure}
142
+
143
+ \section{Class Model Visualisation}
144
+ \label{sec:class_model}
145
+ In this section we describe a technique for visualising the class models, learnt by the image classification ConvNets.
146
+ Given a learnt classification ConvNet and a class of interest, the visualisation method consists in numerically \emph{generating} an image~\cite{Erhan09},
147
+ which is representative of the class in terms of the ConvNet class scoring model.
148
+
149
+ More formally, let $S_c(I)$ be the score of the class $c$, computed by the classification layer of the ConvNet for an image $I$.
150
+ We would like to find an $L_2$-regularised image, such that the score $S_c$ is high:
151
+ \begin{equation}
152
+ \label{eq:class_img}
153
+ \arg\max_I S_c(I) - \lambda \|I\|_2^2,
154
+ \end{equation}
155
+ where $\lambda$ is the regularisation parameter.
156
+ A locally-optimal $I$ can be found by the back-propagation method. The procedure is related to the ConvNet training procedure, where the back-propagation is used to optimise
157
+ the layer weights. The difference is that in our case the optimisation is performed with respect to the input image, while the weights are fixed to those found during the training stage.
158
+ We initialised the optimisation with the zero image (in our case, the ConvNet was trained on the zero-centred image data), and then added the training set mean image to the result.
159
+ The class model visualisations for several classes are shown in~\figref{fig:class_model}.
160
+
161
+
162
+ It should be noted that we used the (unnormalised) class scores $S_c$, rather than the class posteriors returned by the soft-max layer: $P_c=\frac{\exp S_c}{\sum_{c'} \exp S_{c'}}$.
163
+ The reason is that the maximisation of the class posterior can be achieved by minimising the scores of other classes.
164
+ Therefore, we optimise $S_c$ to ensure that the optimisation concentrates only on the class in question $c$.
165
+ We also experimented with optimising the posterior $P_c$, but the results were not visually prominent, thus confirming our intuition.
166
+
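+ A minimal sketch of this optimisation in present-day autograd terms
+ (PyTorch is used purely for illustration, not the cuda-convnet toolbox
+ employed for the experiments; \verb!model! is assumed to return pre-softmax
+ scores, and the step size, number of steps and regularisation coefficient
+ are illustrative choices):
+ \begin{verbatim}
+ import torch
+
+ def class_model_visualisation(model, target_class,
+                               shape=(1, 3, 224, 224),
+                               steps=200, lr=1.0, lam=1e-4):
+     # Gradient ascent on the unnormalised class score S_c with an L2
+     # penalty, starting from the zero image; the network weights are fixed.
+     img = torch.zeros(shape, requires_grad=True)
+     for _ in range(steps):
+         if img.grad is not None:
+             img.grad.zero_()
+         score = model(img)[0, target_class]        # S_c(I), pre-softmax
+         objective = score - lam * (img ** 2).sum()
+         objective.backward()
+         with torch.no_grad():
+             img += lr * img.grad                   # ascend the objective
+     return img.detach()  # add the training-set mean image before display
+ \end{verbatim}
+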
167
+ \section{Image-Specific Class Saliency Visualisation}
168
+ \label{sec:class_saliency}
169
+
170
+ In this section we describe how a classification ConvNet can be queried about the spatial support of a particular class in a given image.
171
+ Given an image $I_0$, a class $c$, and a classification ConvNet with the class score function $S_c(I)$,
172
+ we would like to rank the pixels of $I_0$ based on their influence on the score $S_c(I_0)$.
173
+
174
+ We start with a motivational example. Consider the linear score model for the class $c$:
175
+ \begin{equation}
176
+ S_c(I)=w_c^T I + b_c,
177
+ \end{equation}
178
+ where the image $I$ is represented in the vectorised (one-dimensional) form, and $w_c$ and $b_c$ are respectively the weight vector and the bias
179
+ of the model. In this case, it is easy to see that the magnitude of the elements of $w_c$ defines the importance of the corresponding pixels of $I$ for the
180
+ class $c$.
181
+
182
+ In the case of deep ConvNets, the class score $S_c(I)$ is a highly non-linear function of $I$, so the reasoning of the previous paragraph can not be immediately
183
+ applied. However, given an image $I_0$, we can approximate $S_c(I)$ with a linear function in the neighbourhood of $I_0$ by computing the first-order Taylor expansion:
184
+ \begin{equation}
185
+ S_c(I) \approx w^T I + b,
186
+ \end{equation}
187
+ where $w$ is the derivative of $S_c$ with respect to the image $I$ at the point (image) $I_0$:
188
+ \begin{equation}
189
+ \label{eq:deriv_img}
190
+ w=\left . \frac{\partial S_c}{\partial I} \right|_{I_0}.
191
+ \end{equation}
192
+
193
+ Another interpretation of computing the image-specific class saliency using the class score derivative~\eqref{eq:deriv_img}
194
+ is that the magnitude of the derivative indicates which pixels need to be changed the least to affect the class score the most.
195
+ One can expect that such pixels correspond to the object location in the image.
196
+ We note that a similar technique has been previously applied by~\cite{Baehrens10} in the context of Bayesian classification.
197
+
198
+ \subsection{Class Saliency Extraction}
199
+ \label{sec:sal_extraction}
200
+ Given an image $I_0$ (with $m$ rows and $n$ columns) and a class $c$, the class saliency map $M \in \Rcal^{m \times n}$ is computed as follows.
201
+ First, the derivative $w$~\eqref{eq:deriv_img} is found by back-propagation.
202
+ After that, the saliency map is obtained by rearranging the elements of the vector $w$.
203
+ In the case of a grey-scale image, the number of elements in $w$ is equal to the number of pixels in $I_0$, so the map can be computed as
204
+ $M_{ij} = |w_{h(i,j)}|$, where $h(i,j)$ is the index of the element of $w$,
205
+ corresponding to the image pixel in the $i$-th row and $j$-th column.
206
+ In the case of the multi-channel (\eg RGB) image, let us assume that the colour channel $c$ of the pixel $(i,j)$ of image $I$ corresponds to the element of $w$ with the index $h(i,j,c)$.
207
+ To derive a single class saliency value for each pixel $(i,j)$, we took the maximum magnitude of $w$ across all colour channels: $M_{ij} = \max_c |w_{h(i,j,c)}|$.
208
+
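+ A sketch of this computation (again in autograd form purely for
+ illustration; \verb!model! returns unnormalised class scores and
+ \verb!image! is an already-preprocessed tensor of shape (3, H, W)):
+ \begin{verbatim}
+ import torch
+
+ def class_saliency_map(model, image, target_class):
+     # One back-propagation pass of the class score w.r.t. the input image,
+     # followed by a max of the absolute gradient over colour channels.
+     img = image.unsqueeze(0).clone().requires_grad_(True)
+     score = model(img)[0, target_class]
+     score.backward()
+     w = img.grad[0]                        # dS_c/dI, shape (3, H, W)
+     return w.abs().max(dim=0).values       # M_ij = max_c |w_h(i,j,c)|
+ \end{verbatim}
+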
209
+ It is important to note that the saliency maps are extracted using a classification ConvNet trained on the image labels, so \emph{no additional annotation is required} (such as
210
+ object bounding boxes or segmentation masks). The computation of the image-specific saliency map for a single class is extremely quick, since it only requires a single back-propagation pass.
211
+
212
+ We visualise the saliency maps for the highest-scoring class (top-1 class prediction) on randomly selected ILSVRC-2013 test set images in~\figref{fig:sal_map}.
213
+ Similarly to the ConvNet classification procedure~\cite{Krizhevsky12}, where the class predictions are computed on 10 cropped and reflected sub-images,
214
+ we computed 10 saliency maps on the 10 sub-images, and then averaged them.
215
+
216
+ \begin{figure}[hp]
217
+ \centering
218
+ \includegraphics[width=\textwidth]{img_sal}
219
+ \caption{
220
+ \textbf{Image-specific class saliency maps for the top-1 predicted class in ILSVRC-2013 test images.}
221
+ The maps were extracted using a single back-propagation pass through a classification ConvNet.
222
+ No additional annotation (except for the image labels) was used in training.
223
+ }
224
+ \label{fig:sal_map}
225
+ \end{figure}
226
+
227
+ \subsection{Weakly Supervised Object Localisation}
228
+ \label{sec:graph_cut}
229
+ The weakly supervised class saliency maps (\sref{sec:sal_extraction}) encode the location of the object of the given class in the given image, and thus can be used for object localisation (in spite of being trained
230
+ on image labels only).
231
+ Here we briefly describe a simple object localisation procedure, which we used for the localisation task of the ILSVRC-2013 challenge~\cite{Simonyan13d}.
232
+
233
+ Given an image and the corresponding class saliency map, we compute the object segmentation mask using the GraphCut colour segmentation~\cite{Boykov01}.
234
+ The use of the colour segmentation is motivated by the fact that the saliency map might capture only the most discriminative part of an object, so saliency thresholding might not be able to
235
+ highlight the whole object. Therefore, it is important to be able to propagate the thresholded map to other parts of the object, which we aim to achieve here using the colour continuity cues.
236
+ Foreground and background colour models were set to be the Gaussian Mixture Models.
237
+ The foreground model was estimated from the pixels with the saliency higher than a threshold, set to the $95\%$ quantile of the saliency distribution in the image;
238
+ the background model was estimated from the pixels with the saliency smaller than the $30\%$ quantile (\figref{fig:seg}, right-middle).
239
+ The GraphCut segmentation~\cite{Boykov01} was then performed using the publicly available implementation\footnote{\url{http://www.robots.ox.ac.uk/~vgg/software/iseg/}}.
240
+ Once the image pixel labelling into foreground and background is computed, the object segmentation mask is set to the largest connected component of the foreground pixels (\figref{fig:seg}, right).
241
+
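+ The seeding step can be sketched as follows (NumPy only; the Gaussian
+ Mixture colour models and the GraphCut solve itself are delegated to the
+ external implementation cited above):
+ \begin{verbatim}
+ import numpy as np
+
+ def graphcut_seeds(saliency, fg_quantile=0.95, bg_quantile=0.30):
+     # Pixels above the 95% saliency quantile seed the foreground colour
+     # model; pixels below the 30% quantile seed the background model.
+     fg_mask = saliency > np.quantile(saliency, fg_quantile)
+     bg_mask = saliency < np.quantile(saliency, bg_quantile)
+     return fg_mask, bg_mask
+ \end{verbatim}
+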
242
+ We entered our object localisation method into the ILSVRC-2013 localisation challenge.
243
+ Considering that the challenge requires the object bounding boxes to be reported, we computed them as the bounding boxes of the object segmentation masks.
244
+ The procedure was repeated for each of the top-5 predicted classes.
245
+ The method achieved $46.4\%$ top-5 error on the test set of ILSVRC-2013.
246
+ It should be noted that the method is weakly supervised (unlike the challenge winner with $29.9\%$ error), and the object localisation task was not taken into account during training.
247
+ In spite of its simplicity, the method still outperformed our submission to the ILSVRC-2012 challenge (which used the same dataset), which achieved $50.0\%$ localisation error using a fully-supervised
248
+ algorithm based on the part-based models~\cite{Felzenswalb08} and Fisher vector feature encoding~\cite{Perronnin10a}.
249
+
250
+ \begin{figure}[hp]
251
+ \centering
252
+ \includegraphics[width=\textwidth]{img_seg}
253
+ \caption{
254
+ \textbf{Weakly supervised object segmentation using ConvNets (\sref{sec:graph_cut}).}
255
+ \emph{Left:} images from the test set of ILSVRC-2013.
256
+ \emph{Left-middle:} the corresponding saliency maps for the top-1 predicted class.
257
+ \emph{Right-middle:} thresholded saliency maps: blue shows the areas used to compute the foreground colour model, cyan -- background colour model, pixels shown in red are not used
258
+ for colour model estimation.
259
+ \emph{Right:} the resulting foreground segmentation masks.
260
+ }
261
+ \label{fig:seg}
262
+ \end{figure}
263
+
264
+ \section{Relation to Deconvolutional Networks}
265
+ \label{sec:comp_deconv}
266
+ In this section we establish the connection between the gradient-based visualisation and the \mbox{DeconvNet} architecture of~\cite{Zeiler13}.
267
+ As we show below, DeconvNet-based reconstruction of the $n$-th layer input $X_n$ is either equivalent or similar to computing the gradient
268
+ of the visualised neuron activity $f$ with respect to $X_n$, so DeconvNet effectively corresponds to the gradient back-propagation through a ConvNet.
269
+
270
+ For the convolutional layer $X_{n+1}=X_n \star K_n$, the gradient is computed as $\partial f / \partial X_n = \partial f / \partial X_{n+1} \star \widehat{K_n}$,
271
+ where $K_n$ and $\widehat{K_n}$ are the convolution kernel and its flipped version, respectively.
272
+ The convolution with the flipped kernel exactly corresponds to computing the $n$-th layer reconstruction $R_n$ in a DeconvNet: $R_n = R_{n+1} \star \widehat{K_n}$.
273
+
274
+ For the RELU rectification layer $X_{n+1}=\max(X_n, 0)$, the sub-gradient takes the form:
275
+ $\partial f / \partial X_n = \partial f / \partial X_{n+1} \indic\left(X_n >0\right)$, where $\indic$ is the element-wise indicator function. This is slightly different from
276
+ the DeconvNet RELU reconstruction: $R_n = R_{n+1} \indic\left(R_{n+1} >0\right)$, where the sign indicator is computed on the output reconstruction $R_{n+1}$ instead of the layer input $X_n$.
277
+
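+ The difference is easy to state in code (a toy NumPy comparison for a
+ single RELU layer):
+ \begin{verbatim}
+ import numpy as np
+
+ def relu_backprop(grad_out, x_in):
+     # Gradient rule: pass the signal where the forward input was positive.
+     return grad_out * (x_in > 0)
+
+ def deconvnet_relu(recon_out):
+     # DeconvNet rule: pass the signal where the signal itself is positive.
+     return recon_out * (recon_out > 0)
+
+ x = np.array([-1.0, 2.0, 3.0])   # forward-pass input to the RELU
+ g = np.array([ 0.5, -0.5, 1.0])  # signal arriving from the layer above
+ print(relu_backprop(g, x))       # passes -0.5 and 1.0, zeroes the first
+ print(deconvnet_relu(g))         # passes 0.5 and 1.0, zeroes the -0.5
+ \end{verbatim}
+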
278
+ Finally, consider a max-pooling layer $X_{n+1}(p) = \max_{q \in \Omega(p)} X_n(q)$, where the element $p$ of the output feature map is computed by pooling over the corresponding spatial
279
+ neighbourhood $\Omega(p)$ of the input. The sub-gradient is computed as
280
+ $\partial f / \partial X_n(s) = \partial f / \partial X_{n+1} (p) \indic (s = \arg\max_{q \in \Omega(p)} X_n(q))$.
281
+ Here, $\arg\max$ corresponds to the max-pooling ``switch'' in a DeconvNet.
282
+
283
+ We can conclude that apart from the RELU layer, computing the approximate feature map reconstruction $R_n$ using a DeconvNet is equivalent to computing
284
+ the derivative $\partial f / \partial X_n$ using back-propagation, which is a part of our visualisation algorithms.
285
+ Thus, gradient-based visualisation can be seen as the generalisation of that of~\cite{Zeiler13}, since the gradient-based techniques can be applied to the visualisation of activities
286
+ in any layer, not just a convolutional one. In particular, in this paper we visualised the class score neurons in the final fully-connected layer.
287
+
288
+ It should be noted that our class model visualisation (\sref{sec:class_model}) depicts the notion of a class, memorised by a ConvNet, and is not specific to any particular image.
289
+ At the same time, the class saliency visualisation (\sref{sec:class_saliency}) is image-specific, and in this sense is related to the image-specific convolutional layer visualisation of~\cite{Zeiler13}
290
+ (the main difference being that we visualise a neuron in a fully connected layer rather than a convolutional layer).
291
+
292
+ \section{Conclusion}
293
+ In this paper, we presented two visualisation techniques for deep classification ConvNets.
294
+ The first generates an artificial image, which is representative of a class of interest.
295
+ The second computes an image-specific class saliency map, highlighting the areas of the given image, discriminative with respect to the given class.
296
+ We showed that such saliency maps can be used to initialise GraphCut-based object segmentation without the need to train dedicated segmentation or detection models.
297
+ Finally, we demonstrated that gradient-based visualisation techniques generalise the DeconvNet reconstruction procedure~\cite{Zeiler13}.
298
+ In our future research, we are planning to incorporate the image-specific saliency maps into learning formulations in a more principled manner.
299
+
300
+ \section*{Acknowledgements}
301
+ This work was supported by ERC grant VisRec no. 228180.
302
+ We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
303
+
304
+ \small
305
+ \bibliographystyle{plainnat}
306
+ \bibliography{bib/shortstrings,bib/vgg_local,bib/vgg_other,bib/new}
307
+ \end{document}
papers/1404/1404.1100.tex ADDED
@@ -0,0 +1,553 @@
1
+ \documentclass[rmp,aps,twocolumn,nofootinbib]{revtex4}
2
+
3
+ \usepackage{pslatex} \usepackage{amsmath}
4
+ \usepackage[]{graphicx}
5
+
6
+ \setlength{\topmargin}{-0.5in}
7
+
8
+ \setlength{\parindent}{0pt} \setlength{\parskip}{1.5ex} \newcommand{\linespacing}[1]{\renewcommand\baselinestretch{#1}\normalsize} \long\def\symbolfootnote[#1]#2{\begingroup \def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup}
9
+ \def\argmax{\operatornamewithlimits{arg\,max}}
10
+
11
+ \newcommand{\comment}[1]{}
12
+
13
+
14
+
15
+
16
+ \begin{document}
17
+
18
+ \title{A Tutorial on Principal Component Analysis}
19
+ \date{\today; Version 3.02}
20
+ \author{Jonathon Shlens}
21
+ \email{jonathon.shlens@gmail.com}
22
+ \affiliation{
23
+ Google Research \\
24
+ Mountain View, CA 94043}
25
+
26
+
27
+ \begin{abstract}
28
+ Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
29
+ \end{abstract}
30
+
31
+ \maketitle
32
+
33
+ \section{Introduction}
34
+ Principal component analysis (PCA) is a standard tool in modern data analysis - in diverse fields from neuroscience to computer graphics - because it is a simple, non-parametric method for extracting relevant information from confusing data sets. With minimal effort PCA provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden, simplified structures that often underlie it.
35
+
36
+ The goal of this tutorial is to provide both an intuitive feel for PCA, and a thorough discussion of this topic. We will begin with a simple example and provide an intuitive explanation of the goal of PCA. We will continue by adding mathematical rigor to place it within the framework of linear algebra to provide an explicit solution. We will see how and why PCA is intimately related to the mathematical technique of singular value decomposition (SVD). This understanding will lead us to a prescription for how to apply PCA in the real world and an appreciation for the underlying assumptions. My hope is that a thorough understanding of PCA provides a foundation for approaching the fields of machine learning and dimensional reduction.
37
+
38
+ The discussion and explanations in this paper are informal in the spirit of a tutorial. The goal of this paper is to {\it educate}. Occasionally, rigorous mathematical proofs are necessary although relegated to the Appendix. Although not as vital to the tutorial, the proofs are presented for the adventurous reader who desires a more complete understanding of the math. My only assumption is that the reader has a working knowledge of linear algebra. My goal is to provide a thorough discussion by largely building on ideas from linear algebra and avoiding challenging topics in statistics and optimization theory (but see Discussion). Please feel free to contact me with any suggestions, corrections or comments.
39
+
40
+ \section{Motivation: A Toy Example}
41
+
42
+ Here is the perspective: we are an experimenter. We are trying to understand some phenomenon by measuring various quantities (e.g. spectra, voltages, velocities, etc.) in our system. Unfortunately, we can not figure out what is happening because the data appears clouded, unclear and even redundant. This is not a trivial problem, but rather a fundamental obstacle in empirical science. Examples abound from complex systems such as neuroscience, web indexing, meteorology and oceanography - the number of variables to measure can be unwieldy and at times even {\it deceptive}, because the underlying relationships can often be quite simple.
43
+
44
+ Take for example a simple toy problem from physics diagrammed in Figure~\ref{diagram:toy}. Pretend we are studying the motion of the physicist's ideal spring. This system consists of a ball of mass $m$ attached to a massless, frictionless spring. The ball is released a small distance away from equilibrium (i.e. the spring is stretched). Because the spring is ideal, it oscillates indefinitely along the $x$-axis about its equilibrium at a set frequency.
45
+
46
+ This is a standard problem in physics in which the motion along the $x$ direction is solved by an explicit function of time. In other words, the underlying dynamics can be expressed as a function of a single variable $x$.
47
+
48
+ However, being ignorant experimenters we do not know any of this. We do not know which, let alone how many, axes and dimensions are important to measure. Thus, we decide to measure the ball's position in a three-dimensional space (since we live in a three dimensional world). Specifically, we place three movie cameras around our system of interest. At \mbox{120 Hz} each movie camera records an image indicating a two dimensional position of the ball (a projection). Unfortunately, because of our ignorance, we do not even know what are the real $x$, $y$ and $z$ axes, so we choose three camera positions $\vec{\mathbf{a}}, \vec{\mathbf{b}}$ and $\vec{\mathbf{c}}$ at some arbitrary angles with respect to the system. The angles between our measurements might not even be $90^{o}$! Now, we record with the cameras for several minutes. The big question remains: {\it how do we get from this data set to a simple equation of $x$?}
49
+
50
+ We know a-priori that if we were smart experimenters, we would have just measured the position along the $x$-axis with one camera. But this is not what happens in the real world. We often do not know which measurements best reflect the dynamics of our system in question. Furthermore, we sometimes record more dimensions than we actually need.
51
+
52
+ Also, we have to deal with that pesky, real-world problem of noise. In the toy example this means that we need to deal with air, imperfect cameras or even friction in a less-than-ideal spring. Noise contaminates our data set only serving to obfuscate the dynamics further. {\it This toy example is the challenge experimenters face everyday.} Keep this example in mind as we delve further into abstract concepts. Hopefully, by the end of this paper we will have a good understanding of how to systematically extract $x$ using principal component analysis.
53
+
54
+ \begin{figure}[t]
55
+ \centering
56
+ \includegraphics[width=0.47\textwidth]{Toy-Example.pdf}
57
+ \caption{A toy example. The position of a ball attached to an oscillating spring is recorded using three cameras A, B and C. The position of the ball tracked by each camera is depicted in each panel below.}
58
+ \label{diagram:toy}
59
+ \end{figure}
60
+
61
+
62
+ \section {Framework: Change of Basis}
63
+
64
+ The goal of principal component analysis is to identify the most meaningful basis to re-express a data set. The hope is that this new basis will filter out the noise and reveal hidden structure. In the example of the spring, the explicit goal of PCA is to determine: ``the dynamics are along the $x$-axis.'' In other words, the goal of PCA is to determine that $\hat{\mathbf{x}}$, i.e. the unit basis vector along the $x$-axis, is the important dimension. Determining this fact allows an experimenter to discern which dynamics are important, redundant or noise.
65
+
66
+ \subsection{A Naive Basis}
67
+ With a more precise definition of our goal, we need a more precise definition of our data as well. We treat every time sample (or experimental trial) as an individual sample in our data set. At each time sample we record a set of data consisting of multiple measurements (e.g. voltage, position, etc.). In our data set, at one
68
+ point in time, camera {\it A} records a corresponding ball position $\left(x_A,y_A\right)$. One sample or trial can then be expressed as a 6 dimensional column vector
69
+ $$\vec{X}= \left[ \begin{array}{c} x_A \\ y_A \\ x_B \\ y_B \\ x_C \\ y_C \\ \end{array} \right]$$
70
+ where each camera contributes a 2-dimensional projection of the ball's position to the entire vector $\vec{X}$. If we record the ball's position for 10 minutes at 120 Hz, then we have recorded $10\times 60 \times 120 = 72000 $ of these vectors.
71
+
72
+ With this concrete example, let us recast this problem in abstract terms. Each sample $\vec{X}$ is an $m$-dimensional vector, where $m$ is the number of measurement types. Equivalently, every sample is a vector that lies in an $m$-dimensional vector space spanned by some orthonormal basis. From linear algebra we know that all measurement vectors form a linear combination of this set of unit length basis vectors. What is this orthonormal basis?
73
+
74
+ This question is usually a tacit assumption often overlooked. Pretend we gathered our toy example data above, but only looked at camera $A$. What is an orthonormal basis for $(x_A, y_A)$? A naive choice would be $\{(1,0),(0,1)\}$, but why select this basis over $\{ (\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}), (\frac{-\sqrt{2}}{2},\frac{\sqrt{2}}{2})\}$ or any other arbitrary rotation? The reason is that the {\it naive basis reflects the method we gathered the data.} Pretend we record the position $(2,2)$. We did not record $2\sqrt{2}$ in the $(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})$ direction and $0$ in the perpendicular direction. Rather, we recorded the position $(2,2)$ on our camera meaning 2 units up and 2 units to the left in our camera window. Thus our original basis reflects the method we measured our data.
75
+
76
+ How do we express this naive basis in linear algebra? In the two dimensional case, $\{(1,0),(0,1)\}$ can be recast as individual row vectors. A matrix constructed out of these row vectors is the $2\times 2$ identity matrix $I$. We can generalize this to the $m$-dimensional case by constructing an $m\times m$ identity matrix
77
+ $$\mathbf{B} = \left[ \begin{array}{c} \mathbf{b_1} \\ \mathbf{b_2} \\ \vdots \\ \mathbf{b_m} \end{array} \right] =
78
+ \left[ \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{array} \right] = \mathbf{I}$$
79
+ where each {\it row} is an orthonormal basis vector $\mathbf{b}_i$ with $m$ components. We can consider our naive basis as the effective starting point. All of our data has been recorded in this basis and thus it can be trivially expressed as a linear combination of $\{\mathbf{b}_i\}$.
80
+
81
+ \subsection{Change of Basis}
82
+ With this rigor we may now state more precisely what PCA asks: {\it Is there another basis, which is a linear combination of the original basis, that best re-expresses our data set?}
83
+
84
+ A close reader might have noticed the conspicuous addition of the word {\it linear}. Indeed, PCA makes one stringent but powerful assumption: linearity. Linearity vastly simplifies the problem by restricting the set of potential bases. With this assumption PCA is now limited to re-expressing the data as a {\it linear combination} of its basis vectors.
85
+
86
+ Let $\mathbf{X}$ be the original data set, where each $column$ is a single sample (or moment in time) of our data set (i.e. $\vec{X}$). In the toy example $\mathbf{X}$ is an $m\times n$ matrix where $m=6$ and $n=72000$. Let $\mathbf{Y}$ be another $m \times n$ matrix related by a linear transformation $\mathbf{P}$. $\mathbf{X}$ is the original recorded data set and $\mathbf{Y}$ is a new representation of that data set.
87
+ \begin{equation}
88
+ \mathbf{PX = Y}
89
+ \label{eqn:basis-transform}
90
+ \end{equation}
91
+ Also let us define the following quantities.\footnote{In this section $\mathbf{x_i}$ and $\mathbf{y_i}$ are {\it column} vectors, but be forewarned. In all other sections $\mathbf{x_i}$ and $\mathbf{y_i}$ are {\it row} vectors.}
92
+ \begin{itemize}
93
+ \item $\mathbf{p_i}$ are the rows of $\mathbf{P}$
94
+ \item $\mathbf{x_i}$ are the columns of $\mathbf{X}$ (or individual $\vec{X})$.
95
+ \item $\mathbf{y_i}$ are the columns of $\mathbf{Y}$.
96
+ \end{itemize}
97
+ Equation~\ref{eqn:basis-transform} represents a change of basis and thus can have many interpretations.
98
+ \begin{enumerate}
99
+ \item $\mathbf{P}$ is a matrix that transforms $\mathbf{X}$ into $\mathbf{Y}$.
100
+ \item Geometrically, $\mathbf{P}$ is a rotation and a stretch which again transforms $\mathbf{X}$ into $\mathbf{Y}$.
101
+ \item The rows of $\mathbf{P}$, $\left\{\mathbf{p_1}, \ldots, \mathbf{p_m}\right\}$, are a set of new basis vectors for expressing the columns of $\mathbf{X}$.
102
+ \end{enumerate}
103
+ The latter interpretation is not obvious but can be seen by writing out the explicit dot products of $\mathbf{PX}$.
104
+ \begin{eqnarray*}
105
+ \mathbf{PX} & = & \left[ \begin{array}{c} \mathbf{p_1} \\ \vdots \\ \mathbf{p_m} \\ \end{array} \right] \left[ \begin{array}{ccc} \mathbf{x_1} & \cdots & \mathbf{x_n} \\ \end{array} \right] \\
106
+ \mathbf{Y} & = & \left[ \begin{array}{ccc} \mathbf{p_1 \cdot x_1} & \cdots & \mathbf{p_1 \cdot x_n} \\ \vdots & \ddots & \vdots \\ \mathbf{p_m \cdot x_1} & \cdots & \mathbf{p_m \cdot x_n} \\ \end{array} \right] \\
107
+ \end{eqnarray*}
108
+ We can note the form of each column of $\mathbf{Y}$.
109
+ $$\mathbf{y}_i = \left[ \begin{array}{c} \mathbf{p_1\cdot x_i} \\ \vdots \\ \mathbf{p_m\cdot x_i} \\ \end{array}\right]$$
110
+ We recognize that each coefficient of $\mathbf{y_i}$ is a dot-product of $\mathbf{x_i}$ with the corresponding row in $\mathbf{P}$. In other words, the $j^{th}$ coefficient of $\mathbf{y_i}$ is a projection on to the $j^{th}$ row of $\mathbf{P}$. This is in fact the very form of an equation where $\mathbf{y_i}$ is a projection on to the basis of $\left\{\mathbf{p_1}, \ldots, \mathbf{p_m}\right\}$. Therefore, the rows of $\mathbf{P}$ are a new set of basis vectors for
111
+ representing the columns of $\mathbf{X}$.
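+
+ As a minimal Matlab sketch of Equation~\ref{eqn:basis-transform}, assuming an arbitrary set of 100 two-dimensional samples and a $45^{\circ}$ rotation for $\mathbf{P}$, the rows of $\mathbf{P}$ act as the new basis vectors:
+ \begin{verbatim}
+ % a minimal sketch of Y = P*X, assuming 100 random
+ % 2-D samples stored as the columns of X
+ X = randn(2, 100);
+ P = [ cos(pi/4)  sin(pi/4) ;
+      -sin(pi/4)  cos(pi/4) ];   % rows are orthonormal basis vectors
+ Y = P * X;                      % column i of Y = projections of x_i
+                                 % onto the rows of P
+ \end{verbatim}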
112
+
113
+ \subsection{Questions Remaining}
114
+ By assuming linearity the problem reduces to finding the appropriate {\it change of basis}. The row vectors $\left\{\mathbf{p_1}, \ldots, \mathbf{p_m}\right\}$ in this transformation will become the {\it principal components} of $\mathbf{X}$. Several questions now arise.
115
+ \begin{itemize}
116
+ \item What is the best way to re-express $\mathbf{X}$?
117
+ \item What is a good choice of basis $\mathbf{P}$?
118
+ \end{itemize}
119
+ These questions must be answered by next asking ourselves {\it what features we would like $\mathbf{Y}$ to exhibit}. Evidently, additional assumptions beyond linearity are required to arrive at a reasonable result. The selection of these assumptions is the subject of the next section.
120
+
121
+
122
+ \section{Variance and the Goal}
123
+ Now comes the most important question: what does {\it best express} the data mean? This section will build up an intuitive answer to this question and along the way tack on additional assumptions.
124
+
125
+ \subsection{Noise and Rotation}
126
+
127
+ \begin{figure}
128
+ \includegraphics[width=0.25\textwidth]{SNR.pdf}
129
+ \caption{Simulated data of $(x, y)$ for camera {\it A}. The signal and noise variances $\sigma_{signal}^{2}$ and $\sigma_{noise}^{2}$ are graphically represented by the two lines subtending the cloud of data. Note that the largest direction of variance does not lie along the basis of the recording $(x_A, y_A)$ but rather along the best-fit line.}
130
+ \label{fig:snr}
131
+ \end{figure}
132
+
133
+ Measurement noise in any data set must be low or else, no matter the analysis technique, no information about a signal can be extracted. There exists no absolute scale for noise but rather all noise is quantified relative to the signal strength. A common measure is the {\it signal-to-noise ratio} ({\it SNR}), or a ratio of variances $\sigma^2$,
134
+ $$SNR = \frac{\sigma_{signal}^{2}}{\sigma_{noise}^{2}}.$$
135
+ A high SNR ($\gg 1$) indicates a high precision measurement, while a low SNR indicates very noisy data.
136
+
137
+ Let us examine the data from camera {\it A} in Figure~\ref{fig:snr} more closely. Remembering that the spring travels in a straight line, every individual camera should record motion in a straight line as well. Therefore, any spread deviating from straight-line motion is noise. The variances due to the signal and noise are indicated by the two lines in the diagram. The ratio of the two lengths measures how skinny the cloud is: possibilities include a thin line (SNR $\gg 1$), a circle (SNR $ = 1$) or even worse. By positing reasonably good measurements, we quantitatively assume that directions with largest variances in our measurement space contain the dynamics of interest. In Figure~\ref{fig:snr} the direction with the largest variance is not $\hat{x}_A = (1,0)$ nor $\hat{y}_A = (0,1)$, but the direction along the long axis of the cloud. Thus, by assumption the dynamics of interest exist along directions with largest variance and presumably highest SNR.
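+
+ A rough Matlab sketch of this intuition, assuming zero-mean two-dimensional camera data in the columns of a hypothetical matrix and an arbitrary candidate angle, computes the variance of the data projected onto a unit direction; the best-fit line is the direction that maximizes this quantity:
+ \begin{verbatim}
+ % a rough sketch, assuming zero-mean 2-D camera data
+ % stored in the columns of X
+ X = randn(2, 500);
+ X = X - repmat(mean(X,2), 1, 500);
+ t = pi/6;                          % some candidate angle
+ p = [cos(t); sin(t)];              % unit direction
+ proj = p' * X;                     % 1 x n projections onto p
+ variance_along_p = mean(proj.^2);  % largest along the best-fit line
+ \end{verbatim}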
138
+
139
+ Our assumption suggests that the basis for which we are searching is not the naive basis because these directions (i.e. $(x_A, y_A)$) do not correspond to the directions of largest variance. Maximizing the variance (and by assumption the SNR) corresponds to finding the appropriate rotation of the naive basis. This intuition corresponds to finding the direction indicated by the line $\sigma^2_{signal}$ in Figure~\ref{fig:snr}. In the 2-dimensional case of Figure~\ref{fig:snr} the direction of largest variance corresponds to the best-fit line for the data cloud. Thus, rotating the naive basis to lie parallel to the best-fit line would reveal the direction of motion of the spring for the 2-D case. How do we generalize this notion to an arbitrary number of dimensions? Before we approach this question we need to examine this issue from a second perspective.
140
+
141
+ \subsection{Redundancy}
142
+
143
+ \begin{figure}
144
+ \includegraphics[width=0.47\textwidth]{Redundancy.pdf}
145
+ \caption{A spectrum of possible redundancies in data from the two separate measurements $r_1$ and $r_2$. The two measurements on the left are uncorrelated because one can not predict one from the other. Conversely, the two measurements on the right are highly correlated indicating highly redundant measurements.}
146
+ \label{fig:redundancy}
147
+ \end{figure}
148
+
149
+ Figure~\ref{fig:snr} hints at an additional confounding factor in our data - redundancy. This issue is particularly evident in the example of the spring. In this case multiple sensors record the same dynamic information. Reexamine Figure~\ref{fig:snr} and ask whether it was really necessary to record 2 variables. Figure~\ref{fig:redundancy} might reflect a range of possible plots between two arbitrary measurement types $r_1$ and $r_2$. The left-hand panel depicts two recordings with no apparent relationship. Because one can not predict $r_1$ from $r_2$, one says that $r_1$ and $r_2$ are uncorrelated.
150
+
151
+ On the other extreme, the right-hand panel of Figure~\ref{fig:redundancy} depicts highly correlated recordings. This extremity might be achieved by several means:
152
+ \begin{itemize}
153
+ \item A plot of $(x_A,x_B)$ if cameras {\it A} and {\it B} are very nearby.
154
+ \item A plot of $(x_A,\tilde{x}_A)$ where $x_A$ is in meters and $\tilde{x}_A$ is in inches.
155
+ \end{itemize}
156
+ Clearly in the right panel of Figure~\ref{fig:redundancy} it would be more meaningful to just have recorded a single variable, not both. Why? Because one can calculate $r_1$ from $r_2$ (or vice versa) using the best-fit line. Recording solely one response would express the data more concisely and reduce the number of sensor recordings ($2\rightarrow 1$ variables). Indeed, this is the central idea behind dimensional reduction.
157
+
158
+ \subsection{Covariance Matrix}
159
+ In a 2 variable case it is simple to identify redundant cases by finding the slope of the best-fit line and judging the quality of the fit. How do we quantify and generalize these notions to arbitrarily higher dimensions? Consider two sets of measurements with zero means
160
+ $$A = \left\{a_1,a_2,\ldots,a_n\right\}\;,\;\;\;B=\left\{b_1,b_2,\ldots,b_n\right\}$$
161
+ where the subscript denotes the sample number. The variance of $A$ and $B$ are individually defined as,
162
+ $$\sigma^{2}_{A} = \frac{1}{n}\sum_i a^2_i, \;\;\;\; \sigma^{2}_{B} = \frac{1}{n}\sum_i b^2_i$$
163
+ The {\it covariance} between $A$ and $B$ is a straightforward generalization.
164
+ $$covariance\;of\;A\;and\;B \equiv \sigma^{2}_{AB} = \frac{1}{n} \sum_i a_i b_i$$
165
+ The covariance measures the degree of the linear relationship between two variables. A large positive value indicates positively correlated data. Likewise, a large negative value denotes negatively correlated data. The absolute magnitude of the covariance measures the degree of redundancy. Some additional facts about the covariance.
166
+ \begin{itemize}
167
+ \item $\sigma^{2}_{AB}$ is zero if and only if $A$ and $B$ are uncorrelated (e.g. Figure~\ref{fig:redundancy}, left panel).
168
+ \item $\sigma_{AB}^{2} = \sigma_{A}^{2}$ if $A=B$.
169
+ \end{itemize}
170
+ We can equivalently convert $A$ and $B$ into corresponding row vectors.
171
+ \begin{eqnarray*}
172
+ \mathbf{a} & = & \left[a_1\;a_2\;\ldots\;a_n\right] \\
173
+ \mathbf{b} & = & \left[b_1\;b_2\;\ldots\;b_n\right]
174
+ \end{eqnarray*}
175
+ so that we may express the covariance as a dot product matrix computation.\footnote{Note that in practice, the covariance $\sigma^2_{AB}$ is calculated as $\frac{1}{n-1}\sum_i a_i b_i$. The slight change in normalization constant arises from estimation theory, but that is beyond the scope of this tutorial.}
176
+ \begin{equation}
177
+ \sigma^{2}_{\mathbf{ab}} \equiv \frac{1}{n} \mathbf{ab}^T
178
+ \label{eqn:value-covariance}
179
+ \end{equation}
180
+
181
+ Finally, we can generalize from two vectors to an arbitrary number. Rename the row vectors $\mathbf{a}$ and $ \mathbf{b}$ as $\mathbf{x_1}$ and $\mathbf{x_2}$, respectively, and consider additional indexed row vectors $\mathbf{x_3},\ldots,\mathbf{x_m}$. Define a new $m \times n$ matrix $\mathbf{X}$.
182
+ $$\mathbf{X} = \left[ \begin{array}{c} \mathbf{x_1} \\ \vdots \\ \mathbf{x_m} \\ \end{array} \right]$$
183
+ One interpretation of $\mathbf{X}$ is the following. Each {\it row} of $\mathbf{X}$ corresponds to all measurements of a particular type. Each {\it column} of $\mathbf{X}$ corresponds to a set of measurements from one particular trial (this is $\vec{X}$ from section 3.1). We now arrive at a definition for the {\it covariance matrix} $\mathbf{C_X}$.
184
+ $$\mathbf{C_X} \equiv \frac{1}{n}\mathbf{X}\mathbf{X}^T.$$
185
+ Consider the matrix $\mathbf{C_X} = \frac{1}{n}\mathbf{XX}^{T}$. The $ij^{th}$ element of $\mathbf{C_X}$ is the dot product between the vector of the $i^{th}$ measurement type with the vector of the $j^{th}$ measurement type. We can summarize several properties of $\mathbf{C_X}$:
186
+ \begin{itemize}
187
+ \item $\mathbf{C_X}$ is a square symmetric $m \times m$ matrix (Theorem 2 of Appendix A)
188
+ \item The diagonal terms of $\mathbf{C_X}$ are the {\it variance} of particular measurement types.
189
+ \item The off-diagonal terms of $\mathbf{C_X}$ are the {\it covariance} between measurement types.
190
+ \end{itemize}
191
+ $\mathbf{C_X}$ captures the covariance between all possible pairs of measurements. The covariance values reflect the noise and redundancy in our measurements.
192
+ \begin{itemize}
193
+ \item In the diagonal terms, by assumption, large values correspond to interesting structure.
194
+ \item In the off-diagonal terms large magnitudes correspond to high redundancy.
195
+ \end{itemize}
196
+ Pretend we have the option of manipulating $\mathbf{C_X}$. We will suggestively define our manipulated covariance matrix $\mathbf{C_Y}$. What features do we want to optimize in $\mathbf{C_Y}$?
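+
+ As a quick numerical sketch of the covariance matrix, assuming an arbitrary $3 \times 1000$ stand-in data set with zero-mean rows, one can construct $\mathbf{C_X}$ directly and read the variances off its diagonal and the covariances off its off-diagonal entries:
+ \begin{verbatim}
+ % a minimal sketch, assuming an arbitrary 3 x 1000 data matrix
+ % whose rows are measurement types
+ X  = randn(3, 1000);
+ X  = X - repmat(mean(X,2), 1, 1000);   % zero-mean rows
+ n  = size(X, 2);
+ Cx = (1/n) * (X * X');                 % covariance matrix C_X
+ diag(Cx)                               % variances of each measurement type
+ Cx(1,2)                                % covariance between types 1 and 2
+ \end{verbatim}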
197
+
198
+ \subsection{Diagonalize the Covariance Matrix}
199
+ We can summarize the last two sections by stating that our goals are (1) to minimize redundancy, measured by the magnitude of the covariance, and (2) maximize the signal, measured by the variance. What would the optimized covariance matrix $\mathbf{C_Y}$ look like?
200
+ \begin{itemize}
201
+ \item All off-diagonal terms in $\mathbf{C_Y}$ should be zero. Thus, $\mathbf{C_Y}$ must be a diagonal matrix. Or, said another way, $\mathbf{Y}$ is decorrelated.
202
+ \item Each successive dimension in $\mathbf{Y}$ should be rank-ordered according to variance.
203
+ \end{itemize}
204
+ There are many methods for diagonalizing $\mathbf{C_Y}$. It is curious to note that PCA arguably selects the easiest method: PCA assumes that all basis vectors $\left\{\mathbf{p_1}, \ldots, \mathbf{p_m}\right\}$ are orthonormal, i.e. $\mathbf{P}$ is an {\it orthonormal matrix}. Why is this assumption easiest?
205
+
206
+ Envision how PCA works. In our simple example in Figure~\ref{fig:snr}, $\mathbf{P}$ acts as a generalized rotation to align a basis with the axis of maximal variance. In multiple dimensions this could be performed by a simple algorithm:
207
+ \begin{enumerate}
208
+ \item Select a normalized direction in $m$-dimensional space along which the variance in $\mathbf{X}$ is maximized. Save this vector as $\mathbf{p_1}$.
209
+ \item Find another direction along which variance is maximized; however, because of the orthonormality condition, restrict the search to all directions orthogonal to all previously selected directions. Save this vector as $\mathbf{p_i}$.
210
+ \item Repeat this procedure until $m$ vectors are selected.
211
+ \end{enumerate}
212
+ The resulting ordered set of $\mathbf{p}$'s are the {\it principal
213
+ components}.
214
+
215
+ In principle this simple algorithm works; however, that would belie the true reason why the orthonormality assumption is judicious. The true benefit of this assumption is that there exists an efficient, analytical solution to this problem. We will discuss two solutions in the following sections.
216
+
217
+ Notice what we gained with the stipulation of rank-ordered variance. We have a method for judging the importance of each principal direction. Namely, the variances associated with each direction $\mathbf{p_i}$ quantify how ``principal'' each direction is by rank-ordering each basis vector $\mathbf{p_i}$ according to the corresponding variances. We will now pause to review the implications of all the assumptions made to arrive at this mathematical goal.
218
+
219
+ \subsection{Summary of Assumptions}
220
+ This section provides a summary of the assumptions behind PCA and hints at when these assumptions might lead to poor results.
221
+
222
+ \newcounter{assumptions-count}
223
+ \begin{list}{{\bf \Roman{assumptions-count}}.}
224
+ {\usecounter{assumptions-count}
225
+ \setlength{\rightmargin}{\leftmargin}}
226
+
227
+ \item {\it Linearity}
228
+ \\
229
+ Linearity frames the problem as a change of basis. Several areas of research have explored how to extend these notions to nonlinear regimes (see Discussion).
230
+
231
+ \item {\it Large variances have important structure.}
232
+ \\
233
+ This assumption also encompasses the belief that the data has a high SNR. Hence, principal components with larger associated variances represent interesting structure, while those with lower variances represent noise. Note that this is a strong, and sometimes, incorrect assumption (see Discussion).
234
+
235
+ \item {\it The principal components are orthogonal.}
236
+ \\
237
+ This assumption provides an intuitive simplification that makes PCA soluble with linear algebra decomposition techniques. These techniques are highlighted in the two following sections.
238
+ \end{list}
239
+
240
+ We have discussed all aspects of deriving PCA - what remain are the linear algebra solutions. The first solution is somewhat straightforward while the second solution involves understanding an important algebraic decomposition.
241
+
242
+
243
+ \section{Solving PCA Using Eigenvector Decomposition }
244
+
245
+ We derive our first algebraic solution to PCA based on an important property of eigenvector decomposition. Once again, the data set is $\mathbf{X}$,
246
+ an $m \times n$ matrix, where $m$ is the number of measurement types and $n$ is the number of samples. The goal is summarized as follows.
247
+ \begin{quote}
248
+ Find some orthonormal matrix $\mathbf{P}$ in \mbox{$\mathbf{Y=PX}$} such that \mbox{$\mathbf{C_Y} \equiv \frac{1}{n}\mathbf{Y}\mathbf{Y}^T$} is a diagonal matrix. The rows of $\mathbf{P}$ are the {\it principal components} of $\mathbf{X}$.
249
+ \end{quote}
250
+ We begin by rewriting $\mathbf{C_Y}$ in terms of the unknown variable.
251
+ \begin{eqnarray*}
252
+ \mathbf{C_Y} & = & \frac{1}{n}\mathbf{YY}^T \\
253
+ & = & \frac{1}{n}(\mathbf{PX})(\mathbf{PX})^T \\
254
+ & = & \frac{1}{n}\mathbf{PXX}^{T}\mathbf{P}^{T} \\
255
+ & = & \mathbf{P}(\frac{1}{n}\mathbf{XX}^{T})\mathbf{P}^{T} \\
256
+ \mathbf{C_Y} & = & \mathbf{P}\mathbf{C_X P}^T
257
+ \end{eqnarray*}
258
+ Note that we have identified the covariance matrix of $\mathbf{X}$ in the last line.
259
+
260
+ Our plan is to recognize that any symmetric matrix $\mathbf{A}$ is diagonalized by an orthogonal matrix of its eigenvectors (by Theorems 3 and 4 from Appendix A). For a symmetric matrix $\mathbf{A}$ Theorem 4 provides $\mathbf{A}=\mathbf{EDE}^T$, where $\mathbf{D}$ is a diagonal matrix and $\mathbf{E}$ is a matrix of eigenvectors of $\mathbf{A}$ arranged as columns.\footnote{The matrix $\mathbf{A}$ might have $r\leq m$ orthonormal eigenvectors where $r$ is the rank of the matrix. When the rank of $\mathbf{A}$ is less than $m$, $\mathbf{A}$ is {\it degenerate} or all data occupy a subspace of dimension $r\leq m$. Maintaining the constraint of orthogonality, we can remedy this situation by selecting $(m-r)$ additional orthonormal vectors to ``fill up'' the matrix $\mathbf{E}$. These additional vectors do not affect the final solution because the variances associated with these directions are zero.}
261
+
262
+ Now comes the trick. {\it We select the matrix $\mathbf{P}$ to be a matrix where each row $\mathbf{p_i}$ is an eigenvector of $\frac{1}{n}\mathbf{XX}^T$.} By this selection, $\mathbf{P \equiv E^{T}}$. With this relation and Theorem 1 of Appendix A ($\mathbf{P}^{-1}=\mathbf{P}^{T}$) we can finish evaluating $\mathbf{C_Y}$.
263
+ \begin{eqnarray*}
264
+ \mathbf{C_Y} & = & \mathbf{PC_X P}^{T} \\
265
+ & = & \mathbf{P}(\mathbf{EDE}^{T})\mathbf{P}^{T} \\
266
+ & = & \mathbf{P}(\mathbf{P}^{T}\mathbf{DP})\mathbf{P}^{T} \\
267
+ & = & (\mathbf{PP}^{T})\mathbf{D}(\mathbf{PP}^{T}) \\
268
+ & = & (\mathbf{PP}^{-1})\mathbf{D}(\mathbf{PP}^{-1}) \\
269
+ \mathbf{C_Y} & = & \mathbf{D}
270
+ \end{eqnarray*}
271
+ It is evident that the choice of $\mathbf{P}$ diagonalizes $\mathbf{C_Y}$. This was the goal for PCA. We can summarize the results of PCA in the matrices $\mathbf{P}$ and $\mathbf{C_Y}$.
272
+ \begin{itemize}
273
+ \item The principal components of $\mathbf{X}$ are the eigenvectors of $\mathbf{C_X} = \frac{1}{n}\mathbf{XX}^T$.
274
+ \item The $i^{th}$ diagonal value of $\mathbf{C_Y}$ is the variance of $\mathbf{X}$ along $\mathbf{p_i}$.
275
+ \end{itemize}
276
+ In practice computing PCA of a data set $\mathbf{X}$ entails (1) subtracting off the mean of each measurement type and (2) computing the eigenvectors of $\mathbf{C_X}$. This solution is
277
+ demonstrated in Matlab code included in Appendix B.
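+
+ A brief numerical check of this result, assuming an arbitrary zero-mean stand-in data set and the $1/n$ normalization used in this section, is sketched below: choosing the rows of $\mathbf{P}$ to be the eigenvectors of $\mathbf{C_X}$ yields a (numerically) diagonal $\mathbf{C_Y}$.
+ \begin{verbatim}
+ % a minimal sketch, assuming a small random zero-mean data set
+ X  = randn(3, 1000);
+ X  = X - repmat(mean(X,2), 1, 1000);   % subtract the mean of each row
+ n  = size(X, 2);
+ Cx = (1/n) * (X * X');                 % covariance matrix of X
+ [E, D] = eig(Cx);                      % columns of E are eigenvectors
+ P  = E';                               % rows of P are the principal components
+ Cy = P * Cx * P';                      % diagonal up to round-off error
+ \end{verbatim}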
278
+
279
+
280
+
281
+ \section{A More General Solution Using SVD}
282
+ This section is the most mathematically involved and can be skipped without much loss of continuity. It is presented solely for completeness. We derive another algebraic solution for PCA and in the process, find that PCA is closely related to singular value decomposition (SVD). In fact, the two are so intimately related that the names are often used interchangeably. What we will see though is that SVD is a more general method of understanding {\it change of basis}.
283
+
284
+ We begin by quickly deriving the decomposition. In the following section we interpret the decomposition and in the last section we relate these results to {\it PCA}.
285
+
286
+ \subsection{Singular Value Decomposition}
287
+ Let $\mathbf{X}$ be an arbitrary $n \times m$ matrix\footnote{Notice that in this section only we are reversing convention from $m \times n$ to $n \times m$. The reason for this derivation will become clear in section 6.3.} and $\mathbf{X}^T\mathbf{X}$ be a rank $r$, square, symmetric $m \times m$ matrix. In a seemingly unmotivated fashion, let us define all of the quantities of interest.
288
+ \begin{itemize}
289
+ \item $\{\mathbf{\hat{v}}_1,\mathbf{\hat{v}}_2,\ldots,\mathbf{\hat{v}}_r\}$ is the set of {\it orthonormal} $m \times 1$ eigenvectors with associated eigenvalues $\{\lambda_1,\lambda_2,\ldots,\lambda_r\}$ for the symmetric matrix $\mathbf{X}^T\mathbf{X}$.
290
+ $$(\mathbf{X}^{T}\mathbf{X})\mathbf{\hat{v}}_i = \lambda_i\mathbf{\hat{v}}_i$$
291
+ \item $\sigma_i \equiv \sqrt{\lambda_i}$ are positive real and termed the {\it singular values}.
292
+ \item $\{\mathbf{\hat{u}}_1,\mathbf{\hat{u}}_2,\ldots,\mathbf{\hat{u}}_r\}$ is the set of $n \times 1$ vectors defined by \mbox{$\mathbf{\mathbf{\hat{u}_i} \equiv \frac{1}{\sigma_i}X\hat{v}_i}$}.
293
+ \end{itemize}
294
+ The final definition includes two new and unexpected properties.
295
+ \begin{itemize}
296
+ \item $\mathbf{\hat{u}_i} \cdot \mathbf{\hat{u}_j} = \left\{\begin{tabular}{cl}1 & \;\;if $i = j$ \\ 0 & \;\;otherwise\\\end{tabular}\right.$
297
+ \item \mbox{$\Vert\mathbf{X\hat{v}_i}\Vert = \sigma_i$}
298
+ \end{itemize}
299
+ These properties are both proven in Theorem 5. We now have all of the pieces to construct the decomposition. The scalar version of singular value decomposition is just a restatement of the third definition.
300
+ \begin{equation}
301
+ \mathbf{X\hat{v}}_i = \sigma_i\mathbf{\hat{u}}_i
302
+ \label{eqn:value-svd}
303
+ \end{equation}
304
+ This result says quite a bit. $\mathbf{X}$ multiplied by an eigenvector of $\mathbf{X}^{T}\mathbf{X}$ is equal to a scalar times another vector. The set of eigenvectors \mbox{$\{\mathbf{\hat{v}}_1,\mathbf{\hat{v}}_2,\ldots,\mathbf{\hat{v}}_r\}$} and the set of vectors \mbox{$\{\mathbf{\hat{u}}_1,\mathbf{\hat{u}}_2,\ldots,\mathbf{\hat{u}}_r\}$} are both orthonormal sets or bases in $r$-dimensional space.
305
+
306
+ We can summarize this result for all vectors in one matrix multiplication by following the prescribed construction in Figure~\ref{diagram:svd-construction}. We start by constructing a new diagonal matrix $\Sigma$.
307
+ $$\Sigma \equiv \left[ \begin{array}{cccccc} \sigma_{\tilde{1}} & & & & & \\ & \ddots & & & \mbox{{\Huge 0}} & \\ & & \sigma_{\tilde{r}} & & & \\ & & & 0 & & \\ & \mbox{{\Huge 0}} & & & \ddots & \\ & & & & & 0 \end{array} \right]$$
308
+ where \mbox{$\sigma_{\tilde{1}} \geq \sigma_{\tilde{2}} \geq \ldots \geq \sigma_{\tilde{r}}$} are the rank-ordered set of singular values. Likewise we construct accompanying orthogonal matrices,
309
+ \begin{eqnarray*}
310
+ \mathbf{V} & = & \left[\mathbf{\hat{v}_{\tilde{1}}}\;\mathbf{\hat{v}_{\tilde{2}}}\;\ldots\;\mathbf{\hat{v}_{\tilde{m}}} \right] \\
311
+ \mathbf{U} & = & \left[\mathbf{\hat{u}_{\tilde{1}}}\;\mathbf{\hat{u}_{\tilde{2}}}\;\ldots\;\mathbf{\hat{u}_{\tilde{n}}} \right]
312
+ \end{eqnarray*}
313
+ where we have appended an additional $(m-r)$ and $(n-r)$ orthonormal vectors to ``fill up'' the matrices for $\mathbf{V}$ and $\mathbf{U}$ respectively (i.e. to deal with degeneracy issues). Figure~\ref{diagram:svd-construction} provides a graphical representation of how all of the pieces fit together to form the matrix version of {\it SVD}.
314
+ $$\mathbf{XV = U} \Sigma$$
315
+ where each column of $\mathbf{V}$ and $\mathbf{U}$ perform the scalar version of the decomposition (Equation~\ref{eqn:value-svd}). Because $\mathbf{V}$ is orthogonal, we can multiply both sides by $\mathbf{V}^{-1}=\mathbf{V}^{T}$ to arrive at the final form of the decomposition.
316
+ \begin{equation}
317
+ \mathbf{X = U} \Sigma \mathbf{V}^T
318
+ \label{eqn:svd-matrix}
319
+ \end{equation}
320
+ Although derived without motivation, this decomposition is quite powerful. Equation~\ref{eqn:svd-matrix} states that {\it any} arbitrary matrix $\mathbf{X}$ can be converted to an orthogonal matrix, a diagonal matrix and another orthogonal matrix (or a rotation, a stretch and a second rotation). Making sense of Equation~\ref{eqn:svd-matrix} is the subject of the next section.
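+
+ As a quick numerical illustration of Equation~\ref{eqn:svd-matrix}, assuming an arbitrary $5 \times 3$ stand-in matrix, the factorization and the orthogonality of $\mathbf{U}$ and $\mathbf{V}$ can be checked directly:
+ \begin{verbatim}
+ % a minimal sketch: verify X = U*Sigma*V' for an arbitrary matrix
+ X = randn(5, 3);
+ [U, S, V] = svd(X);
+ norm(X - U*S*V')       % ~0 (machine precision)
+ norm(U'*U - eye(5))    % U is orthogonal
+ norm(V'*V - eye(3))    % V is orthogonal
+ \end{verbatim}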
321
+
322
+ \begin{figure*}[t]
323
+ \begin{quote}{\sf
324
+ The scalar form of SVD is expressed in equation~\ref{eqn:value-svd}.
325
+ \[\mathbf{X\hat{v}}_i = \sigma_i\mathbf{\hat{u}}_i\]
326
+ The mathematical intuition behind the construction of the matrix form is that we want to express all $n$ scalar equations in just one equation. It is easiest to understand this process graphically. Drawing the matrices of equation~\ref{eqn:value-svd} looks like the following.
327
+
328
+ \centerline{\includegraphics[width=0.45\textwidth]{svd-1.pdf}}
329
+
330
+ We can construct three new matrices $\mathbf{V}$, $\mathbf{U}$ and $\Sigma$. All singular values are first rank-ordered \mbox{$\sigma_{\tilde{1}} \geq \sigma_{\tilde{2}} \geq \ldots \geq \sigma_{\tilde{r}}$}, and the corresponding vectors are indexed in the same rank order. Each pair of associated vectors $\mathbf{\hat{v}_i}$ and $\mathbf{\hat{u}_i}$ is stacked in the $i^{th}$ column along their respective matrices. The corresponding singular value $\sigma_i$ is placed along the diagonal (the $ii^{th}$ position) of $\Sigma$. This generates the equation $\mathbf{XV=U}\Sigma$, which looks like the following.
331
+
332
+ \centerline{\includegraphics[width=0.75\textwidth]{svd-2.pdf}}
333
+
334
+ The matrices $\mathbf{V}$ and $\mathbf{U}$ are $m \times m$ and $n \times n$ matrices respectively and $\Sigma$ is a diagonal matrix with a few non-zero values (represented by the checkerboard) along its diagonal. Solving this single matrix equation solves all $n$ ``value'' form equations.
335
+ }\end{quote}
336
+ \caption{Construction of the matrix form of SVD (Equation~\ref{eqn:svd-matrix}) from the scalar form (Equation~\ref{eqn:value-svd}).}
337
+ \label{diagram:svd-construction}
338
+ \end{figure*}
339
+
340
+ \subsection{Interpreting SVD}
341
+ The final form of SVD is a concise but thick statement. Instead let us reinterpret Equation~\ref{eqn:value-svd} as
342
+ \[\mathbf{Xa} = k\mathbf{b}\]
343
+ where $\mathbf{a}$ and $\mathbf{b}$ are column vectors and $k$ is a scalar constant. The set \mbox{$\{\mathbf{\hat{v}_1},\mathbf{\hat{v}_2},\ldots,\mathbf{\hat{v}_m}\}$} is analogous to $\mathbf{a}$ and the set \mbox{$\{\mathbf{\hat{u}_1},\mathbf{\hat{u}_2},\ldots,\mathbf{\hat{u}_n}\}$} is analogous to $\mathbf{b}$. What is unique though is that \mbox{$\{\mathbf{\hat{v}_1},\mathbf{\hat{v}_2},\ldots,\mathbf{\hat{v}_m}\}$} and \mbox{$\{\mathbf{\hat{u}_1},\mathbf{\hat{u}_2},\ldots,\mathbf{\hat{u}_n}\}$} are orthonormal sets of vectors which {\it span} an $m$ or $n$ dimensional space, respectively. In particular, loosely speaking these sets appear to span all possible ``inputs'' (i.e. $\mathbf{a}$) and ``outputs'' (i.e. $\mathbf{b}$). Can we formalize the view that \mbox{$\{\mathbf{\hat{v}_1},\mathbf{\hat{v}_2},\ldots,\mathbf{\hat{v}_n}\}$} and \mbox{$\{\mathbf{\hat{u}_1},\mathbf{\hat{u}_2},\ldots,\mathbf{\hat{u}_n}\}$} span all possible ``inputs'' and ``outputs''?
344
+
345
+ We can manipulate Equation~\ref{eqn:svd-matrix} to make this fuzzy hypothesis more precise.
346
+ \begin{eqnarray*}
347
+ \mathbf{X} & = & \mathbf{U}\Sigma\mathbf{V}^{T} \\
348
+ \mathbf{U}^{T}\mathbf{X} & = & \Sigma\mathbf{V}^{T} \\
349
+ \mathbf{U}^{T}\mathbf{X} & = & \mathbf{Z}
350
+ \end{eqnarray*}
351
+ where we have defined $\mathbf{Z }\equiv \Sigma \mathbf{V}^{T}$. Note that the previous columns \mbox{$\{\mathbf{\hat{u}_1},\mathbf{\hat{u}_2},\ldots,\mathbf{\hat{u}_n}\}$} are now rows in $\mathbf{U}^T$. Comparing this equation to Equation~\ref{eqn:basis-transform}, \mbox{$\{\mathbf{\hat{u}_1},\mathbf{\hat{u}_2},\ldots,\mathbf{\hat{u}_n}\}$} perform the same role as \mbox{$\{\mathbf{\hat{p}_1},\mathbf{\hat{p}_2},\ldots,\mathbf{\hat{p}_m}\}$}. Hence, $\mathbf{U}^T$ is a {\it change of basis} from $\mathbf{X}$ to $\mathbf{Z}$. Just as before, we were transforming column vectors, we can again infer that we are transforming column vectors. The fact that the orthonormal basis $\mathbf{U}^T$ (or $\mathbf{P}$) transforms column vectors means that $\mathbf{U}^T$ is a basis that spans the columns of $\mathbf{X}$. Bases that span the columns are termed the {\it column space} of $\mathbf{X}$. The column space formalizes the notion of what are the possible ``outputs'' of any matrix.
352
+
353
+ There is a funny symmetry to SVD such that we can define a similar quantity - the {\it row space}.
354
+ \begin{eqnarray*}
355
+ \mathbf{XV} & = & \mathbf{U}\Sigma \\
356
+ (\mathbf{XV})^{T} & = & (\mathbf{U}\Sigma)^{T} \\
357
+ \mathbf{V}^{T}\mathbf{X}^{T} & = & \Sigma^{T}\mathbf{U}^{T} \\
358
+ \mathbf{V}^{T}\mathbf{X}^{T} & = & \mathbf{Z}
359
+ \end{eqnarray*}
360
+ where we have defined $\mathbf{Z} \equiv \Sigma^{T}\mathbf{U}^{T}$. Again the rows of $\mathbf{V}^T$ (or the columns of $\mathbf{V}$) are an orthonormal basis for transforming $\mathbf{X}^T$ into $\mathbf{Z}$. Because of the transpose on $\mathbf{X}$, it follows that $\mathbf{V}$ is an orthonormal basis spanning the {\it row space} of $\mathbf{X}$. The row space likewise formalizes the notion of what are possible ``inputs'' into an arbitrary matrix.
361
+
362
+ We are only scratching the surface for understanding the full implications of SVD. For the purposes of this tutorial though, we have enough information to understand how PCA will fall within this framework.
363
+
364
+ \subsection{SVD and PCA}
365
+ It is evident that PCA and SVD are intimately related. Let us return to the original $m \times n$ data matrix $\mathbf{X}$. We can define a new matrix $\mathbf{Y}$ as an $n \times m$ matrix.\footnote{$\mathbf{Y}$ is of the appropriate $n \times m$ dimensions laid out in the derivation of section 6.1. This is the reason for the ``flipping'' of dimensions in 6.1 and Figure 4.}
366
+ $$\mathbf{Y} \equiv \frac{1}{\sqrt{n}}\mathbf{X}^T$$
367
+ where each {\it column} of $\mathbf{Y}$ has zero mean. The choice of $\mathbf{Y}$ becomes clear by analyzing $\mathbf{Y}^T\mathbf{Y}$.
368
+ \begin{eqnarray*}
369
+ \mathbf{Y}^T\mathbf{Y} & = & \left(\frac{1}{\sqrt{n}}\mathbf{X}^T\right)^{T}\left(\frac{1}{\sqrt{n}}\mathbf{X}^T\right) \\
370
+ & = & \frac{1}{n}\mathbf{XX}^{T} \\
371
+ \mathbf{Y}^{T}\mathbf{Y} & = & \mathbf{C_X}
372
+ \end{eqnarray*}
373
+ By construction $\mathbf{Y}^T\mathbf{Y}$ equals the covariance matrix of $\mathbf{X}$. From section 5 we know that the principal components of $\mathbf{X}$ are the eigenvectors of $\mathbf{C_X}$. If we calculate the SVD of $\mathbf{Y}$, the columns of matrix $\mathbf{V}$ contain the eigenvectors of $\mathbf{Y}^T\mathbf{Y} = \mathbf{C_X}$. {\it Therefore, the columns of $\mathbf{V}$ are the principal components of $\mathbf{X}$}. This second algorithm is encapsulated in Matlab code included in Appendix B.
374
+
375
+ What does this mean? $\mathbf{V}$ spans the row space of $\mathbf{Y} \equiv \frac{1}{\sqrt{n}}\mathbf{X}^T$. Therefore, $\mathbf{V}$ must also span the column space of $\frac{1}{\sqrt{n}}\mathbf{X}$. We can conclude that finding the principal components amounts to finding an orthonormal basis that spans the {\it column space} of $\mathbf{X}$.\footnote{If the final goal is to find an orthonormal basis for the column space of $\mathbf{X}$, then we can calculate it directly without constructing $\mathbf{Y}$. By symmetry the columns of $\mathbf{U}$ produced by the SVD of $\frac{1}{\sqrt{n}}\mathbf{X}$ must also be the principal components.}
376
+
377
+ \section{Discussion}
378
+
379
+ \begin{figure}
380
+ \framebox{\parbox{3.2in}{
381
+ {\bf Quick Summary of PCA}
382
+ {\sf
383
+ \begin{enumerate}
384
+ \item Organize data as an $m \times n$ matrix, where $m$ is the number of measurement types and $n$ is the number of samples.
385
+ \item Subtract off the mean for each measurement type.
386
+ \item Calculate the SVD or the eigenvectors of the covariance.
387
+ \end{enumerate}
388
+ }}}
389
+ \caption{A step-by-step instruction list on how to perform principal component analysis}
390
+ \label{fig:summary}
391
+ \end{figure}
392
+
393
+ Principal component analysis (PCA) has widespread applications because it reveals simple underlying structures in complex data sets using analytical solutions from linear algebra. Figure~\ref{fig:summary} provides a brief summary for implementing PCA.
394
+
395
+ A primary benefit of PCA arises from quantifying the importance of each dimension for describing the variability of a data set. In particular, the measurement of the variance along each principal component provides a means for comparing the relative importance of each dimension. An implicit hope behind employing this method is that the variance along a small number of principal components (i.e. less than the number of measurement types) provides a reasonable characterization of the complete data set. This statement is the precise intuition behind any method of {\em dimensional reduction} -- a vast arena of active research. In the example of the spring, PCA identifies that a majority of variation exists along a single dimension (the direction of motion $\hat{\mathbf{x}}$), even though 6 dimensions are recorded.
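+
+ A rough sketch of this idea, assuming the {\tt pca1} routine listed in Appendix B has already been run on a data set, is to keep only the top $k$ components and ask how much of the total variance they retain:
+ \begin{verbatim}
+ % a rough sketch, assuming [signals,PC,V] = pca1(data) has been run
+ k = 2;                                  % number of components to keep
+ reduced   = signals(1:k, :);            % k x n reduced representation
+ approx    = PC(:, 1:k) * reduced;       % reconstruction (mean-subtracted)
+ explained = sum(V(1:k)) / sum(V);       % fraction of variance retained
+ \end{verbatim}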
396
+
397
+ Although PCA ``works'' on a multitude of real world problems, any diligent scientist or engineer must ask {\em when does PCA fail?} Before we answer this question, let us note a remarkable feature of this algorithm. PCA is completely {\em non-parametric}: any data set can be plugged in and an answer comes out, requiring no parameters to tweak and no regard for how the data was recorded. From one perspective, the fact that PCA is non-parametric (or plug-and-play) can be considered a positive feature because the answer is unique and independent of the user. From another perspective the fact that PCA is agnostic to the source of the data is also a weakness. For instance, consider tracking a person on a ferris wheel in Figure~\ref{fig:failures}a. The data points can be cleanly described by a single variable, the precession angle of the wheel $\theta$, however PCA would fail to recover this variable.
398
+
399
+ \subsection{Limits and Statistics of Dimensional Reduction}
400
+
401
+ \begin{figure}[t]
402
+ \includegraphics[width=0.47\textwidth]{PCA-Failure.pdf}
403
+ \caption{Example of when PCA fails (red lines). (a) Tracking a person on a ferris wheel (black dots). All dynamics can be described by the phase of the wheel $\theta$, a non-linear combination of the naive basis. (b) In this example data set, non-Gaussian distributed data and non-orthogonal axes cause PCA to fail. The axes with the largest variance do not correspond to the appropriate answer.}
404
+ \label{fig:failures}
405
+ \end{figure}
406
+
407
+ A deeper appreciation of the limits of PCA requires some consideration about the underlying assumptions and in tandem, a more rigorous description of the source of data. Generally speaking, the primary motivation behind this method is to decorrelate the data set, i.e. remove second-order dependencies. The manner of approaching this goal is loosely akin to how one might explore a town in the Western United States: drive down the longest road running through the town. When one sees another big road, turn left or right and drive down this road, and so forth. In this analogy, PCA requires that each new road explored must be perpendicular to the previous, but clearly this requirement is overly stringent and the data (or town) might be arranged along non-orthogonal axes, such as Figure~\ref{fig:failures}b. Figure~\ref{fig:failures} provides two examples of this type of data where PCA provides unsatisfying results.
408
+
409
+ To address these problems, we must define what we consider optimal results. In the context of dimensional reduction, one measure of success is the degree to which a reduced representation can predict the original data. In statistical terms, we must define an error function (or loss function). It can be proved that under a common loss function, mean squared error (i.e. $L_2$ norm), PCA provides the optimal reduced representation of the data. This means that selecting orthogonal directions for principal components is the best solution to predicting the original data. Given the examples of Figure~\ref{fig:failures}, how could this statement be true? Our intuitions from Figure~\ref{fig:failures} suggest that this result is somehow misleading.
410
+
411
+ The solution to this paradox lies in the goal we selected for the analysis. The goal of the analysis is to decorrelate the data, or said in other terms, the goal is to remove second-order dependencies in the data. In the data sets of Figure~\ref{fig:failures}, higher order dependencies exist between the variables. Therefore, removing second-order dependencies is insufficient at revealing all structure in the data.\footnote{When are second order dependencies sufficient for revealing all dependencies in a data set? This statistical condition is met when the first and second order statistics are {\em sufficient statistics} of the data. This occurs, for instance, when a data set is Gaussian distributed.}
412
+
413
+ Multiple solutions exist for removing higher-order dependencies. For instance, if prior knowledge about the problem is available, then a nonlinearity (i.e. a {\em kernel}) might be applied to transform the data into a more appropriate naive basis. For instance, in Figure~\ref{fig:failures}a, one might examine the polar coordinate representation of the data. This parametric approach is often termed {\it kernel PCA}.
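+
+ For the ferris wheel example, a toy sketch of this idea, assuming hypothetical positions measured relative to the center of the wheel, is to re-express the data in polar coordinates before applying PCA, so that the phase $\theta$ becomes an explicit variable:
+ \begin{verbatim}
+ % a toy sketch, assuming positions on a unit-radius wheel
+ t = linspace(0, 4*pi, 500);      % hypothetical time samples
+ x = cos(t);  y = sin(t);         % tracked (x,y) positions
+ theta = atan2(y, x);             % phase, wrapped to (-pi, pi]
+ r     = sqrt(x.^2 + y.^2);       % radius (nearly constant)
+ data  = [theta; r];              % a "kernel"-style re-expression
+ % applying PCA (e.g. pca1 from Appendix B) to data now isolates theta
+ \end{verbatim}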
414
+
415
+ Another direction is to impose more general statistical definitions of dependency within a data set, e.g. requiring that data along reduced dimensions be {\em statistically independent}. This class of algorithms, termed {\em independent component analysis} (ICA), has been demonstrated to succeed in many domains where PCA fails. ICA has been applied to many areas of signal and image processing, but suffers from the fact that solutions are (sometimes) difficult to compute.
416
+
417
+ Writing this paper has been an extremely instructional experience for me. I hope that this paper helps to demystify the motivation and results of PCA, and the underlying assumptions behind this important analysis technique. Please send me a note if this has been useful to you as it inspires me to keep writing!
418
+
419
+ \appendix
420
+
421
+ \section{Linear Algebra}
422
+ This section proves a few unapparent theorems in linear algebra, which are crucial to this paper. \\
423
+
424
+ {\bf1. The inverse of an orthogonal matrix is its transpose.} \\
425
+
426
+ Let $\mathbf{A}$ be an $n \times n$ orthogonal matrix where $\mathbf{a_i}$ is the $i^{th}$ column vector. The $ij^{th}$ element of $\mathbf{A}^{T}\mathbf{A}$ is
427
+ $$(\mathbf{A}^{T}\mathbf{A})_{ij} = \mathbf{a_{i}}^{T}\mathbf{a_{j}} = \left\{
428
+ \begin{array}{ll}
429
+ 1 & if \;\;i=j \\
430
+ 0 & otherwise \\
431
+ \end{array} \right.$$
432
+ Therefore, because $\mathbf{A}^{T}\mathbf{A}=\mathbf{I}$, it follows that $\mathbf{A}^{-1}=\mathbf{A}^{T}$. \\
433
+
434
+ {\bf 2. For any matrix $\mathbf{A}$, $\mathbf{A}^{T}\mathbf{A}$ and $\mathbf{AA}^{T}$ are symmetric.} \\
435
+ \begin{eqnarray*}
436
+ (\mathbf{AA}^{T})^T & = & \mathbf{A}^{TT}\mathbf{A}^{T} = \mathbf{AA}^{T} \\
437
+ (\mathbf{A}^{T}\mathbf{A})^T & = & \mathbf{A}^{T}\mathbf{A}^{TT} = \mathbf{A}^{T}\mathbf{A}
438
+ \end{eqnarray*}
439
+ {\bf 3. A matrix is symmetric if and only if it is orthogonally diagonalizable.} \\
440
+
441
+ Because this statement is bi-directional, it requires a two-part ``if-and-only-if'' proof. One needs to prove the forward and the backwards ``if-then'' cases.
442
+
443
+ Let us start with the forward case. If $\mathbf{A}$ is orthogonally diagonalizable, then $\mathbf{A}$ is a symmetric matrix. By hypothesis, orthogonally diagonalizable means that there exists some $\mathbf{E}$ such that $\mathbf{A}=\mathbf{EDE}^{T}$, where $\mathbf{D}$ is a diagonal matrix and $\mathbf{E}$ is some special matrix which diagonalizes $\mathbf{A}$. Let us compute $\mathbf{A}^{T}$.
444
+ $$\mathbf{A}^T = (\mathbf{EDE}^{T})^{T} = \mathbf{E}^{TT}\mathbf{D}^{T}\mathbf{E}^{T} = \mathbf{EDE}^{T} = \mathbf{A}$$
445
+
446
+ Evidently, if $\mathbf{A}$ is orthogonally diagonalizable, it must also be symmetric.
447
+
448
+ The reverse case is more involved and less clean so it will be left to the reader. In lieu of this, hopefully the ``forward'' case is suggestive if not somewhat convincing. \\
449
+
450
+ {\bf 4. A symmetric matrix is diagonalized by a matrix of its orthonormal eigenvectors.} \\
451
+
452
+ Let $\mathbf{A}$ be a square \mbox{$n \times n$} symmetric matrix with associated eigenvectors $\{\mathbf{e_1, e_2, \ldots, e_n} \}$. Let $ \mathbf{E}=[\mathbf{e_1\;e_2\;\ldots\;e_n]}$ where the $i^{th}$ column of $\mathbf{E}$ is the eigenvector $\mathbf{e_i}$. This theorem asserts that there exists a diagonal matrix $\mathbf{D}$ such that $\mathbf{A}=\mathbf{EDE}^{T}$.
453
+
454
+ This proof is in two parts. In the first part, we see that any matrix can be orthogonally diagonalized if and only if that matrix's eigenvectors are all linearly independent. In the second part of the proof, we see that a symmetric matrix has the special property that all of its eigenvectors are not just linearly independent but also orthogonal, thus completing our proof.
455
+
456
+ In the first part of the proof, let $\mathbf{A}$ be just some matrix, not necessarily symmetric, and let it have independent eigenvectors (i.e. no degeneracy). Furthermore, let $ \mathbf{E}=[\mathbf{e_1\;e_2\;\ldots\;e_n]}$ be the matrix of eigenvectors placed in the columns. Let $\mathbf{D}$ be a diagonal matrix where the $i^{th}$ eigenvalue is placed in the $ii^{th}$ position.
457
+
458
+ We will now show that $\mathbf{AE=ED}$. We can examine the columns of the right-hand and left-hand sides of the equation.
459
+ \begin{displaymath}
460
+ \begin{array}{rrcl}
461
+ \mathsf{Left\;hand\;side:} & \mathbf{AE} & = & [\mathbf{Ae_1}\;\mathbf{Ae_2}\;\ldots\;\mathbf{Ae_n}] \\
462
+ \mathsf{Right\;hand\;side:} & \mathbf{ED} & = & [\lambda_{1}\mathbf{e_1}\:\lambda_{2}\mathbf{e_2}\:\ldots\:\lambda_{n}\mathbf{e_n}]
463
+ \end{array}
464
+ \end{displaymath}
465
+ Evidently, if $\mathbf{AE=ED}$ then $\mathbf{Ae_i}=\lambda_{i}\mathbf{e_i}$ for all $i$. This equation is the definition of the eigenvalue equation. Therefore, it must be that $\mathbf{AE=ED}$. A little rearrangement provides $\mathbf{A=EDE}^{-1}$, completing the first part of the proof.
466
+
467
+ For the second part of the proof, we show that a symmetric matrix always has orthogonal eigenvectors. For some symmetric matrix, let $\lambda_{1}$ and $\lambda_{2}$ be distinct eigenvalues for eigenvectors $\mathbf{e_1}$ and $\mathbf{e_2}$.
468
+ \begin{eqnarray*}
469
+ \lambda_1\mathbf{e_1}\cdot\mathbf{e_2} & = & (\lambda_1 \mathbf{e_1})^{T} \mathbf{e_2} \\
470
+ & = & (\mathbf{Ae_1})^{T} \mathbf{e_2} \\
471
+ & = & \mathbf{e_1}^{T}\mathbf{A}^{T} \mathbf{e_2} \\
472
+ & = & \mathbf{e_1}^{T}\mathbf{A} \mathbf{e_2} \\
473
+ & = & \mathbf{e_1}^{T} (\lambda_{2}\mathbf{e_2}) \\
474
+ \lambda_1\mathbf{e_1}\cdot\mathbf{e_2} & = & \lambda_2\mathbf{e_1}\cdot\mathbf{e_2}
475
+ \end{eqnarray*}
476
+ From the last relation it follows that \mbox{$ (\lambda_1-\lambda_2)\mathbf{e_1}\cdot\mathbf{e_2} = 0$}. Since we have assumed that the eigenvalues are distinct, it must be the case that $\mathbf{e_1}\cdot\mathbf{e_2} = 0$. Therefore, the eigenvectors of a symmetric matrix are orthogonal.
477
+
478
+ Let us back up now to our original postulate that $\mathbf{A}$ is a symmetric matrix. By the second part of the proof, we know that the eigenvectors of $\mathbf{A}$ are all orthonormal (we choose the eigenvectors to be normalized). This means that $\mathbf{E}$ is an orthogonal matrix so by theorem 1, $\mathbf{E}^T=\mathbf{E}^{-1}$ and we can rewrite the final result.
479
+ \[\mathbf{A=EDE}^T\].
480
+ Thus, a symmetric matrix is diagonalized by a matrix of its eigenvectors.\\
481
+
482
+ {\bf 5. For any arbitrary $m \times n$ matrix $\mathbf{X}$, the symmetric matrix $\mathbf{X}^{T}\mathbf{X}$ has a set of orthonormal eigenvectors of $\{\mathbf{\hat{v}_1,\hat{v}_2,\ldots,\hat{v}_n}\}$ and a set of associated eigenvalues $\{\mathbf{\lambda_1,\lambda_2,\ldots,\lambda_n}\}$. The set of vectors $\{\mathbf{X}\mathbf{\hat{v}_1},\mathbf{X}\mathbf{\hat{v}_2},\ldots,\mathbf{X}\mathbf{\hat{v}_n}\}$ then form an orthogonal basis, where each vector $\mathbf{X}\mathbf{\hat{v}_i}$ is of length $\sqrt{\lambda_i}$.} \\
483
+
484
+ All of these properties arise from the dot product of any two vectors from this set.
485
+ \begin{eqnarray*}
486
+ (\mathbf{X\hat{v}_i})\cdot(\mathbf{X\hat{v}_j}) & = & (\mathbf{X\hat{v}_i})^{T}(\mathbf{X\hat{v}_j}) \\
487
+ & = & \mathbf{\hat{v}_i}^{T}\mathbf{X}^{T}\mathbf{X}\mathbf{\hat{v}_j} \\
488
+ & = & \mathbf{\hat{v}_i}^{T}(\lambda_j\mathbf{\hat{v}_j}) \\
489
+ & = & \lambda_j\mathbf{\hat{v}_i}\cdot\mathbf{\hat{v}_j} \\
490
+ (\mathbf{X\hat{v}_i})\cdot(\mathbf{X\hat{v}_j}) & = & \lambda_j\delta_{ij} \\
491
+ \end{eqnarray*}
492
+ The last relation arises because the set of eigenvectors of $\mathbf{X}^{T}\mathbf{X}$ is orthonormal, resulting in the Kronecker delta. In simpler terms, the last relation states:
493
+ \begin{displaymath}
494
+ (\mathbf{X\hat{v}_i})\cdot(\mathbf{X\hat{v}_j}) = \left\{
495
+ \begin{array}{ll}
496
+ \lambda_j & i=j \\
497
+ 0 & i \neq j \\
498
+ \end{array} \right.
499
+ \end{displaymath}
500
+ This equation states that any two vectors in the set are orthogonal.
501
+
502
+ The second property arises from the above equation by realizing that the length squared of each vector is defined as:
503
+ $$\|\mathbf{X\hat{v}_i}\|^2 = (\mathbf{X\hat{v}_i})\cdot(\mathbf{X\hat{v}_i}) = \lambda_i$$
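+ Both properties are easy to check numerically; a minimal Matlab sketch (illustrative only, with an arbitrary random matrix standing in for $\mathbf{X}$):
+ \begin{verbatim}
+ X = randn(5,3);          % an arbitrary m x n matrix
+ [V,D] = eig(X'*X);       % orthonormal eigenvectors and eigenvalues of X'X
+ B = X*V;                 % the set {X v_1, ..., X v_n}
+ disp(B'*B)               % approximately diag(lambda): the columns of B are
+                          % mutually orthogonal with squared lengths lambda_i
+ \end{verbatim}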
504
+
505
+ \section{Code}
506
+
507
+ This code is written for Matlab 6.5 (Release 13) from Mathworks\footnote{{\tt http://www.mathworks.com}}. The code is not computationally efficient but explanatory (terse comments begin with a \%).
508
+ \linebreak\linebreak
509
+ This first version follows Section 5 by examining the covariance of the data set.
510
+
511
+ \begin{verbatim}
512
+ function [signals,PC,V] = pca1(data)
+ % PCA1: Perform PCA using the covariance matrix.
+ %   data    - MxN matrix of input data (M dimensions, N trials)
+ %   signals - MxN matrix of projected data
+ %   PC      - each column is a principal component
+ %   V       - Mx1 vector of variances
513
+
514
+
515
+ [M,N] = size(data);
516
+
517
+ mn = mean(data,2);
518
+ data = data - repmat(mn,1,N);      % subtract off the mean of each dimension
519
+
520
+ covariance = 1 / (N-1) * data * data';   % calculate the covariance matrix
521
+
522
+ [PC, V] = eig(covariance);         % find the eigenvectors and eigenvalues
523
+
524
+ V = diag(V);
525
+
526
+ [junk, rindices] = sort(-1*V);     % sort the variances in decreasing order
527
+ V = V(rindices);
528
+ PC = PC(:,rindices);
529
+
530
+ signals = PC' * data;              % project the original data set
531
+ \end{verbatim}
532
+ This second version follows Section 6, computing PCA through the SVD.
533
+
534
+ \begin{verbatim}
535
+ function [signals,PC,V] = pca2(data)
+ % PCA2: Perform PCA using the SVD.
+ %   data    - MxN matrix of input data (M dimensions, N trials)
+ %   signals - MxN matrix of projected data
+ %   PC      - each column is a principal component
+ %   V       - Mx1 vector of variances
536
+
537
+
538
+ [M,N] = size(data);
539
+
540
+ mn = mean(data,2);
541
+ data = data - repmat(mn,1,N);
542
+
543
+ Y = data' / sqrt(N-1);             % construct the scaled data matrix Y
544
+
545
+ [u,S,PC] = svd(Y);                 % SVD: the columns of PC are the principal components
546
+
547
+ S = diag(S);
548
+ V = S .* S;                        % the variances are the squared singular values
549
+
550
+ signals = PC' * data;              % project the original data set
551
+ \end{verbatim}
552
+
553
+ \end{document}
papers/1405/1405.0174.tex ADDED
@@ -0,0 +1,246 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass{llncs}
2
+ \usepackage{llncsdoc}
3
+ \usepackage{graphicx,epsfig,subfig}
4
+ \usepackage{url}
5
+
6
+
7
+ \usepackage{fancyhdr}
8
+ \pagestyle{fancy}
9
+ \rhead{VSCAN: Video Summarization using Density-based Spatial Clustering }
10
+ \lhead{722}
11
+
12
+
13
+
14
+
15
+ \begin{document}
16
+ \title{VSCAN: An Enhanced Video Summarization using Density-based Spatial Clustering}
17
+ \author{Karim M. Mohamed \and Mohamed A. Ismail \and Nagia M. Ghanem }
18
+ \institute{Computer and Systems Engineering Department\\Faculty of Engineering, Alexandria University\\ Alexandria, Egypt}
19
+ \maketitle
20
+
21
+ \begin{abstract}
22
+ In this paper, we present VSCAN, a novel approach for generating static video summaries. This approach is based on a modified DBSCAN clustering algorithm to summarize the video content utilizing both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that depends on color and texture features. Video Summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches.
23
+
24
+ \keywords{Video Summarization, Color and Texture, Clustering, Evaluation Method}
25
+
26
+ \end{abstract}
27
+
28
+ \section{Introduction }
29
+ The revolution in digital video has been driven by the rapid development of computer infrastructure in various areas such as improved processing power, enhanced and cheaper storage devices, and faster networks. This revolution has brought many new applications and, as a consequence, research into new technologies that aim at improving the effectiveness and efficiency of video acquisition, archiving, cataloguing and indexing, as well as increasing the usability of stored videos. This, in turn, calls for efficient management of video data, such as video summarization.
30
+
31
+ A video summary is defined as a sequence of still pictures that represent the content of a video in such a way that the respective target group is rapidly provided with concise information about the content, while the essential message of the original video is preserved \cite{pfeiffer1996abstracting}.
32
+
33
+
34
+
35
+ Over the past years, various approaches and techniques have been proposed for the summarization of video content. However, these approaches have several drawbacks. First, most video summarization approaches that achieve a relatively high quality are based on a single visual descriptor, such as the color of the video frames, while other descriptors such as texture are not considered. Second, the clustering algorithms used in current video summarization techniques cannot detect noise frames automatically; instead, some of these techniques have to detect noise frames using separate methods, which requires additional computation. Third, current video summarization approaches depend on special input parameters that may not be suitable for all cases. For example, many approaches utilize the k-means partitioning-based clustering algorithm, which requires the number of clusters as an input, although the number of clusters is not related to the perceptual content of the automatic video summary. To overcome this problem, an additional stage is required to filter key frames, which increases the complexity of the video summarization process and makes using the clustering algorithm inefficient. Finally, current evaluation methods depend only on color features for comparing different summaries and do not consider other features such as texture, which gives a less perceptual assessment of the quality of video summaries.
36
+
37
+ In this paper, we present VSCAN, an enhanced approach for generating static video summaries that operates on the whole video clip. It relies on clustering color and texture features extracted from the video frames using a modified DBSCAN \cite{ester1996density} algorithm, which overcomes the drawbacks of the other approaches. Also, we introduce an enhanced evaluation method that depends on color and texture features. The VSCAN approach is evaluated using the enhanced evaluation method, and the experimental results show that VSCAN produces video summaries with higher quality than those generated by other approaches.
38
+
39
+
40
+ The rest of this paper is organized as follows. Section 2 introduces some related work. Section 3 presents VSCAN approach and shows how to apply it to summarize a video sequence. Section 4 illustrates the evaluation method and reports the results of our experiments. Finally, we offer our conclusions and directions for future work in Section 5.
41
+
42
+ \section{Related Work}
43
+ A comprehensive review of video summarization approaches can be found in \cite{truong2007video}. Some of the main approaches and techniques related to static video summarization which can be found in the literature are briefly discussed next.
44
+
45
+ In \cite{mundur2006keyframe}, an approach based on clustering the video frames using the Delaunay Triangulation (DT) is developed. The first step in this approach is pre-sampling the frames of the input video. Then, the video frames are represented by a color histogram in the HSV color space and Principal Component Analysis (PCA) is applied on the color feature matrix to reduce its dimensionality. After that, the Delaunay diagram is built and clusters are formed by separating edges in the Delaunay diagram. Finally, for each cluster, the frame that is closest to its center is selected as the key frame.
46
+
47
+ In \cite{furini2010stimo}, an approach called STIMO (STIll and MOving Video Storyboard) is introduced. This approach is designed to produce on-the-fly video storyboards and it is composed of three phases. In the first phase, the video frames are pre-sampled and then feature vectors are extracted from the selected video frames by computing a color histogram in the HSV color space. In the second phase, a clustering method based on the Furthest-Point-First (FPF) algorithm is applied. To estimate the number of clusters, the pairwise distance of consecutive frames is computed using Generalized Jaccard Distance (GJD). Finally, a post-processing step is performed for removing noise video frames.
48
+
49
+ In \cite{de2011vsumm}, an approach called VSUMM (Video SUMMarization) is presented. In the first step, the video frames are pre-sampled by selecting one frame per second. In the second step, the color features of video frames are extracted from Hue component only in the HSV color space. In the third step, the meaningless frames are eliminated. In the fourth step, the frames are clustered using k-means algorithm where the number of clusters is estimated by computing the pairwise Euclidean distances between video frames and a key frame is extracted from each cluster. Finally, another extra step occurs in which the key frames are compared among themselves through color histogram to eliminate those similar key frames in the produced summaries.
50
+
51
+ \section{VSCAN Approach}
52
+ Fig.~\ref{fig:image1} shows the steps of the VSCAN approach to produce static video summaries. First, the original video is pre-sampled (step 1). Second, color features are extracted using a color histogram in the HSV color space (step 2). Third, texture features are extracted using a two-dimensional Haar wavelet transform in the HSV color space (step 3). In step 4, video frames are clustered by a modified DBSCAN clustering algorithm. Then, in step 5, the key frames are selected. Finally, the extracted key frames are arranged in the original order of appearance in the video to facilitate the visual understanding of the result. These steps are explained in more detail in the following subsections.
53
+
54
+
55
+ \subsection{Video Frames Pre-sampling}
56
+ The first step towards video summarization is pre-sampling the original video, which aims to reduce the number of frames to be processed. Choosing a proper sampling rate is very important: a sampling rate that is too low may discard important content and lead to poor video summaries, while a high sampling rate increases the video summarization time. In the VSCAN approach, the sampling rate is selected to be one frame per second. So, for a video of duration one minute and a frame rate of 30 fps (i.e., 1800 frames), the number of extracted frames is 60 frames.
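+ A minimal Matlab sketch of this pre-sampling step (illustrative only; the file name is hypothetical and any video reader that exposes the frame rate would do):
+ \begin{verbatim}
+ v = VideoReader('input.mpg');            % hypothetical input video
+ fps = round(v.FrameRate);                % e.g. 30 frames per second
+ samples = {};  k = 0;
+ while hasFrame(v)
+     f = readFrame(v);  k = k + 1;
+     if mod(k-1, fps) == 0                % keep one frame per second
+         samples{end+1} = f;
+     end
+ end
+ \end{verbatim}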
57
+
58
+ \begin{figure}
59
+ \centering
60
+ \includegraphics[width=4.5in,height=2.5in]{VSCANProcess.jpg}
61
+ \caption{VSCAN Approach}
62
+ \label{fig:image1}
63
+ \end{figure}
64
+
65
+ \subsection{Color Features Extraction}
66
+ In VSCAN, a color histogram \cite{swain1991color} is used to describe the visual content of the video frames. In video summarization systems, the color space selected for histogram extraction should reflect the way in which humans perceive color. This can be achieved by using user-oriented color spaces, as they employ the characteristics used by humans to distinguish one color from another \cite{stehling2002techniques,de2011vsumm}. One popular choice is the HSV color space, which was developed to provide an intuitive representation of color and to be close to the way in which humans perceive color \cite{de2011vsumm}.
67
+
68
+ The color histogram used in VSCAN is computed from the HSV color space using 32 bins of H, 4 bins of S, and 2 bins of V. This quantization of the color histogram is established through experimental tests and aims at reducing the amount of data without losing important information.
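+ A minimal Matlab sketch of this quantization for a single RGB frame (illustrative only; the rounding and normalization choices below are ours, not taken verbatim from the implementation):
+ \begin{verbatim}
+ hsv = rgb2hsv(double(frame)/255);     % frame: RGB image with values 0..255
+ h = min(floor(hsv(:,:,1)*32), 31);    % 32 bins of H
+ s = min(floor(hsv(:,:,2)*4),  3);     %  4 bins of S
+ v = min(floor(hsv(:,:,3)*2),  1);     %  2 bins of V
+ idx  = h*8 + s*2 + v + 1;             % linear bin index in 1..256
+ hist = accumarray(idx(:), 1, [256 1]);
+ hist = hist / sum(hist);              % normalized 32x4x2 = 256-bin histogram
+ \end{verbatim}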
69
+
70
+ \subsection{Texture Features Extraction}
71
+ Texture is a powerful low-level feature for representing images. It can be defined as an attribute representing the spatial arrangement of the pixels in a region or image \cite{IEEEStandardGlossary}.
72
+
73
+ Discrete Wavelet Transformation (DWT) is commonly used to extract texture features of an image by transforming it from spatial domain into frequency domain \cite{smith1994transform}. Wavelet transforms extract information from signal at different scales by passing the signal through low pass and high pass filters. Also, Wavelets provide multi-resolution capability and good energy compaction. In addition, they are robust with respect to color intensity shifts and can capture both texture and shape information efficiently \cite{singha2012signal}.
74
+
75
+ In VSCAN, the Discrete Haar Wavelet Transform \cite{stankovic2003haar} is used to compute feature vectors as a texture representation for video frames, because it is fast to compute and has been found to perform well in practice. Each video frame is divided into color channels and the Discrete Haar Wavelet Transform is applied on each channel. It is well known that the RGB color space is not suitable to reflect human perception of color
76
+ \cite{de2011vsumm,girgensohn2001keyframe,liu2004shot}. Instead of using RGB, the video frame image is converted to the HSV color space; moreover, the video frame's size is reduced to $64 \times 64$ pixels in order to reduce computation without losing significant image information. The next step is to apply a two-dimensional Haar wavelet transform on the reduced HSV image data with decomposition level 3. Finally, the texture features of the video frames are extracted from the approximation coefficients of the Haar wavelet transform.
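+ A minimal Matlab sketch of this texture extraction (illustrative only; it assumes the Image Processing and Wavelet Toolboxes are available, and leaves any further normalization of the feature vector unspecified):
+ \begin{verbatim}
+ hsvImg = rgb2hsv(double(frame)/255);
+ small  = imresize(hsvImg, [64 64]);              % reduced 64 x 64 HSV image
+ tex = [];
+ for c = 1:3                                      % H, S and V channels
+     [C,S] = wavedec2(small(:,:,c), 3, 'haar');   % 3-level Haar decomposition
+     A = appcoef2(C, S, 'haar', 3);               % approximation coefficients
+     tex = [tex; A(:)];                           % stack into one feature vector
+ end
+ \end{verbatim}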
77
+
78
+ \subsection{Video Frames Clustering}
79
+ DBSCAN (Density-Based Spatial Clustering of Applications with Noise) \cite{ester1996density} is a density-based algorithm which discovers clusters of arbitrary shape using a minimal number of input parameters. The input parameters required by this algorithm are the neighborhood radius (Eps) and the minimum number of points required inside a neighborhood (Minpts) \cite{ester1996density}.
80
+
81
+ Using the DBSCAN clustering algorithm has many advantages. First, it does not require specifying the number of clusters in the data a priori, as opposed to partitioning algorithms like k-means \cite{parimala2011survey}. Second, DBSCAN can find arbitrarily shaped clusters. Third, it has a notion of noise. Finally, DBSCAN requires a minimal number of input parameters.
82
+
83
+ In VSCAN, we apply a dual feature space DBSCAN algorithm. The proposed clustering algorithm used in VSCAN aims at adapting and modifying DBSCAN to be used by a video summarization system that utilizes both color and texture features. Instead of accepting only one input dataset as in the original DBSCAN, the clustering algorithm in VSCAN accepts both color and texture features of video frames as input datasets, with the Bhattacharyya distance \cite{kailath1967divergence} as a dissimilarity measure. The Bhattacharyya distance between two discrete distributions P and Q of size n is defined as:
84
+ \begin{equation}
85
+ BhatDist(P,Q) = \sum\limits_{i=1}^{n}\sqrt{P_{i}\,Q_{i}}
86
+ \end{equation}
87
+
88
+ Selecting the Bhattacharyya distance as a dissimilarity measure has many advantages \cite{aherne1998bhattacharyya}. First, the Bhattacharyya measure has a self-consistency property: by using the Bhattacharyya measure all Poisson errors are forced to be constant, therefore ensuring that the minimum distance between two observation points is indeed a straight line \cite{aherne1998bhattacharyya}. The second advantage is the independence between the Bhattacharyya measure and the histogram bin widths: for the Bhattacharyya metric the contribution to the measure is the same irrespective of how the quantities are divided between bins, therefore it is unaffected by the distribution of data across the histogram \cite{aherne1998bhattacharyya}. The third advantage is that the Bhattacharyya measure is dimensionless, as it is not affected by the measurement scale used \cite{aherne1998bhattacharyya}.
89
+
90
+ The value of the Bhattacharyya distance between the feature vectors of two frames p and q lies between 0 and 1, where 0 means that the two frames are completely dissimilar and 1 means that they are identical.
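+ A minimal Matlab sketch of this measure (illustrative only; it assumes P and Q are feature vectors that have been normalized to sum to one, as with the histograms above):
+ \begin{verbatim}
+ function d = bhat_dist(P, Q)
+ % Bhattacharyya measure between two normalized feature vectors.
+ d = sum(sqrt(P(:) .* Q(:)));     % 1 = identical, 0 = completely dissimilar
+ end
+ \end{verbatim}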
91
+
92
+ The original definitions of the DBSCAN algorithm can be found in \cite{ester1996density}. The following are the definitions of the proposed video clustering algorithm used by VSCAN.
93
+
94
+
95
+
96
+ \begin{definition}
97
+ \textbf{(CD- Color database)} a database containing color features extracted from video frames.
98
+ \end{definition}
99
+
100
+
101
+
102
+ \begin{definition}
103
+ \textbf{(TD- Texture database)} a database containing texture features extracted from video frames.
104
+ \end{definition}
105
+
106
+
107
+
108
+ \begin{definition}
109
+ \textbf{(EpsColor-color-based similarity of video frame)}\\EpsColor-color-based similarity of video frame p, denoted by $S_{EpsColor}(p) $ is defined by:
110
+ $
111
+ S_{EpsColor}(p) = \left\{q \in CD\vert BhatDist(p,q) \geq EpsColor \right\},
112
+ $
113
+ where BhatDist is Bhattacharyya distance.
114
+ \end{definition}
115
+
116
+
117
+
118
+ \begin{definition}
119
+ \textbf{(EpsTexture-texture-based similarity of video frame)}\\EpsTexture-texture-based similarity of video frame p, denoted by $S_{EpsTexture}(p)$ is defined by:
120
+ $
121
+ S_{EpsTexture}(p) = \left\{q \in TD\vert BhatDist(p,q) \geq EpsTexture \right\},
122
+ $
123
+ where BhatDist is Bhattacharyya distance.
124
+ \end{definition}
125
+
126
+
127
+
128
+ \begin{definition}
129
+ \textbf{(Eps composite similarity score of a video frame)}\\ Eps composite similarity of a video frame p, denoted by $S_{Eps}(p)$ is defined by:
130
+ $
131
+ S_{Eps}(p) = \lbrace q \in CD,TD\vert score(p,q ) = Eps \rbrace, where \ Eps \ \& \ score(p,q) \in \lbrace 0, 1, 2 \rbrace
132
+ $
133
+ in which possible values are defined as follows:
134
+ \begin{description}
135
+ \item $ \textbf{0}:if \ p \notin S_{EpsColor}(q) \ AND \ p \notin S_{EpsTexture}(q),$ in this case p,q are NOT similar.
136
+
137
+
138
+ \item $ \textbf{1}:if \ p \in S_{EpsColor}(q) \ OR \ p \in S_{EpsTexture}(q),$ in this case p,q are color-based similar OR texture-based similar.
139
+
140
+ \item $ \textbf{2}:if \ p \in S_{EpsColor}(q) \ AND \ p \in S_{EpsTexture}(q),$ in this case p,q are color-based similar AND texture-based similar.
141
+ \end{description}
142
+
143
+ \end{definition}
144
+
145
+ \begin{definition}
146
+ \textbf{(Directly-similar)} A frame p is directly-similar to a frame q wrt. Eps, MinPts if
147
+ $ p \in S_{Eps}(q)\emph{ and } \vert S_{Eps}(q) \vert \geq MinPts$ (\textbf{core frame condition}).
148
+ \end{definition}
149
+
150
+ \begin{definition}
151
+ \textbf{(Indirectly-similar)} A frame p is indirectly-similar to a frame q wrt. Eps and MinPts if there is a chain of frames
152
+ $p_{1},\ldots,p_{n}$, with $p_{1} = q$ and $p_{n} = p$, such that $p_{i+1}$ is directly-similar to $p_{i}$.
153
+ \end{definition}
154
+
155
+
156
+ \begin{definition}
157
+ \textbf{(Connected-similar)} A frame p is connected-similar to a frame q wrt. Eps and MinPts, if there is a frame o such that both, p and q are indirectly-similar to o wrt.Eps and MinPts.
158
+ \end{definition}
159
+
160
+ \begin{definition}
161
+ \textbf{(Video cluster)} Let D be a database of frames. A cluster C wrt. Eps and MinPts is a non-empty subset of D satisfying the following conditions:
162
+
163
+ \begin{itemize}
164
+ \item $ \forall \ p,q: \ if \ p \in C$ and q is indirectly-similar to p wrt. Eps and MinPts, then q $\in C $ \textbf{(Maximality)}.
165
+ \item $ \forall \ p,q \in C:$ p is connected-similar to q wrt. Eps and MinPts \textbf{(Connectivity)}.
166
+ \end{itemize}
167
+
168
+ \end{definition}
169
+
170
+
171
+ \begin{definition}
172
+ \textbf{(Noise)} Let
173
+ $ C_{1},...,C_{k} $ \emph{be the video clusters of the database D. The noise is defined as the set of frames in the database D not belonging to any cluster} $ C_{i}$, \emph{i=1,..,k. i.e. Noise = } $ \lbrace p \in D \vert \forall i : p \notin C_{i} \rbrace $
174
+ \end{definition}
175
+
176
+
177
+ The steps involved in VSCAN clustering algorithm are as follows:
178
+
179
+ \begin{enumerate}
180
+ \item Select an arbitrary frame p.
181
+ \item Retrieve all frames that are indirectly-similar to p w.r.t. Eps and Minpts.
182
+ \item If p is a core frame, a video cluster is formed.
183
+ \item If p is a border frame, no frames are indirectly-similar to it and the next frame of the database is visited.
184
+ \item Continue the process until all the frames have been processed.
185
+ \end{enumerate}
186
+
187
+ For clustering the video frames, we apply the proposed clustering algorithm to the extracted color and texture features of the pre-sampled video frames. According to our experimental tests, we set the input parameters of the VSCAN algorithm as follows: EpsColor = 0.97, EpsTexture = 0.97, Eps = 2 and Minpts = 1. As per the provided definitions, EpsColor is the Bhattacharyya distance threshold for grouping frames using color features; only frames with a Bhattacharyya distance greater than or equal to 0.97 are color-based similar and eligible to be in the same cluster. Similarly, EpsTexture is the Bhattacharyya distance threshold for grouping frames using texture features; only frames with a Bhattacharyya distance greater than or equal to 0.97 are texture-based similar and eligible to be in the same cluster.
188
+
189
+ As per the clustering algorithm definitions, setting the Eps value to 2 means that video frames are eligible to belong to the same cluster only if they are both color-based similar and texture-based similar. The Minpts input parameter is the key to the noise elimination mechanism: Minpts is the minimum number of neighbor frames required to create a cluster around the current frame, i.e. setting Minpts to 1 means that the minimum cluster size equals 2 and any cluster of size 1 is considered noise. Since we have selected a sampling rate of 1 frame per second, setting Minpts to 1 is equivalent to discarding video segments of duration less than 2 seconds.
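+ A minimal Matlab sketch of the composite similarity score of Definition 5 under these settings (illustrative only; bhat\_dist is the measure sketched above, and the feature vectors are assumed to be normalized):
+ \begin{verbatim}
+ function s = composite_score(colorP, colorQ, texP, texQ)
+ % Composite similarity score between two frames: 0, 1 or 2.
+ EpsColor   = 0.97;                 % color similarity threshold
+ EpsTexture = 0.97;                 % texture similarity threshold
+ s = (bhat_dist(colorP, colorQ) >= EpsColor) + ...
+     (bhat_dist(texP, texQ) >= EpsTexture);
+ end
+ \end{verbatim}
+ With Eps = 2, two frames are neighbors only when this score equals 2, i.e. when they are both color-based similar and texture-based similar.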
190
+
191
+ \subsection{Key Frames Extraction}
192
+ After clustering the video frames, the final step is selecting the key frames from the video clusters. In this step the noise frames are discarded, and then for each cluster the middle core frame in the ordered frame sequence is selected to construct the video summary. According to our experiments, this middle core frame is usually the best representative of the cluster to which it belongs.
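+ A minimal Matlab sketch of this selection for a single cluster (illustrative only; coreFrames is assumed to hold the temporal indices of the cluster's core frames):
+ \begin{verbatim}
+ core = sort(coreFrames);             % core frames in temporal order
+ key  = core(ceil(numel(core)/2));    % the middle core frame is the key frame
+ \end{verbatim}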
193
+
194
+
195
+ \section{Experimental Evaluation}
196
+ In this paper, a modified version of the Comparison of User Summaries (CUS) evaluation method described in \cite{de2011vsumm} is used to evaluate the quality of the video summaries. In the CUS method, the video summary is built manually by a number of users from the sampled frames and the user summaries are taken as reference (i.e. ground truth) to be compared with the automatic summaries obtained by different methods \cite{de2011vsumm}.
197
+
198
+ The modifications proposed to the CUS method aim at providing a more perceptual assessment of the quality of the automatic video summaries. Instead of comparing frames from different summaries using color features only as in the CUS method, both color and texture features (as in Section 3.2 and Section 3.3) are used to detect the similarity of the frames. Once two frames are color-based similar or texture-based similar, they are excluded from the next iteration of the comparison process. In this modified CUS version, the Bhattacharyya distance is used to detect both color and texture similarity; in this case the distance threshold value for color and texture similarity is set to 0.97.
199
+
200
+ In order to evaluate the automatic video summary, the F-measure is used as a metric. The F-measure consolidates both Precision and Recall values into one value using the harmonic mean \cite{blanken2007multimedia}, and it is defined as:
201
+
202
+ \begin{equation}
203
+ F \textrm{-} measure = \frac{2 \times Precision \times Recall}{Precision + Recall}
204
+ \end{equation}
205
+
206
+ The Precision measure of a video summary is defined as the ratio of the total number of matched (color-based or texture-based similar) frames to the total number of frames in the automatic summary, and the Recall measure is defined as the ratio of the total number of matched frames to the total number of frames in the user summary.
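+ A minimal Matlab sketch of this evaluation (illustrative only; nMatched, nAuto and nUser denote the number of matched frames, the number of frames in the automatic summary and the number of frames in the user summary, respectively):
+ \begin{verbatim}
+ precision = nMatched / nAuto;
+ recall    = nMatched / nUser;
+ fmeasure  = 2 * precision * recall / (precision + recall);
+ \end{verbatim}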
207
+
208
+ The VSCAN approach is evaluated on a set of 50 videos selected from the Open Video Project \footnote{Open Video Project. \url{http://www.open-video.org}}. All videos are in MPEG-1 format (30 fps, 352 × 240 pixels). They are distributed among several genres (documentary, historical, lecture, educational) and their duration varies from 1 to 4 min. Also, we use the same user summaries used in \cite{de2011vsumm} as ground-truth data. These user summaries were created by 50 users, each one dealing with 5 videos, meaning that each video has 5 summaries created by five different users. So, the total number of video summaries created by the users is 250, and each user may create a different summary.
209
+
210
+ For comparing VSCAN approach with other approaches, we used the results reported by three approaches: VSUMM \cite{de2011vsumm}, STIMO \cite{furini2010stimo}, and DT \cite{mundur2006keyframe}. In addition to that, the automatic video summaries generated by our approach were compared with the OV summaries generated by the algorithm in \cite{dementhon1998video}. All the videos, user summaries, and automatic summaries are available publicly
211
+ \footnote{\url{http://sites.google.com/site/vscansite/}}.
212
+
213
+
214
+ In addition to the previous approaches, we implemented a video summarization approach called DB-Color using the original DBSCAN algorithm with color features only as input. We used the same color feature extraction method as in the proposed VSCAN approach (Section 3.2) and the same input parameters for color similarity and noise detection as in Section 3.4, i.e. Eps = 0.97 and Minpts = 1. The reason for implementing DB-Color is to test the effect of using color only instead of combining both color and texture as in VSCAN.
215
+
216
+ \tablename~1 shows the mean F-measure achieved by the different video summarization approaches. The results indicate that VSCAN performs better than all other approaches. Also, we notice that combining both color and texture features together, as done in VSCAN, gives better results than using color features only as in DB-Color. However, DB-Color achieved better results than the other four approaches (OV, DT, STIMO, and VSUMM), which indicates that using the DBSCAN clustering algorithm is effective for generating static video summaries.
217
+
218
+ \begin{table}[!t]
219
+ \centering
220
+ \caption{Mean F-measure achieved by different approaches}
221
+ \begin{tabular}{l l l l l l l} \hline\noalign{\smallskip}
222
+ \textbf{Approach} \ \ & OV \ \ \ & DT \ \ \ & STIMO \ \ & VSUMM \ \ & VSCAN \ \ & DB-Color \ \ \\
223
+ \noalign{\smallskip}
224
+ \hline
225
+ \noalign{\smallskip}
226
+ \textbf{Mean F-Measure} \ \ & 0.67 & 0.61 & 0.65 & 0.72 & \textbf{0.77} & 0.74 \\
227
+ \hline
228
+ \end{tabular}
229
+ \end{table}
230
+
231
+
232
+ \section{Conclusion}
233
+ In this paper, we presented VSCAN, a novel approach for generating static video summaries. VSCAN utilizes a modified DBSCAN clustering algorithm to summarize the video content using both color and texture features of the video frames. Combining both color and texture features enables VSCAN to overcome the drawback of using color features only, as in other approaches. Also, as an advantage of using a density-based clustering algorithm, VSCAN overcomes the drawback of having to determine the number of clusters a priori; thus, the extra step needed for estimating the number of clusters is avoided. Furthermore, as an advantage of using a modified DBSCAN algorithm, VSCAN can detect noise frames automatically without extra computation.
234
+
235
+ As an additional contribution, we proposed an enhanced evaluation method based on color and texture matching. The main advantage of this evaluation method is to provide a more perceptual assessment of the quality of automatic video summaries.
236
+
237
+ Future work includes adding other features, such as edge and motion descriptors, to the VSCAN approach. Another interesting direction is generating video skims (dynamic key frames, e.g.\ movie trailers) from the extracted key frames. Since the video summarization step is usually considered a prerequisite for video skimming \cite{truong2007video}, the key frames extracted by VSCAN can be used to develop an enhanced video skimming system.
238
+
239
+
240
+
241
+
242
+ \bibliographystyle{splncs} \bibliography{VSCANBib}
243
+
244
+
245
+
246
+ \end{document}
papers/1405/1405.0312.tex ADDED
@@ -0,0 +1,363 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass[10pt,journal,letterpaper,twoside,compsoc]{IEEEtran}
2
+
3
+ \usepackage{graphicx,amsmath,amssymb,verbatim,xspace}
4
+ \usepackage{color,subcaption,tabu,float,rotating,url}
5
+ \usepackage[colorlinks=true,bookmarks=false]{hyperref}
6
+ \pdfoptionpdfminorversion 6
7
+ \newcommand{\COCO}{MS COCO\xspace}
8
+ \newcommand{\myparagraph}[1]{\textbf{#1}~}
9
+ \newcommand{\mysubsection}[1]{\subsection{#1}}
10
+ \newcommand{\myappendix}{the appendix\xspace}
11
+
12
+ \begin{document}
13
+ \title{Microsoft COCO: Common Objects in Context}
14
+ \author{Tsung-Yi~Lin \quad Michael~Maire \quad Serge~Belongie \quad Lubomir~Bourdev \quad Ross~Girshick \\
15
+ James~Hays \quad Pietro~Perona \quad Deva~Ramanan \quad C.~Lawrence~Zitnick \quad Piotr~Doll\'ar
16
+ \IEEEcompsocitemizethanks{
17
+ \IEEEcompsocthanksitem T.Y.~Lin and S.~Belongie are with Cornell NYC Tech and the Cornell Computer Science Department.
18
+ \IEEEcompsocthanksitem M.~Maire is with the Toyota Technological Institute at Chicago.
19
+ \IEEEcompsocthanksitem L.~Bourdev and P.~Doll\'ar are with Facebook AI Research. The majority of this work was performed while P.~Doll\'ar was with Microsoft Research.
20
+ \IEEEcompsocthanksitem R.~Girshick and C.~L.~Zitnick are with Microsoft Research, Redmond.
21
+ \IEEEcompsocthanksitem J.~Hays is with Brown University.
22
+ \IEEEcompsocthanksitem P.~Perona is with the California Institute of Technology.
23
+ \IEEEcompsocthanksitem D.~Ramanan is with the University of California at Irvine.}
24
+ }
25
+
26
+ \IEEEcompsoctitleabstractindextext{\begin{abstract}
27
+ We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
28
+ \end{abstract}}
29
+ \maketitle
30
+
31
+ \section{Introduction}
32
+
33
+ One of the primary goals of computer vision is the understanding of visual scenes. Scene understanding involves numerous tasks including recognizing what objects are present, localizing the objects in 2D and 3D, determining the objects' and scene's attributes, characterizing relationships between objects and providing a semantic description of the scene. The current object classification and detection datasets \cite{Imagenet,PASCAL,SUN,Dollar2012PAMI} help us explore the first challenges related to scene understanding. For instance the ImageNet dataset \cite{Imagenet}, which contains an unprecedented number of images, has recently enabled breakthroughs in both object classification and detection research \cite{Hinton,GirshickDDM13,OverFeat}. The community has also created datasets containing object attributes \cite{farhadi2009describing}, scene attributes \cite{Patterson2012SunAttributes}, keypoints \cite{bourdev2009poselets}, and 3D scene information \cite{NYUDepth}. This leads us to the obvious question: what datasets will best continue our advance towards our ultimate goal of scene understanding?
34
+
35
+ \begin{figure}[!t]\centering
36
+ \includegraphics[width=0.5\textwidth]{figs/teaser}
37
+ \caption{While previous object recognition datasets have focused on (a) image classification, (b) object bounding box localization or (c) semantic pixel-level segmentation, we focus on (d) segmenting individual object instances. We introduce a large, richly-annotated dataset comprised of images depicting complex everyday scenes of common objects in their natural context.\label{fig:teaser}}
38
+ \end{figure}
39
+
40
+ We introduce a new large-scale dataset that addresses three core research problems in scene understanding: detecting non-iconic views (or non-canonical perspectives \cite{Palmer1981}) of objects, contextual reasoning between objects and the precise 2D localization of objects. For many categories of objects, there exists an iconic view. For example, when performing a web-based image search for the object category ``bike,'' the top-ranked retrieved examples appear in profile, unobstructed near the center of a neatly composed photo. We posit that current recognition systems perform fairly well on iconic views, but struggle to recognize objects otherwise -- in the background, partially occluded, amid clutter \cite{diagnosing} -- reflecting the composition of actual everyday scenes. We verify this experimentally; when evaluated on everyday scenes, models trained on our data perform better than those trained with prior datasets. A challenge is finding natural images that contain multiple objects. The identity of many objects can only be resolved using context, due to small size or ambiguous appearance in the image. To push research in contextual reasoning, images depicting scenes \cite{SUN} rather than objects in isolation are necessary. Finally, we argue that detailed spatial understanding of object layout will be a core component of scene analysis. An object's spatial location can be defined coarsely using a bounding box \cite{PASCAL} or with a precise pixel-level segmentation \cite{brostow2009semantic,LabelMe,bell13opensurfaces}. As we demonstrate, to measure either kind of localization performance it is essential for the dataset to have every instance of every object category labeled and fully segmented. Our dataset is unique in its annotation of instance-level segmentation masks, Fig.~\ref{fig:teaser}.
41
+
42
+ To create a large-scale dataset that accomplishes these three goals we employed a novel pipeline for gathering data with extensive use of Amazon Mechanical Turk. First and most importantly, we harvested a large set of images containing contextual relationships and non-iconic object views. We accomplished this using a surprisingly simple yet effective technique that queries for pairs of objects in conjunction with images retrieved via scene-based queries \cite{ordonez2011im2text,SUN}. Next, each image was labeled as containing particular object categories using a hierarchical labeling approach \cite{Olga}. For each category found, the individual instances were labeled, verified, and finally segmented. Given the inherent ambiguity of labeling, each of these stages has numerous tradeoffs that we explored in detail.
43
+
44
+ The Microsoft Common Objects in COntext (\COCO) dataset contains 91 common object categories with 82 of them having more than 5,000 labeled instances, Fig.~\ref{fig:exampleimages}. In total the dataset has 2,500,000 labeled instances in 328,000 images. In contrast to the popular ImageNet dataset \cite{Imagenet}, COCO has fewer categories but more instances per category. This can aid in learning detailed object models capable of precise 2D localization. The dataset is also significantly larger in number of instances per category than the PASCAL VOC \cite{PASCAL} and SUN \cite{SUN} datasets. Additionally, a critical distinction between our dataset and others is the number of labeled instances per image which may aid in learning contextual information, Fig.~\ref{fig:dataanalysis}. \COCO contains considerably more object instances per image (7.7) as compared to ImageNet (3.0) and PASCAL (2.3). In contrast, the SUN dataset, which contains significant contextual information, has over 17 objects and ``stuff'' per image but considerably fewer object instances overall.
45
+
46
+ An abridged version of this work appeared in~\cite{eccv}.
47
+
48
+ \section{Related Work}
49
+
50
+ \begin{figure*}[!t]\centering
51
+ \includegraphics[width=\textwidth]{figs/iconic}
52
+ \caption{Example of (a) iconic object images, (b) iconic scene images, and (c) non-iconic images.\label{fig:iconic}}
53
+ \end{figure*}
54
+
55
+ Throughout the history of computer vision research datasets have played a critical role. They not only provide a means to train and evaluate algorithms, they drive research in new and more challenging directions. The creation of ground truth stereo and optical flow datasets \cite{scharstein2002taxonomy,baker2011database} helped stimulate a flood of interest in these areas. The early evolution of object recognition datasets \cite{Caltech101,Caltech256,Dalal} facilitated the direct comparison of hundreds of image recognition algorithms while simultaneously pushing the field towards more complex problems. Recently, the ImageNet dataset \cite{Imagenet} containing millions of images has enabled breakthroughs in both object classification and detection research using a new class of deep learning algorithms \cite{Hinton,GirshickDDM13,OverFeat}.
56
+
57
+ Datasets related to object recognition can be roughly split into three groups: those that primarily address object classification, object detection and semantic scene labeling. We address each in turn.
58
+
59
+ \myparagraph{Image Classification} The task of object classification requires binary labels indicating whether objects are present in an image; see Fig.~\ref{fig:teaser}(a). Early datasets of this type comprised images containing a single object with blank backgrounds, such as the MNIST handwritten digits \cite{mnist} or COIL household objects \cite{nene1996columbia}. Caltech 101 \cite{Caltech101} and Caltech 256 \cite{Caltech256} marked the transition to more realistic object images retrieved from the internet while also increasing the number of object categories to 101 and 256, respectively. Popular datasets in the machine learning community due to the larger number of training examples, CIFAR-10 and CIFAR-100 \cite{krizhevsky2009learning} offered 10 and 100 categories from a dataset of tiny $32 \times 32$ images \cite{torralba200880}. While these datasets contained up to 60,000 images and hundreds of categories, they still only captured a small fraction of our visual world.
60
+
61
+ Recently, ImageNet \cite{Imagenet} made a striking departure from the incremental increase in dataset sizes. They proposed the creation of a dataset containing 22k categories with 500-1000 images each. Unlike previous datasets containing entry-level categories \cite{ordonezlarge}, such as ``dog'' or ``chair,'' like \cite{torralba200880}, ImageNet used the WordNet Hierarchy \cite{wordnet} to obtain both entry-level and fine-grained \cite{Birds200} categories. Currently, the ImageNet dataset contains over 14 million labeled images and has enabled significant advances in image classification \cite{Hinton,GirshickDDM13,OverFeat}.
62
+
63
+ \myparagraph{Object detection} Detecting an object entails both stating that an object belonging to a specified class is present, and localizing it in the image. The location of an object is typically represented by a bounding box, Fig.~\ref{fig:teaser}(b). Early algorithms focused on face detection \cite{hjelmaas2001face} using various ad hoc datasets. Later, more realistic and challenging face detection datasets were created \cite{LFWTech}. Another popular challenge is the detection of pedestrians for which several datasets have been created \cite{Dalal,Dollar2012PAMI}. The Caltech Pedestrian Dataset \cite{Dollar2012PAMI} contains 350,000 labeled instances with bounding boxes.
64
+
65
+ For the detection of basic object categories, a multi-year effort from 2005 to 2012 was devoted to the creation and maintenance of a series of benchmark datasets that were widely adopted. The PASCAL VOC \cite{PASCAL} datasets contained 20 object categories spread over 11,000 images. Over 27,000 object instance bounding boxes were labeled, of which almost 7,000 had detailed segmentations. Recently, a detection challenge has been created from 200 object categories using a subset of 400,000 images from ImageNet \cite{ILSVRCanalysis_ICCV2013}. An impressive 350,000 objects have been labeled using bounding boxes.
66
+
67
+ Since the detection of many objects such as sunglasses, cellphones or chairs is highly dependent on contextual information, it is important that detection datasets contain objects in their natural environments. In our dataset we strive to collect images rich in contextual information. The use of bounding boxes also limits the accuracy for which detection algorithms may be evaluated. We propose the use of fully segmented instances to enable more accurate detector evaluation.
68
+
69
+ \vspace{1mm}
70
+
71
+ \myparagraph{Semantic scene labeling} The task of labeling semantic objects in a scene requires that each pixel of an image be labeled as belonging to a category, such as sky, chair, floor, street, etc. In contrast to the detection task, individual instances of objects do not need to be segmented, Fig.~\ref{fig:teaser}(c). This enables the labeling of objects for which individual instances are hard to define, such as grass, streets, or walls. Datasets exist for both indoor \cite{NYUDepth} and outdoor \cite{shotton2009textonboost,brostow2009semantic} scenes. Some datasets also include depth information \cite{NYUDepth}. Similar to semantic scene labeling, our goal is to measure the pixel-wise accuracy of object labels. However, we also aim to distinguish between individual instances of an object, which requires a solid understanding of each object's extent.
72
+
73
+ A novel dataset that combines many of the properties of both object detection and semantic scene labeling datasets is the SUN dataset \cite{SUN} for scene understanding. SUN contains 908 scene categories from the WordNet dictionary \cite{wordnet} with segmented objects. The 3,819 object categories span those common to object detection datasets (person, chair, car) and to semantic scene labeling (wall, sky, floor). Since the dataset was collected by finding images depicting various scene types, the number of instances per object category exhibits the long tail phenomenon. That is, a few categories have a large number of instances (wall: 20,213, window: 16,080, chair: 7,971) while most have a relatively modest number of instances (boat: 349, airplane: 179, floor lamp: 276). In our dataset, we ensure that each object category has a significant number of instances, Fig.~\ref{fig:dataanalysis}.
74
+
75
+ \myparagraph{Other vision datasets} Datasets have spurred the advancement of numerous fields in computer vision. Some notable datasets include the Middlebury datasets for stereo vision \cite{scharstein2002taxonomy}, multi-view stereo \cite{seitz2006comparison} and optical flow \cite{baker2011database}. The Berkeley Segmentation Data Set (BSDS500) \cite{amfm_pami2011} has been used extensively to evaluate both segmentation and edge detection algorithms. Datasets have also been created to recognize both scene \cite{Patterson2012SunAttributes} and object attributes \cite{farhadi2009describing,lampert2009learning}. Indeed, numerous areas of vision have benefited from challenging datasets that helped catalyze progress.
76
+
77
+ \begin{figure*}[!t]\centering
78
+ \includegraphics[width=\textwidth]{figs/ui_pipeline_summary}
79
+ \caption{Our annotation pipeline is split into 3 primary tasks: (a) labeling the categories present in the image (\S\ref{sec:category-labeling}), (b) locating and marking all instances of the labeled categories (\S\ref{sec:instance-spotting}), and (c) segmenting each object instance (\S\ref{sec:instance-segmentation}).\label{fig:pipeline}}\vspace{-2mm}
80
+ \end{figure*}
81
+
82
+ \section{Image Collection}\label{sec:image_collection}
83
+
84
+ We next describe how the object categories and candidate images are selected.
85
+
86
+ \mysubsection{Common Object Categories} The selection of object categories is a non-trivial exercise. The categories must form a representative set of all categories, be relevant to practical applications and occur with high enough frequency to enable the collection of a large dataset. Other important decisions are whether to include both ``thing'' and ``stuff'' categories \cite{heitz2008learning} and whether fine-grained \cite{Birds200,Imagenet} and object-part categories should be included. ``Thing'' categories include objects for which individual instances may be easily labeled (person, chair, car) where ``stuff'' categories include materials and objects with no clear boundaries (sky, street, grass). Since we are primarily interested in precise localization of object instances, we decided to only include ``thing'' categories and not ``stuff.'' However, since ``stuff'' categories can provide significant contextual information, we believe the future labeling of ``stuff'' categories would be beneficial.
87
+
88
+ The specificity of object categories can vary significantly. For instance, a dog could be a member of the ``mammal'', ``dog'', or ``German shepherd'' categories. To enable the practical collection of a significant number of instances per category, we chose to limit our dataset to entry-level categories, i.e. category labels that are commonly used by humans when describing objects (dog, chair, person). It is also possible that some object categories may be parts of other object categories. For instance, a face may be part of a person. We anticipate the inclusion of object-part categories (face, hands, wheels) would be beneficial for many real-world applications.
89
+
90
+ We used several sources to collect entry-level object categories of ``things.'' We first compiled a list of categories by combining categories from PASCAL VOC \cite{PASCAL} and a subset of the 1200 most frequently used words that denote visually identifiable objects \cite{wordbank}. To further augment our set of candidate categories, several children ranging in ages from 4 to 8 were asked to name every object they see in indoor and outdoor environments. The final 272 candidates may be found in \myappendix. Finally, the co-authors voted on a 1 to 5 scale for each category taking into account how commonly they occur, their usefulness for practical applications, and their diversity relative to other categories. The final selection of categories attempts to pick categories with high votes, while keeping the number of categories per super-category (animals, vehicles, furniture, etc.) balanced. Categories for which obtaining a large number of instances (greater than 5,000) was difficult were also removed. To ensure backwards compatibility all categories from PASCAL VOC \cite{PASCAL} are also included. Our final list of 91 proposed categories is in Fig.~\ref{fig:dataanalysis}(a).
91
+
92
+ \mysubsection{Non-iconic Image Collection} Given the list of object categories, our next goal was to collect a set of candidate images. We may roughly group images into three types, Fig.~\ref{fig:iconic}: iconic-object images \cite{berg2009finding}, iconic-scene images \cite{SUN} and non-iconic images. Typical iconic-object images have a single large object in a canonical perspective centered in the image, Fig.~\ref{fig:iconic}(a). Iconic-scene images are shot from canonical viewpoints and commonly lack people, Fig.~\ref{fig:iconic}(b). Iconic images have the benefit that they may be easily found by directly searching for specific categories using Google or Bing image search. While iconic images generally provide high quality object instances, they can lack important contextual information and non-canonical viewpoints.
93
+
94
+ Our goal was to collect a dataset such that a majority of images are non-iconic, Fig.~\ref{fig:iconic}(c). It has been shown that datasets containing more non-iconic images are better at generalizing \cite{torralba2011unbiased}. We collected non-iconic images using two strategies. First as popularized by PASCAL VOC \cite{PASCAL}, we collected images from Flickr which tends to have fewer iconic images. Flickr contains photos uploaded by amateur photographers with searchable metadata and keywords. Second, we did not search for object categories in isolation. A search for ``dog'' will tend to return iconic images of large, centered dogs. However, if we searched for pairwise combinations of object categories, such as ``dog + car'' we found many more non-iconic images. Surprisingly, these images typically do not just contain the two categories specified in the search, but numerous other categories as well. To further supplement our dataset we also searched for scene/object category pairs, see \myappendix. We downloaded at most 5 photos taken by a single photographer within a short time window. In the rare cases in which enough images could not be found, we searched for single categories and performed an explicit filtering stage to remove iconic images. The result is a collection of 328,000 images with rich contextual relationships between objects as shown in Figs.~\ref{fig:iconic}(c) and \ref{fig:exampleimages}.
95
+
96
+ \section{Image Annotation}
97
+
98
+ \begin{figure*}[!t]\centering
99
+ \begin{subfigure}[b]{0.47\textwidth}
100
+ \includegraphics[width=\textwidth]{figs/expert-worker}\vspace{-3mm}
101
+ \label{fig:recall}\caption{}
102
+ \end{subfigure}\quad\
103
+ \begin{subfigure}[b]{0.50\textwidth}
104
+ \includegraphics[width=\textwidth]{figs/worker_all}\vspace{-3mm}
105
+ \label{fig:all_worker}\caption{}
106
+ \end{subfigure}
107
+ \caption{Worker precision and recall for the category labeling task. (a) The union of multiple AMT workers (blue) has better recall than any expert (red). Ground truth was computed using majority vote of the experts. (b) Shows the number of workers (circle size) and average number of jobs per worker (circle color) for each precision/recall range. Most workers have high precision; such workers generally also complete more jobs. For this plot ground truth for each worker is the \emph{union} of responses from all other AMT workers. See \S\ref{sec:annotation-performance} for details.\label{fig:workers}}
108
+ \end{figure*}
109
+
110
+ We next describe how we annotated our image collection. Due to our desire to label over 2.5 million object instances, the design of a cost efficient yet high quality annotation pipeline was critical. The annotation pipeline is outlined in Fig.~\ref{fig:pipeline}. For all crowdsourcing tasks we used workers on Amazon's Mechanical Turk (AMT). Our user interfaces are described in detail in \myappendix. Note that, since the original version of this work \cite{eccv}, we have taken a number of steps to further improve the quality of the annotations. In particular, we have increased the number of annotators for the category labeling and instance spotting stages to eight. We also added a stage to verify the instance segmentations.
111
+
112
+ \mysubsection{Category Labeling}\label{sec:category-labeling} The first task in annotating our dataset is determining which object categories are present in each image, Fig.~\ref{fig:pipeline}(a). Since we have 91 categories and a large number of images, asking workers to answer 91 binary classification questions per image would be prohibitively expensive. Instead, we used a hierarchical approach \cite{Olga}.
113
+
114
+ We group the object categories into 11 super-categories (see \myappendix). For a given image, a worker was presented with each group of categories in turn and asked to indicate whether any instances exist for that super-category. This greatly reduces the time needed to classify the various categories. For example, a worker may easily determine no animals are present in the image without having to specifically look for cats, dogs, etc. If a worker determines instances from the super-category (animal) are present, for each subordinate category (dog, cat, etc.) present, the worker must drag the category's icon onto the image over one instance of the category. The placement of these icons is critical for the following stage. We emphasize that only a single instance of each category needs to be annotated in this stage. To ensure high recall, 8 workers were asked to label each image. A category is considered present if any worker indicated the category; false positives are handled in subsequent stages. A detailed analysis of performance is presented in \S\ref{sec:annotation-performance}. This stage took $\sim$20k worker hours to complete.
115
+
116
+ \mysubsection{Instance Spotting}\label{sec:instance-spotting} In the next stage all instances of the object categories in an image were labeled, Fig.~\ref{fig:pipeline}(b). In the previous stage each worker labeled one instance of a category, but multiple object instances may exist. Therefore, for each image, a worker was asked to place a cross on top of each instance of a specific category found in the previous stage. To boost recall, the location of the instance found by a worker in the previous stage was shown to the current worker. Such priming helped workers quickly find an initial instance upon first seeing the image. The workers could also use a magnifying glass to find small instances. Each worker was asked to label at most 10 instances of a given category per image. Each image was labeled by 8 workers for a total of $\sim$10k worker hours.
117
+
118
+ \mysubsection{Instance Segmentation}\label{sec:instance-segmentation} Our final stage is the laborious task of segmenting each object instance, Fig.~\ref{fig:pipeline}(c). For this stage we modified the excellent user interface developed by Bell et al.~\cite{bell13opensurfaces} for image segmentation. Our interface asks the worker to segment an object instance specified by a worker in the previous stage. If other instances have already been segmented in the image, those segmentations are shown to the worker. A worker may also indicate there are no object instances of the given category in the image (implying a false positive label from the previous stage) or that all object instances are already segmented.
119
+
120
+ Segmenting 2,500,000 object instances is an extremely time consuming task requiring over 22 worker hours per 1,000 segmentations. To minimize cost we only had a single worker segment each instance. However, when first completing the task, most workers produced only coarse instance outlines. As a consequence, we required all workers to complete a training task for each object category. The training task required workers to segment an object instance. Workers could not complete the task until their segmentation adequately matched the ground truth. The use of a training task vastly improved the quality of the workers (approximately 1 in 3 workers passed the training stage) and resulting segmentations. Example segmentations may be viewed in Fig.~\ref{fig:exampleimages}.
121
+
122
+ While the training task filtered out most bad workers, we also performed an explicit verification step on each segmented instance to ensure good quality. Multiple workers (3 to 5) were asked to judge each segmentation and indicate whether it matched the instance well or not. Segmentations of insufficient quality were discarded and the corresponding instances added back to the pool of unsegmented objects. Finally, some approved workers consistently produced poor segmentations; all work obtained from such workers was discarded.
123
+
124
+ For images containing 10 object instances or fewer of a given category, every instance was individually segmented (note that in some images up to 15 instances were segmented). Occasionally the number of instances is drastically higher; for example, consider a dense crowd of people or a truckload of bananas. In such cases, many instances of the same category may be tightly grouped together and distinguishing individual instances is difficult. After 10-15 instances of a category were segmented in an image, the remaining instances were marked as ``crowds'' using a single (possibly multi-part) segment. For the purpose of evaluation, areas marked as crowds will be ignored and not affect a detector's score. Details are given in \myappendix.
125
+
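To make the evaluation rule concrete, the sketch below shows one plausible way to ignore detections that fall inside crowd regions (for illustration only; the exact matching rules used for evaluation are specified in \myappendix): a detection whose pixels lie mostly inside a crowd segment is neither counted as a true nor as a false positive.

\begin{verbatim}
import numpy as np

def ignore_in_crowd(det_mask, crowd_mask, threshold=0.5):
    """Illustrative rule (not the official one): if most of a detection's
    pixels fall inside a region annotated as a crowd, the detection is
    ignored, i.e. it counts neither as a true nor as a false positive.
    Both masks are boolean arrays of the same shape."""
    det_area = det_mask.sum()
    if det_area == 0:
        return False
    overlap = np.logical_and(det_mask, crowd_mask).sum()
    return overlap / float(det_area) >= threshold
\end{verbatim}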
126
+ \begin{figure*}[!t]\centering
127
+ \includegraphics[width=\textwidth]{figs/dataanalysis}\vspace{2mm}
128
+ \caption{(a) Number of annotated instances per category for \COCO and PASCAL VOC. (b,c) Number of annotated categories and annotated instances, respectively, per image for \COCO, ImageNet Detection, PASCAL VOC and SUN (average number of categories and instances are shown in parentheses). (d) Number of categories vs. the number of instances per category for a number of popular object recognition datasets. (e) The distribution of instance sizes for the \COCO, ImageNet Detection, PASCAL VOC and SUN datasets.\label{fig:dataanalysis}}\vspace{3mm}
129
+ \end{figure*}
130
+
131
+ \mysubsection{Annotation Performance Analysis}\label{sec:annotation-performance} We analyzed crowd worker quality on the category labeling task by comparing to dedicated expert workers, see Fig.~\ref{fig:workers}(a). We compared precision and recall of seven expert workers (co-authors of the paper) with the results obtained by taking the union of one to ten AMT workers. Ground truth was computed using majority vote of the experts. For this task recall is of primary importance as false positives could be removed in later stages. Fig.~\ref{fig:workers}(a) shows that the union of 8 AMT workers, the same number as was used to collect our labels, achieved greater recall than any of the expert workers. Note that worker recall saturates at around 9-10 AMT workers.
132
+
133
+ Object category presence is often ambiguous. Indeed, as Fig.~\ref{fig:workers}(a) indicates, even dedicated experts often disagree on object presence, e.g.~due to inherent ambiguity in the image or disagreement about category definitions. For any unambiguous example that each annotator labels with probability over 50\%, the probability that all 8 annotators miss it is at most $.5^8 \approx .004$. Additionally, by observing how recall increased as we added annotators, we estimate that in practice over 99\% of all object categories not later rejected as false positives are detected given 8 annotators. Note that a similar analysis may be done for instance spotting, in which 8 annotators were also used.
134
+
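This bound is simple to reproduce; a minimal sketch of the arithmetic (assuming annotators label independently, with the 50\% per-annotator probability taken as a worst case):

\begin{verbatim}
# Worst-case chance that an unambiguous category is missed by every
# annotator, assuming each of the 8 annotators independently labels it
# with probability at least 0.5 (the bound used in the text).
p_single = 0.5
n_annotators = 8
p_all_miss = (1.0 - p_single) ** n_annotators
print(p_all_miss)        # 0.00390625, i.e. ~0.004
print(1.0 - p_all_miss)  # expected recall under this assumption, ~0.996
\end{verbatim}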
135
+ Finally, Fig.~\ref{fig:workers}(b) re-examines precision and recall of AMT workers on category labeling on a much larger set of images. The number of workers (circle size) and average number of jobs per worker (circle color) are shown for each precision/recall range. Unlike in Fig.~\ref{fig:workers}(a), we used a leave-one-out evaluation procedure in which a category was considered present if \emph{any} of the remaining workers named the category. Therefore, overall worker precision is substantially higher. Workers who completed the most jobs also have the highest precision; all jobs from workers below the black line were rejected.
136
+
137
+ \mysubsection{Caption Annotation}
138
+
139
+ We added five written caption descriptions to each image in \COCO. A full description of the caption statistics and how they were gathered will be provided shortly in a separate publication.
140
+
141
+ \section{Dataset Statistics}
142
+
143
+ Next, we analyze the properties of the Microsoft Common Objects in COntext (\COCO) dataset in comparison to several other popular datasets. These include ImageNet \cite{Imagenet}, PASCAL VOC 2012 \cite{PASCAL}, and SUN \cite{SUN}. Each of these datasets varies significantly in size, list of labeled categories and types of images. ImageNet was created to capture a large number of object categories, many of which are fine-grained. SUN focuses on labeling scene types and the objects that commonly occur in them. Finally, PASCAL VOC's primary application is object detection in natural images. \COCO is designed for the detection and segmentation of objects occurring in their natural context.
144
+
145
+ The number of instances per category for all 91 categories is shown in Fig.~\ref{fig:dataanalysis}(a). A summary of the datasets, showing the number of object categories and the number of instances per category, is given in Fig.~\ref{fig:dataanalysis}(d). While \COCO has fewer categories than ImageNet and SUN, it has more instances per category, which we hypothesize will be useful for learning complex models capable of precise localization. In comparison to PASCAL VOC, \COCO has both more categories and more instances.
146
+
147
+ An important property of our dataset is that we strive to find non-iconic images containing objects in their natural context. The amount of contextual information present in an image can be estimated by examining the average number of object categories and instances per image, Fig.~\ref{fig:dataanalysis}(b, c). For ImageNet we plot the object detection validation set, since the training data only has a single object labeled. On average our dataset contains 3.5 categories and 7.7 instances per image. In comparison, ImageNet and PASCAL VOC both have fewer than 2 categories and 3 instances per image on average. Another interesting observation is that only $10\%$ of the images in \COCO contain a single object category; in comparison, over $60\%$ of images contain a single object category in ImageNet and PASCAL VOC. As expected, the SUN dataset has the most contextual information since it is scene-based and uses an unrestricted set of categories.
148
+
149
+ Finally, we analyze the average size of objects in the datasets. Generally, smaller objects are harder to recognize and require more contextual reasoning. As shown in Fig.~\ref{fig:dataanalysis}(e), the average size of objects is smaller for both \COCO and SUN.
150
+
151
+ \section{Dataset Splits}
152
+
153
+ To accommodate a faster release schedule, we split the \COCO dataset into two roughly equal parts. The first half of the dataset was released in 2014, the second half will be released in 2015. The 2014 release contains 82,783 training, 40,504 validation, and 40,775 testing images (approximately $\frac{1}{2}$ train, $\frac{1}{4}$ val, and $\frac{1}{4}$ test). There are nearly 270k segmented people and a total of 886k segmented object instances in the 2014 train+val data alone. The cumulative 2015 release will contain a total of 165,482 train, 81,208 val, and 81,434 test images. We took care to minimize the chance of near-duplicate images existing across splits by explicitly removing near duplicates (detected with \cite{douze2009evaluation}) and grouping images by photographer and date taken.
154
+
155
+ Following established protocol, annotations for train and validation data will be released, but not for test. We are currently finalizing the evaluation server for automatic evaluation on the test set. A full discussion of evaluation metrics will be added once the evaluation server is complete.
156
+
157
+ Note that we have limited the 2014 release to a subset of 80 categories. We did not collect segmentations for the following 11 categories: hat, shoe, eyeglasses (too many instances), mirror, window, door, street sign (ambiguous and difficult to label), plate, desk (due to confusion with bowl and dining table, respectively) and blender, hair brush (too few instances). We may add segmentations for some of these categories in the cumulative 2015 release.
158
+
159
+ \begin{figure*}[t]\centering
160
+ \includegraphics[width=\textwidth]{figs/fantastic_fig}
161
+ \caption{Samples of annotated images in the \COCO dataset.\label{fig:exampleimages}}
162
+ \end{figure*}
163
+
164
+ \section{Algorithmic Analysis}
165
+
166
+ \begin{table*}
167
+ {\tiny\resizebox{\textwidth}{!}{\tabcolsep=0.05cm\begin{tabu}[ht]{@{}l*{21}{c}@{} }
168
+ \rowfont{\footnotesize} &plane &bike &bird &boat &bottle &bus &car &cat &chair &cow &table &dog &horse &moto &person &plant &sheep &sofa &train &tv &avg.\\\hline
169
+ \tabularnewline\rowfont{\footnotesize} DPMv5-P & {\bf 45.6} & 49.0 & 11.0 & {\bf 11.6} & {\bf 27.2} & 50.5 & {\bf 43.1} & {\bf 23.6} & {\bf 17.2} & 23.2 & {\bf 10.7} & {\bf 20.5} & 42.5 & {\bf 44.5} & {\bf 41.3} & {\bf 8.7} & {\bf 29.0} & {\bf 18.7 } & {\bf 40.0} & 34.5 & {\bf 29.6} \\
170
+ \tabularnewline\rowfont{\footnotesize} DPMv5-C & 43.7 & {\bf 50.1} & {\bf 11.8} & 2.4 & 21.4 & {\bf 60.1} & 35.6 & 16.0 & 11.4 & {\bf 24.8} & 5.3 & 9.4 & {\bf 44.5} & 41.0 & 35.8 & 6.3 & 28.3 & 13.3 & 38.8 & {\bf 36.2} & 26.8 \\\hline
171
+ \tabularnewline\rowfont{\footnotesize} DPMv5-P & 35.1 & 17.9 & 3.7 & 2.3 & {\bf 7} & 45.4 & {\bf 18.3} & 8.6 & {\bf 6.3} & 17 & 4.8 & {\bf 5.8} & 35.3 & 25.4 & {\bf 17.5} & 4.1 & {\bf 14.5} & 9.6 & 31.7 & 27.9 & 16.9\\
172
+ \tabularnewline\rowfont{\footnotesize} DPMv5-C & {\bf 36.9} & {\bf 20.2} & {\bf 5.7} & {\bf 3.5} & 6.6 & {\bf 50.3} & 16.1 & {\bf 12.8} & 4.5 & {\bf 19.0} & {\bf 9.6} & 4.0 & {\bf 38.2} & {\bf 29.9} & 15.9 & {\bf 6.7} & 13.8 & {\bf 10.4} & {\bf 39.2} & {\bf 37.9} & {\bf 19.1}\\
173
+ \tabularnewline\end{tabu}}}
174
+ \caption{\textbf{Top}: Detection performance evaluated on \textbf{PASCAL VOC 2012}. DPMv5-P is the performance reported by Girshick et al.~in VOC release 5. DPMv5-C uses the same implementation, but is trained with \COCO. \textbf{Bottom}: Performance evaluated on \textbf{\COCO} for DPM models trained with PASCAL VOC 2012 (DPMv5-P) and \COCO (DPMv5-C). For DPMv5-C we used 5000 positive and 10000 negative training examples. While \COCO is considerably more challenging than PASCAL, use of more training data coupled with more sophisticated approaches \cite{Hinton,GirshickDDM13,OverFeat} should improve performance substantially.\label{tab:ap_scores}}
175
+ \end{table*}
176
+
177
+ \myparagraph{Bounding-box detection} For the following experiments we take a subset of 55,000 images from our dataset\footnote{These preliminary experiments were performed before our final split of the dataset into train, val, and test. Baselines on the actual test set will be added once the evaluation server is complete.} and obtain tight-fitting bounding boxes from the annotated segmentation masks. We evaluate models tested on both \COCO and PASCAL; see Table \ref{tab:ap_scores}. We evaluate two different models. \textbf{DPMv5-P}: the latest implementation of~\cite{felzenszwalb2010object} (release 5 \cite{voc-release5}) trained on PASCAL VOC 2012. \textbf{DPMv5-C}: the same implementation trained on COCO (5000 positive and 10000 negative images). We use the default parameter settings for training COCO models.
178
+
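The conversion from segmentation masks to detection ground truth amounts to taking the extent of each mask; a minimal NumPy sketch (for illustration only, not the pipeline code actually used):

\begin{verbatim}
import numpy as np

def mask_to_bbox(mask):
    """Return a tight-fitting box (x_min, y_min, x_max, y_max) for a
    binary segmentation mask of shape (height, width)."""
    ys, xs = np.where(mask)
    if len(xs) == 0:
        return None  # empty mask
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Example: a small instance occupying rows 2-4 and columns 3-6.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True
print(mask_to_bbox(mask))  # (3, 2, 6, 4)
\end{verbatim}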
179
+ If we compare the average performance of DPMv5-P on PASCAL VOC and \COCO, we find that average performance on \COCO drops by nearly a {\em factor of 2}, suggesting that \COCO does include more difficult (non-iconic) images of objects that are partially occluded, amid clutter, etc. We notice a similar drop in performance for the model trained on \COCO (DPMv5-C).
180
+
181
+ The effect on detection performance of training on PASCAL VOC or \COCO may be analyzed by comparing DPMv5-P and DPMv5-C. They use the same implementation with different sources of training data. Table \ref{tab:ap_scores} shows DPMv5-C still outperforms DPMv5-P in 6 out of 20 categories when testing on PASCAL VOC. In some categories (e.g., dog, cat, people), models trained on \COCO perform worse, while on others (e.g., bus, tv, horse), models trained on our data are better.
182
+
183
+ Consistent with past observations~\cite{zhu2012we}, we find that including difficult (non-iconic) images during training may not always help. Such examples may act as noise and pollute the learned model if the model is not rich enough to capture such appearance variability. Our dataset allows for the exploration of such issues.
184
+
185
+ Torralba and Efros \cite{torralba2011unbiased} proposed a metric to measure cross-dataset generalization which computes the `performance drop' for models that train on one dataset and test on another. The performance difference of the DPMv5-P models across the two datasets is 12.7 AP while the DPMv5-C models only have 7.7 AP difference. Moreover, overall performance is much lower on \COCO. These observations support two hypotheses: 1) \COCO is significantly more difficult than PASCAL VOC and 2) models trained on \COCO can generalize better to easier datasets such as PASCAL VOC given more training data. To gain insight into the differences between the datasets, see \myappendix for visualizations of person and chair examples from the two datasets.
186
+
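Using the average AP values from Table~\ref{tab:ap_scores}, the cross-dataset performance drop can be reproduced directly; a short sketch:

\begin{verbatim}
# Average AP taken from Table 1 (top: tested on PASCAL VOC 2012,
# bottom: tested on COCO).
ap = {
    ("DPMv5-P", "PASCAL"): 29.6, ("DPMv5-P", "COCO"): 16.9,
    ("DPMv5-C", "PASCAL"): 26.8, ("DPMv5-C", "COCO"): 19.1,
}

for model in ("DPMv5-P", "DPMv5-C"):
    drop = ap[(model, "PASCAL")] - ap[(model, "COCO")]
    print(model, "performance drop:", round(drop, 1))
# DPMv5-P performance drop: 12.7
# DPMv5-C performance drop: 7.7
\end{verbatim}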
187
+ \myparagraph{Generating segmentations from detections} We now describe a simple method for generating object bounding boxes and segmentation masks, following prior work that produces segmentations from object detections \cite{brox2011object,yang2012layered,ramanan2007using,dai2012learning}. We learn aspect-specific pixel-level segmentation masks for different categories. These are readily learned by averaging together segmentation masks from aligned training instances. We learn different masks corresponding to the different mixtures in our DPM detector. Sample masks are visualized in Fig.~\ref{fig:masks}.
188
+
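A minimal sketch of the mask-averaging step is given below (for illustration only; it assumes the training masks have already been aligned, i.e.\ cropped to their bounding boxes and resampled to a common resolution, with one such average learned per DPM mixture component):

\begin{verbatim}
import numpy as np

def average_shape_mask(aligned_masks):
    """aligned_masks: list of binary masks of identical shape, each
    cropped to its instance bounding box and resized to a canonical
    resolution. Returns a per-pixel foreground probability."""
    stack = np.stack([m.astype(np.float64) for m in aligned_masks])
    return stack.mean(axis=0)

def candidate_segment(shape_prior, threshold=0.5):
    """Threshold the averaged mask to obtain a candidate segment that
    can be pasted into a detection's bounding box."""
    return shape_prior >= threshold

# Toy example with two aligned 4x4 instance masks.
m1 = np.array([[0,1,1,0],[1,1,1,1],[1,1,1,1],[0,1,1,0]])
m2 = np.array([[0,0,1,1],[0,1,1,1],[1,1,1,0],[1,1,0,0]])
prior = average_shape_mask([m1, m2])
print(candidate_segment(prior).astype(int))
\end{verbatim}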
189
+ \begin{figure}[!t]\centering
190
+ \includegraphics[width=\columnwidth]{figs/shape_prior}
191
+ \caption{We visualize our mixture-specific shape masks. We paste thresholded shape masks on each candidate detection to generate candidate segments.\label{fig:masks}}
192
+ \end{figure}
193
+
194
+ \begin{figure}[!t]\centering
195
+ \includegraphics[width=\columnwidth]{figs/segment}
196
+ \caption{Evaluating instance detections with segmentation masks versus bounding boxes. Bounding boxes are a particularly crude approximation for articulated objects; in this case, the majority of the pixels in the ({\bf blue}) tight-fitting bounding box do not lie on the object. Our ({\bf green}) instance-level segmentation masks allow for a more accurate measure of object detection and localization.\label{fig:segment}}\vspace{-2mm}
197
+ \end{figure}
198
+
199
+ \myparagraph{Detection evaluated by segmentation} Segmentation is a challenging task, even assuming a detector reports correct results, as it requires fine localization of object part boundaries. To decouple segmentation evaluation from detection correctness, we benchmark segmentation quality using only correct detections. Specifically, given that the detector reports a correct bounding box, how well does the predicted segmentation of that object match the ground truth segmentation? As the criterion for correct detection, we impose the standard requirement that intersection over union between predicted and ground truth boxes is at least $0.5$. We then measure the intersection over union of the predicted and ground truth segmentation masks; see Fig.~\ref{fig:segment}. To establish a baseline for our dataset, we project learned DPM part masks onto the image to create segmentation masks. Fig.~\ref{fig:seg_eval} shows results of this segmentation baseline for the DPM learned on the 20 PASCAL categories and tested on our dataset.
200
+
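The protocol above reduces to two intersection-over-union computations: one over boxes to decide whether a detection is correct, and one over masks to score its segmentation. A minimal NumPy sketch (not the official evaluation code):

\begin{verbatim}
import numpy as np

def box_iou(a, b):
    """Boxes as (x1, y1, x2, y2) with inclusive pixel coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / float(area_a + area_b - inter)

def mask_iou(pred, gt):
    """Binary masks of identical shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / float(union) if union else 0.0

def segmentation_score(pred_box, gt_box, pred_mask, gt_mask):
    """Only score segmentation quality for correct detections
    (box IoU of at least 0.5), as described in the text."""
    if box_iou(pred_box, gt_box) < 0.5:
        return None  # the detection is not counted as correct
    return mask_iou(pred_mask, gt_mask)
\end{verbatim}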
201
+ \begin{figure*}[!t]\centering
202
+ \begin{minipage}[t]{0.03\textwidth}
203
+ \vspace{4.5\linewidth}
204
+ \hfill\begin{sideways}\scriptsize{Predicted\hspace{.7cm}Ground truth}\end{sideways}\hfill
205
+ \end{minipage}
206
+ \begin{minipage}[t]{0.17\textwidth}\vspace{0pt}
207
+ \setlength\fboxsep{0pt}
208
+ \fbox{\includegraphics[width=\linewidth]{figs/ped_box}} \\
209
+ \fbox{\includegraphics[width=\linewidth]{figs/ped_mask_gt}} \\
210
+ \fbox{\includegraphics[width=\linewidth]{figs/ped_mask_det}}
211
+ \end{minipage}\hfill
212
+ \begin{minipage}[t]{0.20\textwidth}\vspace{0pt}
213
+ \includegraphics[width=\linewidth, clip=true, trim=2.6in 2.60in 2.6in 2.60in]{figs/seg_overlap_person}
214
+ \end{minipage}
215
+ \begin{minipage}[t]{0.55\textwidth}\vspace{0pt}
216
+ \includegraphics[width=\linewidth, clip=true, trim=0.5in 2.60in 1.0in 2.60in]{figs/seg_overlap_all}
217
+ \end{minipage}
218
+ \vspace{-0.08\linewidth}
219
+ \caption{A predicted segmentation might not recover object detail even though detection and ground truth bounding boxes overlap well (left). Sampling from the person category illustrates that predicting segmentations from top-down projection of DPM part masks is difficult even for correct detections (center). Average segmentation overlap measured on \COCO for the 20 PASCAL VOC categories demonstrates the difficulty of the problem (right).\label{fig:seg_eval}}
220
+ \end{figure*}
221
+
222
+ \section{Discussion}
223
+
224
+ We introduced a new dataset for detecting and segmenting objects found in everyday life in their natural environments. Utilizing over 70,000 worker hours, a vast collection of object instances was gathered, annotated and organized to drive the advancement of object detection and segmentation algorithms. Emphasis was placed on finding non-iconic images of objects in natural environments and varied viewpoints. Dataset statistics indicate the images contain rich contextual information with many objects present per image.
225
+
226
+ There are several promising directions for future annotations on our dataset. We currently only label ``things'', but labeling ``stuff'' may also provide significant contextual information that may be useful for detection. Many object detection algorithms benefit from additional annotations, such as the amount an instance is occluded \cite{Dollar2012PAMI} or the location of keypoints on the object \cite{bourdev2009poselets}. Finally, our dataset could provide a good benchmark for other types of labels, including scene types \cite{SUN}, attributes \cite{Patterson2012SunAttributes,farhadi2009describing} and full sentence written descriptions \cite{rashtchian2010collecting}. We are actively exploring adding various such annotations.
227
+
228
+ To download and learn more about \COCO please see the project website\footnote{\url{http://mscoco.org/}}. \COCO will evolve and grow over time; up to date information is available online.
229
+
230
+ {\myparagraph{Acknowledgments} Funding for all crowd worker tasks was provided by Microsoft. P.P.~and D.R.~were supported by ONR MURI Grant N00014-10-1-0933. We would like to thank all members of the community who provided valuable feedback throughout the process of defining and collecting the dataset.}
231
+
232
+ \section*{Appendix Overview}
233
+
234
+ In the appendix, we provide detailed descriptions of the AMT user interfaces and the full list of 272 candidate categories (from which our final 91 were selected) and 40 scene categories (used for scene-object queries).
235
+
236
+ \section*{Appendix I: User Interfaces}
237
+
238
+ We describe and visualize our user interfaces for collecting non-iconic images, category labeling, instance spotting, instance segmentation, segmentation verification and finally crowd labeling.
239
+
240
+ \myparagraph{Non-iconic Image Collection} Flickr provides a rich image collection associated with text captions. However, captions might be inaccurate and images may be iconic. To construct a high-quality set of non-iconic images, we first collected candidate images by searching for pairs of object categories, or pairs of object and scene categories. We then created an AMT filtering task that allowed users to remove invalid or iconic images from a grid of 128 candidates, Fig.~\ref{fig:ui_collection}. We found the choice of instructions to be crucial, and so provided users with examples of iconic and non-iconic images. Some categories rarely co-occurred with others. In such cases, we collected candidates using only the object category as the search term, but applied a similar filtering step, Fig.~\ref{fig:ui_collection}(b).
241
+
242
+ \myparagraph{Category Labeling} Fig.~\ref{fig:ui_pipeline}(a) shows our interface for category labeling. We designed the labeling task to encourage workers to annotate all categories present in the image. Workers annotate categories by dragging and dropping icons from the bottom category panel onto a corresponding object instance. Only a single instance of each object category needs to be annotated in the image. We group icons by the super-categories from Fig.~\ref{fig:icons}, allowing workers to quickly skip categories that are unlikely to be present.
243
+
244
+ \begin{figure}\centering
245
+ \includegraphics[width=0.5\textwidth]{figs/ui_collection}
246
+ \caption{User interfaces for non-iconic image collection. (a) Interface for selecting non-iconic images containing pairs of objects. (b) Interface for selecting non-iconic images for categories that rarely co-occurred with others.\label{fig:ui_collection}}
247
+ \end{figure}
248
+
249
+ \myparagraph{Instance Spotting} Fig.~\ref{fig:ui_pipeline}(b) depicts our interface for labeling all instances of a given category. The interface is initialized with a blinking icon specifying a single instance obtained from the previous category-labeling stage. Workers are then asked to spot and click on up to 10 total instances of the given category, placing a single cross anywhere within the region of each instance. In order to spot small objects, we found it crucial to include a ``magnifying glass'' feature that doubles the resolution of a worker's currently selected region.
250
+
251
+ \myparagraph{Instance Segmentation} Fig.~\ref{fig:ui_pipeline}(c) shows our user interface for instance segmentation. We modified source code from the OpenSurfaces project \cite{bell13opensurfaces}, which defines a single AMT task for segmenting multiple regions of a homogeneous material in real scenes. In our case, we define a single task for segmenting a single object instance labeled from the previous annotation stage. To aid the segmentation process, we added a visualization of the object category icon to remind workers of the category to be segmented. Crucially, we also added zoom-in functionality to allow for efficient annotation of small objects and curved boundaries. In the previous annotation stage, to ensure high coverage of all object instances, we used multiple workers to label all instances per image. We would like to segment {\em all} such object instances, but instance annotations across different workers may refer to different or redundant instances. To resolve this correspondence ambiguity, we sequentially post AMT segmentation tasks, ignoring instance annotations that are already covered by an existing segmentation mask.
252
+
253
+ \myparagraph{Segmentation Verification} Fig.~\ref{fig:ui_pipeline}(d) shows our user interface for segmentation verification. Due to the time-consuming nature of the previous task, each object instance is segmented only once. The purpose of the verification stage is therefore to ensure that each segmented instance from the previous stage is of sufficiently high quality. Workers are shown a grid of 64 segmentations and asked to select poor-quality segmentations. Four of the 64 segmentations are known to be bad; a worker must identify 3 of the 4 known bad segmentations to complete the task. Each segmentation is initially shown to 3 annotators. If any of the annotators indicates the segmentation is bad, it is shown to 2 additional workers. At this point, any segmentation that does not receive at least 4 of 5 favorable votes is discarded and the corresponding instance added back to the pool of unsegmented objects. Examples of borderline cases that either passed (4/5 votes) or were rejected (3/5 votes) are shown in Fig.~\ref{verification}.
254
+
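The vote-escalation rule can be written compactly; the sketch below simply mirrors the description (3 initial votes, 2 additional votes if any of the first 3 is unfavorable, acceptance only with at least 4 of 5 favorable votes):

\begin{verbatim}
from itertools import cycle

def verify_segmentation(get_vote):
    """get_vote() returns True for a favorable vote and False otherwise
    (one call per annotator). Returns True if the segmentation is kept."""
    votes = [get_vote() for _ in range(3)]
    if all(votes):
        return True                    # unanimous 3/3: accepted
    votes += [get_vote() for _ in range(2)]
    return sum(votes) >= 4             # needs at least 4 of 5 favorable

# Example: a borderline segmentation that one of five workers rejects.
worker_opinions = cycle([True, False, True, True, True])
print(verify_segmentation(lambda: next(worker_opinions)))  # True (4/5)
\end{verbatim}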
255
+ \myparagraph{Crowd Labeling} Fig.~\ref{fig:ui_pipeline}(e) shows our user interface for crowd labeling. As discussed, for images containing ten object instances or fewer of a given category, every object instance was individually segmented. In some images, however, the number of instances of a given category is much higher. In such cases, crowd labeling provided a more efficient method for annotation. Rather than requiring workers to draw exact polygonal masks around each object instance, we allow workers to ``paint'' all pixels belonging to the category in question. Crowd labeling is similar to semantic segmentation, as object instances are not individually identified. We emphasize that crowd labeling is only necessary for images containing more than ten object instances of a given category.
256
+
257
+ \section*{Appendix II: Object \& Scene Categories}
258
+
259
+ Our dataset contains 91 object categories (the 2014 release contains segmentation masks for 80 of these categories). We began with a list of frequent object categories taken from WordNet, LabelMe, SUN and other sources as well as categories derived from a free recall experiment with young children. The authors then voted on the resulting 272 categories with the aim of sampling a diverse and computationally challenging set of categories; see \S\ref{sec:image_collection} for details. The list in Table \ref{tbl:category_list} enumerates those 272 categories in descending order of votes. As discussed, the final selection of 91 categories attempts to pick categories with high votes, while keeping the number of categories per super-category (animals, vehicles, furniture, etc.) balanced.
260
+
261
+ As discussed in \S\ref{sec:image_collection}, in addition to using object-object queries to gather non-iconic images, object-scene queries also proved effective. For this task we selected a subset of 40 scene categories from the SUN dataset that frequently co-occurred with object categories of interest. Table \ref{tbl:scene_category_list} enumerates the 40 scene categories (evenly split between indoor and outdoor scenes).
262
+
263
+
264
+
265
+ \newpage
266
+ \bibliographystyle{IEEEtran}
267
+ \bibliography{coco}
268
+
269
+
270
+
271
+ \begin{figure*}\centering
272
+ \includegraphics[width=1\textwidth]{figs/icons}
273
+ \caption{Icons of 91 categories in the \COCO dataset grouped by 11 super-categories. We use these icons in our annotation pipeline to help workers quickly reference the indicated object category.\label{fig:icons}}
274
+ \end{figure*}
275
+
276
+ \begin{figure*}\centering
277
+ \includegraphics[width=1\textwidth]{figs/ui_pipeline}
278
+ \caption{ User interfaces for collecting instance annotations, see text for details.\label{fig:ui_pipeline}}
279
+ \end{figure*}\newpage
280
+
281
+
282
+
283
+ \begin{figure*}\centering
284
+ \begin{subfigure}[b]{0.48\textwidth}
285
+ \includegraphics[width=\textwidth]{figs/ex_pascal_person}
286
+ \label{fig:train_more_data_pascal_person}\caption{PASCAL VOC.}
287
+ \end{subfigure}\quad
288
+ \begin{subfigure}[b]{0.48\textwidth}
289
+ \includegraphics[width=\textwidth]{figs/ex_coco_person}
290
+ \label{fig:train_more_data_coco_person}\caption{\COCO.}
291
+ \end{subfigure}
292
+ \caption{Random person instances from PASCAL VOC and \COCO. At most one instance is sampled per image.\label{fig:visualization_person}}\vspace{2mm}
293
+ \end{figure*}
294
+
295
+ \begin{table*}
296
+ \resizebox{\textwidth}{!}{
297
+ \begin{tabular}{c c c c c c c c c c }
298
+ \bf person & \bf bicycle & \bf car & \bf motorcycle & \bf bird & \bf cat & \bf dog & \bf horse & \bf sheep & \bf bottle \\
299
+ \bf chair & \bf couch & \bf potted plant & \bf tv & \bf cow & \bf airplane & \bf hat$^*$ & license plate & \bf bed & \bf laptop \\
300
+ fridge & \bf microwave & \bf sink & \bf oven & \bf toaster & \bf bus & \bf train & \bf mirror$^*$ & \bf dining table & \bf elephant \\
301
+ \bf banana & bread & \bf toilet & \bf book & \bf boat & \bf plate$^*$ & \bf cell phone & \bf mouse & \bf remote & \bf clock \\
302
+ face & hand & \bf apple & \bf keyboard & \bf backpack & steering wheel & \bf wine glass & chicken & \bf zebra & \bf shoe$^*$ \\
303
+ eye & mouth & \bf scissors & \bf truck & \bf traffic light & \bf eyeglasses$^*$ & \bf cup & \bf blender$^*$ & \bf hair drier & wheel \\
304
+ \bf street sign$^*$ & \bf umbrella & \bf door$^*$ & \bf fire hydrant & \bf bowl & teapot & \bf fork & \bf knife & \bf spoon & \bf bear \\
305
+ headlights & \bf window$^*$ & \bf desk$^*$ & computer & \bf refrigerator & \bf pizza & squirrel & duck & \bf frisbee & guitar \\
306
+ nose & \bf teddy bear & \bf tie & \bf stop sign & \bf surfboard & \bf sandwich & pen/pencil & \bf kite & \bf orange & \bf toothbrush \\
307
+ printer & pans & head & \bf sports ball & \bf broccoli & \bf suitcase & \bf carrot & chandelier & \bf parking meter & fish \\
308
+ \bf handbag & \bf hot dog & stapler & basketball hoop & \bf donut & \bf vase & \bf baseball bat & \bf baseball glove & \bf giraffe & jacket \\
309
+ \bf skis & \bf snowboard & table lamp & egg & door handle & power outlet & hair & tiger & table & coffee table \\
310
+ \bf skateboard & helicopter & tomato &tree & bunny & pillow & \bf tennis racket& \bf cake & feet & \bf bench \\
311
+ chopping board & washer & lion & monkey & \bf hair brush$^*$ & light switch & arms & legs & house & cheese \\
312
+ goat & magazine & key & picture frame & cupcake & fan (ceil/floor) & frogs & rabbit & owl & scarf \\
313
+ ears & home phone & pig & strawberries & pumpkin & van & kangaroo & rhinoceros & sailboat & deer \\
314
+ playing cards & towel & hyppo & can & dollar bill & doll & soup & meat & window & muffins \\
315
+ tire & necklace & tablet & corn & ladder & pineapple & candle & desktop & carpet & cookie \\
316
+ toy cars & bracelet & bat & balloon & gloves & milk & pants & wheelchair & building & bacon \\
317
+ box & platypus & pancake & cabinet & whale & dryer & torso & lizard & shirt & shorts \\
318
+ pasta & grapes & shark & swan & fingers & towel & side table & gate & beans & flip flops \\
319
+ moon & road/street & fountain & fax machine & bat & hot air balloon & cereal & seahorse & rocket & cabinets \\
320
+ basketball & telephone & movie (disc) & football & goose & long sleeve shirt & short sleeve shirt & raft & rooster & copier \\
321
+ radio & fences & goal net & toys & engine & soccer ball & field goal posts & socks & tennis net & seats \\
322
+ elbows & aardvark & dinosaur & unicycle & honey & legos & fly & roof & baseball & mat \\
323
+ ipad & iphone & hoop & hen & back & table cloth & soccer nets & turkey & pajamas & underpants \\
324
+ goldfish & robot & crusher & animal crackers & basketball court & horn & firefly & armpits & nectar & super hero costume \\
325
+ jetpack & robots & & & & & & & &
326
+ \end{tabular} }
327
+ \caption{Candidate category list (272). {\bf Bold}: selected categories (91). {\bf Bold$^*$}: omitted categories in 2014 release (11).\label{tbl:category_list}}
328
+ \end{table*}
329
+
330
+
331
+ \begin{figure*}\centering
332
+ \begin{subfigure}[b]{0.48\textwidth}
333
+ \includegraphics[width=\textwidth]{figs/ex_pascal_chair}
334
+ \label{fig:train_more_data_pascal_chair}\vspace{-.5cm}
335
+ \caption{PASCAL VOC.}
336
+ \end{subfigure} \quad
337
+ \begin{subfigure}[b]{0.48\textwidth}
338
+ \includegraphics[width=\textwidth]{figs/ex_coco_chair}
339
+ \label{fig:train_more_data_coco_chair}\vspace{-.5cm}
340
+ \caption{\COCO.}
341
+ \end{subfigure}
342
+ \caption{Random chair instances from PASCAL VOC and \COCO. At most one instance is sampled per image.\label{fig:visualization_chair}}
343
+ \end{figure*}
344
+
345
+ \begin{figure*}\centering
346
+ \includegraphics[width=\textwidth]{figs/verification}
347
+ \caption{Examples of borderline segmentations that passed (top) or were rejected (bottom) in the verification stage.\label{verification}}
348
+ \end{figure*}
349
+
350
+ \begin{table*}\centering
351
+ \resizebox{.95\textwidth}{!}{
352
+ \begin{tabular}{c c c c c c c c c c }
353
+ library & church & office & restaurant & kitchen & living room & bathroom & factory & campus & bedroom \\
354
+ child's room & dining room & auditorium & shop & home & hotel & classroom & cafeteria & hospital room & food court \\
355
+ street & park & beach & river & village & valley & market & harbor & yard & parking lot \\
356
+ lighthouse & railway & playground & swimming pool & forest & gas station & garden & farm & mountain & plaza
357
+ \end{tabular} }
358
+ \caption{Scene category list.\label{tbl:scene_category_list}}
359
+ \end{table*}
360
+
361
+
362
+
363
+ \end{document}
papers/1406/1406.6247.tex ADDED
@@ -0,0 +1,457 @@
1
+ \documentclass{article} \usepackage{nips14submit_e,times}
2
+ \usepackage{url}
3
+ \usepackage{verbatim}
4
+ \usepackage{graphicx}
5
+ \usepackage[square,sort,comma,numbers]{natbib}
6
+ \usepackage{amsmath}
7
+ \usepackage{amsfonts}
8
+ \usepackage{amsmath}
9
+ \usepackage{wrapfig}
10
+ \usepackage{subfig}
11
+
12
+
13
+
14
+ \title{Recurrent Models of Visual Attention}
15
+
16
+
17
+ \author{
18
+ Volodymyr Mnih \hspace{0.3cm}
19
+ Nicolas Heess \hspace{0.3cm}
20
+ Alex Graves \hspace{0.3cm}
21
+ Koray Kavukcuoglu \hspace{0.3cm}
22
+ \\
23
+ Google DeepMind\\
24
+ \\
25
+ \small{\texttt{ \{vmnih,heess,gravesa,korayk\} @ google.com }}
26
+ }
27
+
28
+
29
+
30
+ \newcommand{\new}{\marginpar{NEW}}
31
+ \newcommand{\fix}[1]{\textcolor{red}{\textbf{[FIX: #1]}}}
32
+ \newcommand{\modification}[1]{\textcolor{red}{ #1}}
33
+
34
+ \nipsfinalcopy
35
+
36
+ \begin{document}
37
+
38
+
39
+ \maketitle
40
+
41
+ \vspace{-0.5cm}
42
+ \begin{abstract}
43
+ Applying convolutional neural networks to large images is computationally
44
+ expensive because the amount of computation scales linearly with the number of
45
+ image pixels. We present a novel recurrent neural network model that is
46
+ capable of extracting information from an image or video by adaptively
47
+ selecting a sequence of regions or locations and only processing the selected
48
+ regions at high resolution. Like convolutional neural networks, the proposed
49
+ model has a degree of translation invariance built-in, but the amount of
50
+ computation it performs can be controlled independently of the input image
51
+ size. While the model is non-differentiable, it can be trained using
52
+ reinforcement learning methods to learn task-specific policies. We evaluate
53
+ our model on several image classification tasks, where it significantly
54
+ outperforms a convolutional neural network baseline on cluttered images, and on
55
+ a dynamic visual control problem, where it learns to track a simple object
56
+ without an explicit training signal for doing so.
57
+ \end{abstract}
58
+
59
+ \vspace{-0.5cm}
60
+ \section{Introduction}
61
+
62
+ Neural network-based architectures have recently had great success in significantly advancing the state of the art on challenging image classification and object detection datasets~\cite{krizhevsky-imagenet,Girshick:CoRR:2013,Sermanet:ARXIV:2013}.
63
+ Their excellent recognition accuracy, however, comes at a high computational cost both at training and testing time.
64
+ The large convolutional neural networks in current use typically take days to train on multiple GPUs, even though the input images are downsampled to reduce computation~\cite{krizhevsky-imagenet}.
65
+ In the case of object detection, processing a single image at test time currently takes seconds when running on a single GPU~\cite{Girshick:CoRR:2013,Sermanet:ARXIV:2013}, as these approaches effectively follow the classical sliding-window paradigm from the computer vision literature, in which a classifier, trained to detect an object in a tightly cropped bounding box, is applied independently to thousands of candidate windows from the test image at different positions and scales. Although some computations can be shared,
66
+ the main computational expense for these models comes from convolving filter maps with the entire input image; their computational complexity is therefore at least linear in the number of pixels.
67
+
68
+
69
+
70
+
71
+ One important property of human perception is that one does not tend to process a whole scene in its entirety at once. Instead humans focus attention selectively on parts of the visual space to acquire information when and where it is needed, and combine information from different fixations over time to build up an internal representation of the scene \cite{Rensink:VisCog:2000}, guiding future eye movements and decision making.
72
+ Focusing the computational resources on parts of a scene saves ``bandwidth'' as fewer ``pixels'' need to be processed. But it also substantially reduces the task complexity as the object of interest can be placed in the center of the fixation and irrelevant features of the visual environment (``clutter'') outside the fixated region are naturally ignored.
73
+
74
+ In line with its fundamental role, the guidance of human eye movements has been extensively studied in neuroscience and cognitive science literature. While low-level scene properties and bottom up processes (e.g.\ in the form of saliency; \cite{Itti:PAMI:1998}) play an important role, the locations on which humans fixate have also been shown to be strongly task specific (see~\cite{Hayhoe:TICS:2005} for a review and also e.g.\ \cite{Torralba:PsychRev:2006,Mathe:NIPS:2013}).
75
+ In this paper we
76
+ take inspiration from these results and
77
+ develop a novel framework for attention-based task-driven visual processing with neural networks. Our model considers attention-based processing of a visual scene as a \textit{control problem} and is general enough to be applied to static images, videos, or as a perceptual module of an agent that interacts with a dynamic visual environment (e.g. robots, computer game playing agents).
78
+
79
+ The model is a recurrent neural network (RNN) which processes inputs sequentially, attending to different locations within the images (or video frames) one at a time, and incrementally combines information from these fixations to build up a dynamic internal representation of the scene or environment.
80
+ Instead of processing an entire image or even bounding box at once,
81
+ at each step, the model selects the next location to attend to based on past information \textit{and} the demands of the task.
82
+ Both the number of parameters in our model and the amount of computation it performs can be controlled independently of the size of the input image, which is in contrast to convolutional networks whose computational demands scale linearly with the number of image pixels.
83
+ We describe an end-to-end optimization procedure that allows the model to be trained directly with respect to a given task and to maximize a performance measure which may depend on the entire sequence of decisions made by the model. This procedure uses backpropagation to train the neural-network components and policy gradient to address the non-differentiabilities due to the control problem.
84
+
85
+ We show that our model can learn effective task-specific strategies for where to look on several image classification tasks as well as a dynamic visual control problem.
86
+ Our results also suggest that an attention-based model may be better than a convolutional neural network at both dealing with clutter and scaling up to large input images.
87
+
88
+ \vspace{-0.3cm}
89
+
90
+ \section{Previous Work}
91
+ \label{sec:Related}
92
+ \vspace{-0.4cm}
93
+
94
+ Computational limitations have received much attention in the computer vision literature. For instance, for object detection, much work has been dedicated to reducing the cost of the widespread sliding window paradigm, focusing primarily on reducing the number of windows for which the full classifier is evaluated, e.g.\ via classifier cascades (e.g.\ \cite{Viola:CVPR:2001,Felzenszwalb:CVPR:2010}),
95
+ removing image regions from consideration via a branch and bound approach on the classifier output (e.g.\ \cite{Lampert:CVPR:2008}), or by proposing candidate windows that are likely to contain objects (e.g.\ \cite{Alexe:CVPR:2010,Sande:ICCV:2011}). Even though substantial speedups may be obtained with such approaches, and some of these can be combined with or used as an add-on to CNN classifiers \cite{Girshick:CoRR:2013}, they remain firmly rooted in the window classifier design for object detection and only exploit past information to inform future processing of the image in a very limited way.
96
+
97
+ A second class of approaches that has a long history in computer vision and is strongly motivated by human perception is that of saliency detectors (e.g.\ \cite{Itti:PAMI:1998}). These approaches prioritize the processing of potentially interesting (``salient'') image regions, which are typically identified based on some measure of local low-level feature contrast. Saliency detectors indeed capture some of the properties of human eye movements, but they typically do not integrate information across fixations, their saliency computations are mostly hardwired, and they are based on low-level image properties only, usually ignoring other factors such as the semantic content of a scene and task demands (but see \cite{Torralba:PsychRev:2006}).
98
+
99
+ Some works in the computer vision literature and elsewhere, e.g.\ \cite{Stanley:GECCO:2004,Paletta:ICML:2005,Butko:CVPR:2009,Larochelle:NIPS:2010,Denil:NC:2013,Alexe:NIPS:2012,Ranzato:ARXIV:2014}, have embraced vision as a sequential decision task, as we do here. There, as in our work, information about the image is gathered sequentially and the decision where to attend next is based on previous fixations of the image. \cite{Butko:CVPR:2009} applies the learned Bayesian observer model from \cite{Butko:ICDL:2008} to the task of object detection. The learning framework of \cite{Butko:ICDL:2008} is related to ours as they also employ a policy gradient formulation (cf.\ section \ref{sec:Model}), but their overall setup is considerably more restrictive than ours and only some parts of the system are learned.
100
+
101
+ Our work is perhaps the most similar to the other attempts to implement attentional processing in a deep learning framework~\cite{Larochelle:NIPS:2010,Denil:NC:2013,Ranzato:ARXIV:2014}. Our formulation which employs an RNN to integrate visual information over time and to decide how to act is, however, more general, and our learning procedure allows for end-to-end optimization of the sequential decision process instead of relying on greedy action selection. We further demonstrate how the same general architecture can be used for efficient object recognition in still images as well as to interact with a dynamic visual environment in a task-driven way.
102
+
103
+ \vspace{-0.2cm}
104
+ \section{The Recurrent Attention Model (RAM)}
105
+ \label{sec:Model}
106
+ \vspace{-0.4cm}
107
+
108
+ In this paper we consider the attention problem as the sequential decision process of a goal-directed agent interacting with a visual environment. At each point in time, the agent observes the environment only via a bandwidth-limited sensor, i.e.\ it never senses the environment in full. It may extract information only in a local region or in a narrow frequency band. The agent can, however, actively control how to deploy its sensor resources (e.g. choose the sensor location). The agent can also affect the true state of the environment by executing actions. Since the environment is only partially observed the agent needs to integrate information over time in order to determine how to act and how to deploy its sensor most effectively. At each step, the agent receives a scalar reward (which depends on the actions the agent has executed and can be delayed), and the goal of the agent is to maximize the total sum of such rewards.
109
+
110
+ This formulation encompasses tasks as diverse as object detection in static images and control problems like playing a computer game from the image stream visible on the screen.
111
+ For a game, the environment state would be the true state of the game engine and the agent's sensor would operate on the video frame shown on the screen. (Note that for most games, a single frame would not fully specify the game state). The environment actions here would correspond to joystick controls, and the reward would reflect points scored.
112
+ For object detection in static images the state of the environment would be fixed and correspond to the true contents of the image. The environmental action would correspond to the classification decision (which may be executed only after a fixed number of fixations), and the reward would reflect if the decision is correct.
113
+
114
+ \vspace{-0.3cm}
115
+ \subsection{Model}
116
+ \vspace{-0.3cm}
117
+
118
+ \begin{figure}
119
+ \begin{center}
120
+ \includegraphics[width=0.99\linewidth]{figures/model.pdf}
121
+ \caption{\label{fig-model}\textbf{A) Glimpse Sensor:} Given the coordinates of the glimpse and an input image, the sensor extracts a \textit{retina-like} representation $\rho(x_t, l_{t-1})$ centered at $l_{t-1}$ that contains multiple resolution patches. \textbf{B) Glimpse Network:} Given the location $(l_{t-1})$ and input image $(x_t)$, the glimpse network uses the glimpse sensor to extract the retina representation $\rho(x_t,l_{t-1})$. The retina representation and glimpse location are then mapped into a hidden space using independent linear layers parameterized by $\theta_g^0$ and $\theta_g^1$ respectively, using rectified units, followed by another linear layer $\theta_g^2$ that combines the information from both components. The glimpse network $f_g(.;\{\theta_g^0,\theta_g^1,\theta_g^2\})$ defines a trainable bandwidth-limited sensor for the attention network, producing the glimpse representation $g_t$. \textbf{C) Model Architecture:} Overall, the model is an RNN. The core network of the model $f_h(.;\theta_h)$ takes the glimpse representation $g_t$ as input and, combining it with the internal representation at the previous time step $h_{t-1}$, produces the new internal state of the model $h_t$. The location network $f_l(.;\theta_l)$ and the action network $f_a(.;\theta_a)$ use the internal state $h_t$ of the model to produce the next location to attend to, $l_{t}$, and the action/classification $a_t$, respectively. This basic RNN iteration is repeated for a variable number of steps.}
122
+ \end{center}
123
+ \vspace{-0.5cm}
124
+ \end{figure}
125
+
126
+ The agent is built around a recurrent neural network as shown in Fig.~\ref{fig-model}. At each time step, it processes the sensor data, integrates information over time, and chooses how to act and how to deploy its sensor at next time step:
127
+
128
+
129
+ \textbf{Sensor:} At each step $t$ the agent receives a (partial) observation of the environment in the form of an image $x_t$. The agent does not have full access to this image but rather can extract information from $x_t$ via its bandwidth limited sensor $\rho$, e.g.\ by focusing the sensor on some region or frequency band of interest.
130
+
131
+ In this paper we assume that the bandwidth-limited sensor extracts a retina-like representation $\rho(x_t, l_{t-1})$ around location $l_{t-1}$ from image $x_t$. It encodes the region around $l$ at a high-resolution but uses a progressively lower resolution for pixels further from $l$, resulting in a vector of much lower dimensionality than the original image $x$. We will refer to this low-resolution representation as a \textit{glimpse} \cite{Larochelle:NIPS:2010}.
132
+ The glimpse sensor is used inside what we call the \textit{glimpse network} $f_g$ to produce the glimpse feature vector $g_t = f_g( x_t,l_{t-1}; \theta_g)$ where $\theta_g = \{ \theta_g^0, \theta_g^1, \theta_g^2 \}$ (Fig.~\ref{fig-model}B).
133
+
134
+ \textbf{Internal state:} The agent maintains an internal state which summarizes information extracted from the history of past observations; it encodes the agent's knowledge of the environment and is instrumental in deciding how to act and where to deploy the sensor.
135
+ This internal state is formed by the hidden units $h_t$ of the recurrent neural network and updated over time by the \textit{core network}: $h_t = f_h(h_{t-1}, g_t; \theta_h)$. The external input to the network is the glimpse feature vector $g_t$.
136
+
137
+
138
+
139
+
140
+ \textbf{Actions:} At each step, the agent performs two actions: it decides how to deploy its sensor via the sensor control $l_t$, and it executes an environment action $a_t$ that might affect the state of the environment. The nature of the environment action depends on the task.
141
+ In this work, the location actions are chosen stochastically from a distribution parameterized by the location network $f_l(h_t;\theta_l)$ at time $t$: $l_t \sim p(\cdot | f_l(h_t; \theta_l))$. The environment action $a_t$ is similarly drawn from a distribution conditioned on a second network output $a_t \sim p(\cdot|f_a(h_t;\theta_a))$. For classification it is formulated using a softmax output and for dynamic environments, its exact formulation depends on the action set defined for that particular environment (e.g. joystick movements, motor control, ...).
142
+
143
+ \textbf{Reward:} After executing an action the agent receives a new visual observation of the environment $x_{t+1}$ and a reward signal $r_{t+1}$. The goal of the agent is to maximize the sum of the reward signal\footnote{
144
+ Depending on the scenario it may be more appropriate to consider a sum of \textit{discounted} rewards, where rewards obtained in the distant future contribute less: $R = \sum_{t=1}^T \gamma^{t-1} r_t$. In this case we can have $T \rightarrow \infty$.
145
+ } which is usually very sparse and delayed: $R = \sum_{t=1}^T r_t$.
146
+ In the case of object recognition, for example, $r_T=1$ if the object is classified correctly after $T$ steps and $0$ otherwise.
147
+
148
+ The above setup is a special instance of what is known in the RL community as a Partially Observable Markov Decision Process (POMDP). The true state of the environment (which can be static or dynamic) is unobserved.
149
+ In this view, the agent needs to learn a (stochastic) policy $\pi((l_t,a_t) | s_{1:t}; \theta)$ with parameters $\theta$ that, at each step $t$, maps the history of past interactions with the environment $s_{1:t} = x_1,l_1,a_1,\dots x_{t-1},l_{t-1},a_{t-1},x_t$ to a distribution over actions for the current time step, subject to the constraint of the sensor. In our case, the policy $\pi$ is defined by the RNN outlined above, and the history $s_t$ is summarized in the state of the hidden units $h_t$.
150
+ We will describe the specific choices for the above components in Section~\ref{sec:exp}.
151
+
152
+
153
+
154
+ \vspace{-0.3cm}
155
+ \subsection{Training}
156
+ \vspace{-0.3cm}
157
+
158
+ The parameters of our agent are given by the parameters of the glimpse network, the core network (Fig.~\ref{fig-model}C), and the action network $\theta = \{ \theta_g, \theta_h, \theta_a \}$ and we learn these to maximize the total reward the agent can expect when interacting with the environment.
159
+
160
+ More formally, the policy of the agent, possibly in combination with the dynamics of the environment (e.g.\ for game-playing), induces a distribution over possible interaction sequences $s_{1:T}$, and we aim to maximize the reward under this distribution: $J(\theta) = \mathbb{E}_{p(s_{1:T};\theta)}\left[ \sum_{t=1}^T r_t \right]=\mathbb{E}_{p(s_{1:T};\theta)}\left[ R \right]$, where $p(s_{1:T};\theta)$ depends on the policy.
161
+
162
+ Maximizing $J$ exactly is non-trivial since it involves an expectation over the high-dimensional interaction sequences which may in turn involve unknown environment dynamics. Viewing the problem as a POMDP, however, allows us to bring techniques from the RL literature to bear: As shown by Williams \cite{Williams:ML:1992} a sample approximation to the gradient is given by
163
+ \begin{align}
164
+ \nabla_\theta J
165
+ = \sum_{t=1}^T \mathbb{E}_{p(s_{1:T};\theta)}\left[ \nabla_\theta \log \pi(u_t | s_{1:t}; \theta) R \right] \approx
166
+ \frac{1}{M} \sum_{i=1}^M \sum_{t=1}^T \nabla_\theta \log \pi(u_t^i | s_{1:t}^i; \theta) R^i, \label{eq:REINFORCE}
167
+ \end{align}
168
+ where $s^i$'s are interaction sequences obtained by running the current agent $\pi_\theta$ for $i=1 \dots M$ episodes.
169
+
170
+ The learning rule (\ref{eq:REINFORCE}) is also known as the REINFORCE rule, and it involves running the agent with its current policy to obtain samples of interaction sequences $s_{1:T}$ and then adjusting the parameters $\theta$ of our agent such that the log-probability of chosen actions that have led to high cumulative reward is increased, while that of actions having produced low reward is decreased.
171
+
172
+ Eq.\ (\ref{eq:REINFORCE}) requires us to compute $\nabla_\theta \log \pi(u_t^i | s_{1:t}^i; \theta)$. But this is just the gradient of the RNN that defines our agent evaluated at time step $t$ and can be computed by standard backpropagation~\cite{Wierstra2007POMDP}.
173
+
174
+ \textbf{Variance Reduction :}
175
+ Equation (\ref{eq:REINFORCE}) provides us with an unbiased estimate of the gradient but it may have high variance. It is therefore common to consider a gradient estimate of the form
176
+ \begin{align}
177
+ \frac{1}{M} \sum_{i=1}^M \sum_{t=1}^T \nabla_\theta \log \pi(u_t^i | s_{1:t}^i; \theta) \left (R_t^i - b_t \right ), \label{eq:REINFORCEbaseline}
178
+ \end{align}
179
+ where $R_t^i = \sum_{t'=t}^T r_{t'}^i$ is the cumulative reward obtained \textit{following} the execution of action $u_t^i$, and $b_t$ is a baseline that may depend on $s_{1:t}^i$ (e.g.\ via $h_t^i$) but not on the action $u_t^i$ itself.
180
+ This estimate is equal to (\ref{eq:REINFORCE}) in expectation but may have lower variance. It is natural to select $b_t= \mathbb{E}_{\pi}\left[R_t\right]$~\cite{Sutton00policygradient}, and this form of baseline is known as the value function in the reinforcement learning literature.
181
+ The resulting algorithm increases the log-probability of an action that was followed by a larger than expected cumulative reward, and decreases the probability if the obtained cumulative reward was smaller.
182
+ We use this type of baseline and learn it by reducing the squared error between $R_t^i$'s and $b_t$.
183
+
184
+ \textbf{Using a Hybrid Supervised Loss:}
185
+ The algorithm described above allows us to train the agent when the ``best'' actions are unknown, and the learning signal is only provided via the reward. For instance, we may not know a priori which sequence of fixations provides most information about an unknown image, but the total reward at the end of an episode will give us an indication whether the tried sequence was good or bad.
186
+
187
+ However, in some situations we do know the correct action to take: For instance, in an object detection task the agent has to output the label of the object as the final action. For the training images this label will be known and we can directly optimize the policy to output the correct label associated with a training image at the end of an observation sequence. This can be achieved, as is common in supervised learning, by maximizing the conditional probability of the true label given the observations from the image, i.e. by maximizing $\log \pi(a_T^* | s_{1:T}; \theta)$, where $a_T^*$ corresponds to the ground-truth label(-action) associated with the image from which observations $s_{1:T}$ were obtained.
188
+ We follow this approach for classification problems where we optimize the cross entropy loss to train the action network $f_a$ and backpropagate the gradients through the core and glimpse networks. The location network $f_l$ is always trained with REINFORCE.
189
+
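+ The sketch below shows how the two learning signals can be combined for a single training image: a cross entropy term for the final classification action and a REINFORCE surrogate whose gradient with respect to the location-policy parameters matches the estimator above; the function signature and variable names are assumptions made for illustration, not the exact implementation used for the experiments in this paper.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def hybrid_loss_terms(class_logits, true_label, loc_log_probs,
+                       reward, baseline):
+     # Supervised term: cross entropy on the final classification action a_T.
+     z = class_logits - class_logits.max()
+     log_probs = z - np.log(np.exp(z).sum())
+     ce_loss = -log_probs[true_label]
+     # REINFORCE surrogate for the location network f_l: reward and baseline
+     # are treated as constants, so minimizing this term performs gradient
+     # ascent on the policy objective with a baseline.
+     reinforce_surrogate = -np.sum(loc_log_probs) * (reward - baseline)
+     return ce_loss, reinforce_surrogate
+ \end{verbatim}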
190
+ \vspace{-0.3cm}
191
+ \newcommand{\linl}[1]{Linear(#1)}
192
+ \section{Experiments}
193
+ \vspace{-0.3cm}
194
+
195
+ \label{sec:exp}
196
+ We evaluated our approach on several image classification tasks as well as a simple game.
197
+ We first describe the design choices that were common to all our experiments:
198
+
199
+ \textbf{Retina and location encodings:}
200
+ The retina encoding $\rho(x, l)$ extracts $k$ square patches centered at location $l$, with the first patch being $g_w \times g_w$ pixels in size, and each successive patch having twice the width of the previous.
201
+ The $k$ patches are then all resized to $g_w\times g_w$ and concatenated.
202
+ Glimpse locations $l$ were encoded as real-valued $(x,y)$ coordinates\footnote{We also experimented with using a discrete representation for the locations $l$ but found that it was difficult to learn policies over more than $25$ possible discrete locations.} with $(0,0)$ being the center of the image $x$ and $(-1,-1)$ being the top left corner of $x$.
203
+
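+ A minimal sketch of the retina encoding $\rho(x, l)$ described above is given below, using nearest-neighbour resizing and zero padding at the image border; the padding scheme and the default values of $g_w$ and $k$ are assumptions of this sketch.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def glimpse(image, loc, g_w=8, k=3):
+     # image: 2-D array; loc: (x, y) in [-1, 1] with (0, 0) the image
+     # centre and (-1, -1) the top-left corner, as in the text above.
+     H, W = image.shape
+     cy = int((loc[1] + 1) / 2 * (H - 1))
+     cx = int((loc[0] + 1) / 2 * (W - 1))
+     patches = []
+     size = g_w
+     for _ in range(k):
+         half = size // 2
+         # zero-pad so patches partially outside the image are defined
+         padded = np.pad(image, half, mode="constant")
+         patch = padded[cy:cy + size, cx:cx + size]
+         # nearest-neighbour resize back to g_w x g_w
+         idx = np.linspace(0, size - 1, g_w).astype(int)
+         patches.append(patch[np.ix_(idx, idx)])
+         size *= 2      # each successive patch has twice the width
+     return np.concatenate([p.ravel() for p in patches])
+ \end{verbatim}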
204
+ \textbf{Glimpse network:}
205
+ The glimpse network $f_g(x, l)$ had two fully connected layers.
206
+ Let $\linl{x}$ denote a linear transformation of the vector $x$, i.e. $\linl{x}=Wx+b$ for some weight matrix $W$ and bias vector $b$, and let $Rect(x)=\max(x,0)$ be the rectifier nonlinearity.
207
+ The output $g$ of the glimpse network was defined as $g = Rect(\linl{h_g} + \linl{h_l})$ where $h_g = Rect(\linl{\rho(x, l)})$ and $h_l = Rect(\linl{l})$.
208
+ The dimensionality of $h_g$ and $h_l$ was $128$ while the dimensionality of $g$ was $256$ for all attention models trained in this paper.
209
+
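+ The glimpse network then reduces to a few linear maps and rectifiers; a minimal sketch follows, where the parameter dictionary and its key names are assumptions made for illustration.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def rect(x):
+     return np.maximum(x, 0.0)
+ 
+ def glimpse_network(params, retina_vec, loc):
+     # params maps names to (W, b) pairs; dimensions as in the text
+     # (h_g, h_l: 128, g: 256).
+     def linear(name, x):
+         W, b = params[name]
+         return W @ x + b
+     h_g = rect(linear("glimpse", retina_vec))   # Rect(Linear(rho(x, l)))
+     h_l = rect(linear("loc", loc))              # Rect(Linear(l))
+     return rect(linear("out_g", h_g) + linear("out_l", h_l))
+ \end{verbatim}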
210
+ \textbf{Location network:} The policy for the locations $l$ was defined by a two-component Gaussian with a fixed variance.
211
+ The location network outputs the mean of the location policy at time $t$ and is defined as $f_l(h) = \linl{h}$ where $h$ is the state of the core network/RNN.
212
+
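+ A minimal sketch of sampling from this location policy is given below, assuming a diagonal Gaussian with a placeholder standard deviation and clipping of the sampled coordinates to the valid range; the clipping and the numeric value of the standard deviation are assumptions of this sketch.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def sample_location(params, h, sigma=0.1, rng=None):
+     # Mean of the two-component Gaussian policy is f_l(h) = Linear(h);
+     # sigma is a placeholder for the fixed standard deviation.
+     if rng is None:
+         rng = np.random.default_rng()
+     W, b = params["f_l"]
+     mean = W @ h + b
+     loc = rng.normal(mean, sigma)     # l_t ~ N(f_l(h_t), sigma^2 I)
+     log_prob = -np.sum((loc - mean) ** 2) / (2 * sigma ** 2)  # up to a constant
+     return np.clip(loc, -1.0, 1.0), log_prob
+ \end{verbatim}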
213
+ \textbf{Core network:}
214
+ For the classification experiments that follow, the core $f_h$ was a network of rectifier units defined as $h_t=f_h(h_{t-1}, g_t) = Rect(\linl{h_{t-1}}+\linl{g_t})$.
215
+ The experiment done on a dynamic environment used a core of LSTM units~\cite{hochreiter1997lstm}.
216
+
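+ In code, one step of this rectifier core amounts to the following sketch; the parameter names are ours.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def core_step(params, h_prev, g):
+     # h_t = Rect(Linear(h_{t-1}) + Linear(g_t)); the game experiments
+     # replace this update with an LSTM core.
+     W_h, b_h = params["core_h"]
+     W_g, b_g = params["core_g"]
+     return np.maximum(W_h @ h_prev + b_h + W_g @ g + b_g, 0.0)
+ \end{verbatim}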
217
+ \subsection{Image Classification}
218
+ \vspace{-0.2cm}
219
+ The attention network used in the following classification experiments made a classification decision only at the last timestep $t=N$.
220
+ The action network $f_a$ was simply a linear softmax classifier defined as $f_a(h) = \exp\left(\linl{h} \right)/Z$, where $Z$ is a normalizing constant.
221
+ The RNN state vector $h$ had dimensionality $256$. All methods were trained using stochastic gradient descent with momentum of $0.9$. Hyperparameters such as the learning rate and the variance of the location policy were selected using random search~\cite{bergstra2012random}.
222
+ The reward at the last time step was $1$ if the agent classified correctly and $0$ otherwise. The rewards for all other timesteps were $0$.
223
+
224
+ \begin{table}[t]
225
+ \centering
226
+ \subfloat{
227
+ \begin{tabular}{ll}
228
+ \multicolumn{2}{c}{\textbf{(a) 28x28 MNIST}}\\
229
+ \hline
230
+ Model & Error \\
231
+ \hline
232
+ FC, 2 layers (256 hiddens each) & \textbf{1.35}\% \\
233
+ 1 Random Glimpse, $8\times 8$, 1 scale & 42.85\% \\
234
+ RAM, 2 glimpses, $8\times 8$, 1 scale & 6.27\% \\
235
+ RAM, 3 glimpses, $8\times 8$, 1 scale & 2.7\% \\
236
+ RAM, 4 glimpses, $8\times 8$, 1 scale & 1.73\% \\
237
+ RAM, 5 glimpses, $8\times 8$, 1 scale & 1.55\% \\
238
+ RAM, 6 glimpses, $8\times 8$, 1 scale & \textbf{1.29}\% \\
239
+ RAM, 7 glimpses, $8\times 8$, 1 scale & 1.47\% \\
240
+ \end{tabular}
241
+ \label{tbl:mnist}
242
+ }
243
+ \subfloat{
244
+ \begin{tabular}{ll}
245
+ \multicolumn{2}{c}{\textbf{(b) 60x60 Translated MNIST}}\\
246
+ \hline
247
+ Model & Error \\
248
+ \hline
249
+ FC, 2 layers (64 hiddens each) & 7.56\% \\
250
+ FC, 2 layers (256 hiddens each) & 3.7\% \\
251
+ Convolutional, 2 layers & 2.31\% \\
252
+ RAM, 4 glimpses, $12\times 12$, 3 scales & 2.29\% \\
253
+ RAM, 6 glimpses, $12\times 12$, 3 scales & \textbf{1.86}\% \\
254
+ RAM, 8 glimpses, $12\times 12$, 3 scales & \textbf{1.84}\% \\
255
+ \label{tbl:mnist60}
256
+ \end{tabular}
257
+ }
258
+ \vspace{-0.2cm}
259
+ \caption{Classification results on the MNIST and Translated MNIST datasets. FC denotes a fully-connected network with two layers of rectifier units. The convolutional network had one layer of 8 $10\times 10$ filters with stride 5, followed by a fully connected layer with 256 units with rectifiers after each layer. Instances of the attention model are labeled with the number of glimpses, the number of scales in the retina, and the size of the retina.}
260
+ \vspace{-0.3cm}
261
+ \end{table}
262
+
263
+ \textbf{Centered Digits:}
264
+ We first tested the ability of our training method to learn successful glimpse policies by using it to train RAM models with up to $7$ glimpses on the MNIST digits dataset.
265
+ The ``retina'' for this experiment was simply an $8\times 8$ patch, which is only big enough to capture a part of a digit, hence the experiment also tested the ability of RAM to combine information from multiple glimpses.
266
+ Note that since the first glimpse is always random, the single glimpse model is effectively a classifier that gets a single random $8\times 8$ patch as input.
267
+ We also trained a standard feedforward neural network with two hidden layers of 256 rectified linear units as a baseline.
268
+ The error rates achieved by the different models on the test set are shown in Table \ref{tbl:mnist}.
269
+ We see that each additional glimpse improves the performance of RAM until it reaches its minimum with $6$ glimpses, where it matches the performance of the fully connected model trained on the full $28\times 28$ centered digits.
270
+ This demonstrates that the model can successfully learn to combine information from multiple glimpses.
271
+ \begin{figure}
272
+ \centering
273
+
274
+ \subfloat[Random test cases for the Translated MNIST task.\label{fig:mnist60}]{\includegraphics[width=0.45\textwidth]{figures/patches_translated.png}
275
+ }
276
+ \hfill
277
+ \subfloat[Random test cases for the Cluttered Translated MNIST task.\label{fig:mnist60c}]{\includegraphics[width=0.45\textwidth]{figures/patches_cluttered.png}
278
+ }
279
+ \caption{Examples of test cases for the Translated and Cluttered Translated MNIST tasks.}
280
+ \vspace{-0.5cm}
281
+ \end{figure}
282
+
283
+ \textbf{Non-Centered Digits:}
284
+ The second problem we considered was classifying non-centered digits.
285
+ We created a new task called Translated MNIST, for which data was generated by placing an MNIST digit in a random location of a larger blank patch.
286
+ Training cases were generated on the fly so the effective training set size was 50000 (the size of the MNIST training set) multiplied by the possible number of locations.
287
+ Figure~\ref{fig:mnist60} contains a random sample of test cases for the $60$ by $60$ Translated MNIST task.
288
+ Table~\ref{tbl:mnist60} shows the results for several different models trained on the Translated MNIST task with 60 by 60 patches.
289
+ In addition to RAM and two fully-connected networks we also trained a network with one convolutional layer of $16$ $10\times 10$ filters with stride $5$ followed by a rectifier nonlinearity and then a fully-connected layer of $256$ rectifier units.
290
+ The convolutional network, the RAM networks, and the smaller fully connected model all had roughly the same number of parameters.
291
+ Since the convolutional network has some degree of translation invariance built in, it attains a significantly lower error rate ($2.3\%$) than the fully connected networks.
292
+ However, RAM with 4 glimpses gets roughly the same performance as the convolutional network and outperforms it for 6 and 8 glimpses, reaching roughly $1.9\%$ error.
293
+ This is possible because the attention model can focus its retina on the digit and hence learn a translation invariant policy.
294
+ This experiment also shows that the attention model is able to successfully search for an object in a big image when the object is not centered.
295
+
296
+ \textbf{Cluttered Non-Centered Digits:}
297
+ One of the most challenging aspects of classifying real-world images is the presence of a wide range of clutter.
298
+ Systems that operate on the entire image at full resolution are particularly susceptible to clutter and must learn to be invariant to it.
299
+ One possible advantage of an attention mechanism is that it may make it easier to learn in the presence of clutter by focusing on the relevant part of the image and ignoring the irrelevant part.
300
+ We test this hypothesis with several experiments on a new task we call Cluttered Translated MNIST.
301
+ Data for this task was generated by first placing an MNIST digit in a random location of a larger blank image and then adding random $8$ by $8$ subpatches from other random MNIST digits to random locations of the image.
302
+ The goal is to classify the complete digit present in the image.
303
+ Figure~\ref{fig:mnist60c} shows a random sample of test cases for the $60$ by $60$ Cluttered Translated MNIST task.
304
+
305
+ \begin{table}[t]
306
+ \centering
307
+ \subfloat{
308
+ \begin{tabular}{ll}
309
+ \multicolumn{2}{c}{\textbf{(a) 60x60 Cluttered Translated MNIST}}\\
310
+ \hline
311
+ Model & Error \\
312
+ \hline
313
+ FC, 2 layers (64 hiddens each) & 28.96\% \\
314
+ FC, 2 layers (256 hiddens each) & 13.2\% \\
315
+ Convolutional, 2 layers & 7.83\%\\
316
+ RAM, 4 glimpses, $12\times 12$, 3 scales & 7.1\% \\
317
+ RAM, 6 glimpses, $12\times 12$, 3 scales & 5.88\% \\
318
+ RAM, 8 glimpses, $12\times 12$, 3 scales & 5.23\% \\
319
+ \end{tabular}
320
+ \label{tbl:mnist60c}
321
+ }
322
+ \subfloat{
323
+ \begin{tabular}{ll}
324
+ \multicolumn{2}{c}{\textbf{(b) 100x100 Cluttered Translated MNIST}}\\
325
+ \hline
326
+ Model & Error \\
327
+ \hline
328
+ Convolutional, 2 layers & 16.51\%\\
329
+ RAM, 4 glimpses, $12\times 12$, 4 scales & 14.95\% \\
330
+ RAM, 6 glimpses, $12\times 12$, 4 scales & 11.58\% \\
331
+ RAM, 8 glimpses, $12\times 12$, 4 scales & 10.83\% \\
332
+ \label{tbl:mnist100c}
333
+ \end{tabular}
334
+ }
335
+ \vspace{-0.1cm}
336
+ \caption{Classification on the Cluttered Translated MNIST dataset. FC denotes a fully-connected network with two layers of rectifier units. The convolutional network had one layer of 8 $10\times 10$ filters with stride 5, followed by a fully connected layer with 256 units in the $60\times 60$ case and 86 units in the $100\times 100$ case with rectifiers after each layer. Instances of the attention model are labeled with the number of glimpses, the size of the retina, and the number of scales in the retina. All models except for the big fully connected network had roughly the same number of parameters.}
337
+ \vspace{-0.5cm}
338
+ \end{table}
339
+
340
+ \begin{figure}
341
+ \vspace{0.2cm}
342
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975022.png}
343
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975036.png}
344
+ \centering
345
+ \caption{Examples of the learned policy on $60\times60$ cluttered-translated MNIST task. Column 1: The input image with glimpse path overlaid in green. Columns 2-7: The six glimpses the network chooses. The center of each image shows the full resolution glimpse, the outer low resolution areas are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths clearly show that the learned policy avoids computation in empty or noisy parts of the input space and directly explores the area around the object of interest.}
346
+ \vspace{-0.3cm}
347
+ \label{fig:policy}
348
+ \end{figure}
349
+
350
+ Table~\ref{tbl:mnist60c} shows the classification results for the models we trained on $60$ by $60$ Cluttered Translated MNIST with 4 pieces of clutter.
351
+ The presence of clutter makes the task much more difficult but the performance of the attention model is affected less than the performance of the other models.
352
+ RAM with 4 glimpses reaches $7.1\%$ error, which outperforms fully-connected models by a wide margin and the convolutional neural network by $0.7\%$, and RAM trained with 6 and 8 glimpses achieves even lower error.
353
+ Since RAM achieves larger relative error improvements over a convolutional network in the presence of clutter, these results suggest that attention-based models may be better at dealing with clutter than convolutional networks because they can simply ignore it by not looking at it. Two samples of the learned policy are shown in Figure~\ref{fig:policy} and more are included in the supplementary materials. The first column shows the original data point with the glimpse path overlaid. The location of the first glimpse is marked with a filled circle and the location of the final glimpse is marked with an empty circle. The intermediate points on the path are traced with solid straight lines. Each consecutive image to the right shows a representation of the glimpse that the network sees. It can be seen that the learned policy can reliably find and explore around the object of interest while avoiding clutter at the same time.
354
+
355
+ To further test this hypothesis we also performed experiments on $100$ by $100$ Cluttered Translated MNIST with 8 pieces of clutter.
356
+ The test errors achieved by the models we compared are shown in Table~\ref{tbl:mnist100c}.
357
+ The results show similar improvements of RAM over a convolutional network.
358
+ It has to be noted that the overall capacity and the amount of computation of our model do not change from $60 \times 60$ images to $100 \times 100$, whereas the hidden layer of the convolutional network that is connected to the linear layer grows linearly with the number of pixels in the input.
359
+
360
+
361
+ \vspace{-0.3cm}
362
+ \subsection{Dynamic Environments}
363
+ \vspace{-0.3cm}
364
+
365
+ One appealing property of the recurrent attention model is that it can be applied to videos or interactive problems with a visual input just as easily as to static image tasks.
366
+ We test the ability of our approach to learn a control policy in a dynamic visual environment while perceiving the environment through a bandwidth-limited retina by training it to play a simple game.
367
+ The game is played on a $24$ by $24$ screen of binary pixels and involves two objects: a single pixel that represents a ball falling from the top of the screen while bouncing off the sides of the screen, and a two-pixel paddle positioned at the bottom of the screen, which the agent controls with the aim of catching the ball.
368
+ When the falling pixel reaches the bottom of the screen the agent gets a reward of 1 if the paddle overlaps with the ball and a reward of 0 otherwise.
369
+ The game then restarts from the beginning.
370
+
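+ A minimal sketch of such a ``Catch'' environment is given below; details that the text leaves open, such as the initial paddle position and the exact bouncing rule, are assumptions of this sketch.
+ \begin{verbatim}
+ import numpy as np
+ 
+ class Catch:
+     def __init__(self, size=24, rng=None):
+         self.size = size
+         self.rng = rng if rng is not None else np.random.default_rng()
+         self.reset()
+ 
+     def reset(self):
+         self.ball = [0, int(self.rng.integers(self.size))]  # falling pixel
+         self.vel = int(self.rng.choice([-1, 1]))             # horizontal dir
+         self.paddle = self.size // 2           # left cell of 2-pixel paddle
+         return self.render()
+ 
+     def step(self, action):        # action in {-1, 0, +1}: left, stay, right
+         self.paddle = int(np.clip(self.paddle + action, 0, self.size - 2))
+         self.ball[0] += 1
+         self.ball[1] += self.vel
+         if self.ball[1] < 0 or self.ball[1] >= self.size:    # bounce
+             self.vel = -self.vel
+             self.ball[1] += 2 * self.vel
+         done = self.ball[0] == self.size - 1
+         caught = self.paddle <= self.ball[1] <= self.paddle + 1
+         reward = 1.0 if done and caught else 0.0
+         return self.render(), reward, done
+ 
+     def render(self):
+         screen = np.zeros((self.size, self.size))
+         screen[self.ball[0], self.ball[1]] = 1.0
+         screen[self.size - 1, self.paddle:self.paddle + 2] = 1.0
+         return screen
+ \end{verbatim}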
371
+ We trained the recurrent attention model to play the game of ``Catch'' using only the final reward as input.
372
+ The network had a $6$ by $6$ retina at three scales as its input, which means that the agent had to capture the ball in the $6$ by $6$ highest resolution region in order to know its precise position.
373
+ In addition to the two location actions, the attention model had three game actions (left, right, and do nothing) and the action network $f_a$ used a linear softmax to model a distribution over the game actions.
374
+ We used a core network of 256 LSTM units.
375
+
376
+ We performed random search to find suitable hyper-parameters and trained each agent for 20 million frames.
377
+ A video of the best agent, which catches the ball roughly $85\%$ of the time, can be downloaded from~\url{http://www.cs.toronto.edu/~vmnih/docs/attention.mov}.
378
+ The video shows that the recurrent attention model learned to play the game by tracking the ball near the bottom of the screen.
379
+ Since the agent was not in any way told to track the ball and was only rewarded for catching it, this result demonstrates the ability of the model to learn effective task-specific attention policies.
380
+
381
+ \vspace{-0.4cm}
382
+
383
+ \section{Discussion}
384
+ \vspace{-0.3cm}
385
+ This paper introduced a novel visual attention model that is formulated as a single recurrent neural network which takes a glimpse window as its input and uses the internal state of the network to select the next location to focus on as well as to generate control signals in a dynamic environment.
386
+ Although the model is not differentiable, the proposed unified architecture is trained end-to-end from pixel inputs to actions using a policy gradient method.
387
+ The model has several appealing properties. First, both the number of parameters and the amount of computation RAM performs can be controlled independently of the size of the input images.
388
+ Second, the model is able to ignore clutter present in an image by centering its retina on the relevant regions.
389
+ Our experiments show that RAM significantly outperforms a convolutional architecture with a comparable number of parameters on a cluttered object classification task.
390
+ Additionally, the flexibility of our approach allows for a number of interesting extensions.
391
+ For example, the network can be augmented with another action that allows it to terminate at any time point and make a final classification decision.
392
+ Our preliminary experiments show that this allows the network to learn to stop taking glimpses once it has enough information to make a confident classification.
393
+ The network can also be allowed to control the scale at which the retina samples the image, allowing it to fit objects of different sizes in the fixed-size retina.
394
+ In both cases, the extra actions can be simply added to the action network $f_a$ and trained using the policy gradient procedure we have described.
395
+ Given the encouraging results achieved by RAM, applying the model to large scale object recognition and video classification is a natural direction for future work.
396
+
397
+ \vspace{-0.3cm}
398
+
399
+ \section*{Supplementary Material}
400
+
401
+ \begin{figure}[h]
402
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975014.png}
403
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975015.png}
404
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975016.png}
405
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975017.png}
406
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975018.png}
407
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975019.png}
408
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975020.png}
409
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975021.png}
410
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975022.png}
411
+
412
+ \centering
413
+ \caption{Examples of the learned policy on $60\times60$ cluttered-translated MNIST task. Column 1: The input image from the MNIST test set with the glimpse path overlaid in green (correctly classified) or red (incorrectly classified). Columns 2-7: The six glimpses the network chooses. The center of each image shows the full resolution glimpse, the outer low resolution areas are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths clearly show that the learned policy avoids computation in empty or noisy parts of the input space and directly explores the area around the object of interest.}
414
+ \vspace{-0.3cm}
415
+ \label{fig:policy_supp1}
416
+ \end{figure}
417
+
418
+ \begin{figure}[h]
419
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975023.png}
420
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975024.png}
421
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975025.png}
422
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975026.png}
423
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975027.png}
424
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975028.png}
425
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975029.png}
426
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975030.png}
427
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975031.png}
428
+
429
+ \centering
430
+ \caption{Examples of the learned policy on $60\times60$ cluttered-translated MNIST task. Column 1: The input image from the MNIST test set with the glimpse path overlaid in green (correctly classified) or red (incorrectly classified). Columns 2-7: The six glimpses the network chooses. The center of each image shows the full resolution glimpse, the outer low resolution areas are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths clearly show that the learned policy avoids computation in empty or noisy parts of the input space and directly explores the area around the object of interest.}
431
+ \vspace{-0.3cm}
432
+ \label{fig:policy_supp2}
433
+ \end{figure}
434
+
435
+ \begin{figure}[h]
436
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975032.png}
437
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975033.png}
438
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975034.png}
439
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975035.png}
440
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975036.png}
441
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975037.png}
442
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975038.png}
443
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975039.png}
444
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975040.png}
445
+ \includegraphics[width=0.99\linewidth]{figures/vizi/vizi_14975041.png}
446
+ \centering
447
+ \caption{Examples of the learned policy on $60\times60$ cluttered-translated MNIST task. Column 1: The input image from the MNIST test set with the glimpse path overlaid in green (correctly classified) or red (incorrectly classified). Columns 2-7: The six glimpses the network chooses. The center of each image shows the full resolution glimpse, the outer low resolution areas are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths clearly show that the learned policy avoids computation in empty or noisy parts of the input space and directly explores the area around the object of interest.}
448
+ \vspace{-0.3cm}
449
+ \label{fig:policy_supp3}
450
+ \end{figure}
451
+
452
+ \clearpage
453
+ {\small
454
+ \bibliographystyle{plain}
455
+ \bibliography{attention,deepqnet}
456
+ }
457
+ \end{document}
papers/1409/1409.1259.tex ADDED
@@ -0,0 +1,938 @@
1
+
2
+
3
+
4
+
5
+ \documentclass[11pt]{article}
6
+
7
+ \usepackage{acl2014}
8
+ \usepackage{times}
9
+ \usepackage{url}
10
+ \usepackage{latexsym}
11
+ \usepackage{etoolbox}
12
+
13
+ \usepackage{graphicx}
14
+ \usepackage{url}
15
+ \usepackage{amsmath}
16
+ \usepackage{amssymb}
17
+ \usepackage{color}
18
+ \usepackage{caption}
19
+ \usepackage{multirow}
20
+ \usepackage{comment}
21
+ \usepackage[utf8]{inputenc}
22
+
23
+
24
+
25
+ \newcommand{\bmx}[0]{\begin{bmatrix}}
26
+ \newcommand{\emx}[0]{\end{bmatrix}}
27
+ \newcommand{\qt}[1]{\left<#1\right>}
28
+ \newcommand{\qexp}[1]{\left<#1\right>}
29
+ \newcommand{\qlay}[1]{\left[#1\right]}
30
+ \newcommand{\vect}[1]{\mathbf{#1}}
31
+ \newcommand{\vects}[1]{\boldsymbol{#1}}
32
+ \newcommand{\matr}[1]{\mathbf{#1}}
33
+ \newcommand{\var}[0]{\operatorname{Var}}
34
+ \newcommand{\cov}[0]{\operatorname{Cov}}
35
+ \newcommand{\diag}[0]{\operatorname{diag}}
36
+ \newcommand{\matrs}[1]{\boldsymbol{#1}}
37
+ \newcommand{\vone}[0]{\vect{1}}
38
+ \newcommand{\va}[0]{\vect{a}}
39
+ \newcommand{\vb}[0]{\vect{b}}
40
+ \newcommand{\vc}[0]{\vect{c}}
41
+ \newcommand{\vh}[0]{\vect{h}}
42
+ \newcommand{\vv}[0]{\vect{v}}
43
+ \newcommand{\vx}[0]{\vect{x}}
44
+ \newcommand{\vw}[0]{\vect{w}}
45
+ \newcommand{\vs}[0]{\vect{s}}
46
+ \newcommand{\vf}[0]{\vect{f}}
47
+ \newcommand{\ve}[0]{\vect{e}}
48
+ \newcommand{\vy}[0]{\vect{y}}
49
+ \newcommand{\vg}[0]{\vect{g}}
50
+ \newcommand{\vr}[0]{\vect{r}}
51
+ \newcommand{\vm}[0]{\vect{m}}
52
+ \newcommand{\vu}[0]{\vect{u}}
53
+ \newcommand{\vL}[0]{\vect{L}}
54
+ \newcommand{\vz}[0]{\vect{z}}
55
+ \newcommand{\vp}[0]{\vect{p}}
56
+ \newcommand{\mW}[0]{\matr{W}}
57
+ \newcommand{\mG}[0]{\matr{G}}
58
+ \newcommand{\mX}[0]{\matr{X}}
59
+ \newcommand{\mQ}[0]{\matr{Q}}
60
+ \newcommand{\mU}[0]{\matr{U}}
61
+ \newcommand{\mV}[0]{\matr{V}}
62
+ \newcommand{\mL}[0]{\matr{L}}
63
+ \newcommand{\mR}[0]{\matr{R}}
64
+ \newcommand{\mA}{\matr{A}}
65
+ \newcommand{\mD}{\matr{D}}
66
+ \newcommand{\mS}{\matr{S}}
67
+ \newcommand{\mI}{\matr{I}}
68
+ \newcommand{\td}[0]{\text{d}}
69
+ \newcommand{\vsig}[0]{\vects{\sigma}}
70
+ \newcommand{\valpha}[0]{\vects{\alpha}}
71
+ \newcommand{\vmu}[0]{\vects{\mu}}
72
+ \newcommand{\tf}[0]{\text{m}}
73
+ \newcommand{\tdf}[0]{\text{dm}}
74
+ \newcommand{\TT}[0]{{\vects{\theta}}}
75
+ \newcommand{\grad}[0]{\nabla}
76
+ \newcommand{\alert}[1]{\textcolor{red}{#1}}
77
+ \newcommand{\N}[0]{\mathcal{N}}
78
+ \newcommand{\LL}[0]{\mathcal{L}}
79
+ \newcommand{\HH}[0]{\mathcal{H}}
80
+ \newcommand{\RR}[0]{\mathbb{R}}
81
+ \newcommand{\Scal}[0]{\mathcal{S}}
82
+ \newcommand{\sigmoid}{\sigma}
83
+ \newcommand{\softmax}{\text{softmax}}
84
+ \newcommand{\E}[0]{\mathbb{E}}
85
+ \newcommand{\enabla}[0]{\ensuremath{\overset{\raisebox{-0.3ex}[0.5ex][0ex]{\ensuremath{\scriptscriptstyle e}}}{\nabla}}}
86
+ \newcommand{\enhnabla}[0]{\nabla_{\hspace{-0.5mm}e}\,}
87
+ \newcommand{\tred}[1]{\textcolor{red}{#1}}
88
+
89
+ \DeclareMathOperator*{\argmin}{arg\,min}
90
+ \DeclareMathOperator*{\argmax}{arg\,max}
91
+
92
+ \graphicspath{ {./figures/} }
93
+
94
+
95
+ \setlength\titlebox{5cm}
96
+
97
+
98
+
99
+ \title{On the Properties of Neural Machine Translation: Encoder--Decoder Approaches}
100
+
101
+ \author{
102
+ Kyunghyun Cho~~~~~~~~~Bart van Merri\"enboer\\
103
+ Universit\'e de Montr\'eal \\
104
+ \And
105
+ Dzmitry Bahdanau\thanks{\hspace{2mm}Research done while visiting Universit\'e de
106
+ Montr\'eal}\\
107
+ Jacobs University, Germany \\
108
+ \AND
109
+ Yoshua Bengio \\
110
+ Universit\'e de Montr\'eal, CIFAR Senior Fellow
111
+ }
112
+
113
+ \date{}
114
+
115
+ \begin{document}
116
+ \maketitle
117
+
118
+ \begin{abstract}
119
+ Neural machine translation is a relatively new approach to statistical
120
+ machine translation based purely on neural networks. The neural machine
121
+ translation models often consist of an encoder and a decoder. The encoder
122
+ extracts a fixed-length representation from a variable-length input
123
+ sentence, and the decoder generates a correct translation from this
124
+ representation. In this paper, we focus on analyzing the properties of the
125
+ neural machine translation using two models; RNN Encoder--Decoder and a
126
+ newly proposed gated recursive convolutional neural network. We show that
127
+ the neural machine translation performs relatively well on short sentences
128
+ without unknown words, but its performance degrades rapidly as the length of
129
+ the sentence and the number of unknown words increase. Furthermore, we find
130
+ that the proposed gated recursive convolutional network learns a grammatical
131
+ structure of a sentence automatically.
132
+ \end{abstract}
133
+
134
+ \section{Introduction}
135
+
136
+ A new approach for statistical machine translation based purely on neural
137
+ networks has recently been proposed~\cite{Kalchbrenner2012,Sutskever2014}. This
138
+ new approach, which we refer to as {\it neural machine translation}, is inspired
139
+ by the recent trend of deep representational learning. All the neural network
140
+ models used in \cite{Kalchbrenner2012,Sutskever2014,Cho2014} consist of an
141
+ encoder and a decoder. The encoder extracts a fixed-length vector representation
142
+ from a variable-length input sentence, and from this representation the decoder
143
+ generates a correct, variable-length target translation.
144
+
145
+ The emergence of the neural machine translation is highly significant, both
146
+ practically and theoretically. Neural machine translation models require only a
147
+ fraction of the memory needed by traditional statistical machine translation
148
+ (SMT) models. The models we trained for this paper require only 500MB of memory
149
+ in total. This stands in stark contrast with existing SMT systems, which often
150
+ require tens of gigabytes of memory. This makes the neural machine translation
151
+ appealing in practice. Furthermore, unlike conventional translation systems,
152
+ each and every component of the neural translation model is trained jointly to
153
+ maximize the translation performance.
154
+
155
+ As this approach is relatively new, there has not been much work on analyzing
156
+ the properties and behavior of these models. For instance: What are the
157
+ properties of sentences on which this approach performs better? How does the
158
+ choice of source/target vocabulary affect the performance? In which cases does
159
+ the neural machine translation fail?
160
+
161
+ It is crucial to understand the properties and behavior of this new neural
162
+ machine translation approach in order to determine future research directions.
163
+ Also, understanding the weaknesses and strengths of neural machine translation
164
+ might lead to better ways of integrating SMT and neural machine translation
165
+ systems.
166
+
167
+ In this paper, we analyze two neural machine translation models. One of them is
168
+ the RNN Encoder--Decoder that was proposed recently in \cite{Cho2014}. The other
169
+ model replaces the encoder in the RNN Encoder--Decoder model with a novel neural
170
+ network, which we call a {\it gated recursive convolutional neural network}
171
+ (grConv). We evaluate these two models on the task of translation from French to
172
+ English.
173
+
174
+ Our analysis shows that the performance of the neural machine translation model
175
+ degrades quickly as the length of a source sentence increases. Furthermore, we
176
+ find that the vocabulary size has a high impact on the translation performance.
177
+ Nonetheless, qualitatively we find that the both models are able to generate
178
+ correct translations most of the time. Furthermore, the newly proposed grConv
179
+ model is able to learn, without supervision, a kind of syntactic structure over
180
+ the source language.
181
+
182
+
183
+ \section{Neural Networks for Variable-Length Sequences}
184
+
185
+ In this section, we describe two types of neural networks that are able to
186
+ process variable-length sequences. These are the recurrent neural network
187
+ and the proposed gated recursive convolutional neural network.
188
+
189
+ \subsection{Recurrent Neural Network with Gated Hidden Neurons}
190
+ \label{sec:rnn_gated}
191
+
192
+ \begin{figure}[ht]
193
+ \centering
194
+ \begin{minipage}{0.44\columnwidth}
195
+ \centering
196
+ \includegraphics[width=0.8\columnwidth]{rnn.pdf}
197
+ \end{minipage}
198
+ \hfill
199
+ \begin{minipage}{0.44\columnwidth}
200
+ \centering
201
+ \includegraphics[width=0.8\columnwidth]{hidden_unit.pdf}
202
+ \end{minipage}
203
+ \medskip
204
+ \begin{minipage}{0.44\columnwidth}
205
+ \centering
206
+ (a)
207
+ \end{minipage}
208
+ \hfill
209
+ \begin{minipage}{0.44\columnwidth}
210
+ \centering
211
+ (b)
212
+ \end{minipage}
213
+ \caption{The graphical illustration of (a) the recurrent
214
+ neural network and (b) the hidden unit that adaptively forgets and remembers.}
215
+ \label{fig:rnn_unit}
216
+ \end{figure}
217
+
218
+ A recurrent neural network (RNN, Fig.~\ref{fig:rnn_unit}~(a)) works on a variable-length sequence $x=(\vx_1,
219
+ \vx_2, \cdots, \vx_T)$ by maintaining a hidden state $\vh$ over time. At each
220
+ timestep $t$, the hidden state $\vh^{(t)}$ is updated by
221
+ \begin{align*}
222
+ \vh^{(t)} = f\left( \vh^{(t-1)}, \vx_t \right),
223
+ \end{align*}
224
+ where $f$ is an activation function. Often $f$ is as simple as performing a
225
+ linear transformation on the input vectors, summing them, and applying an
226
+ element-wise logistic sigmoid function.
227
+
228
+ An RNN can be used effectively to learn a distribution over a variable-length
229
+ sequence by learning the distribution over the next input $p(\vx_{t+1} \mid
230
+ \vx_{t}, \cdots, \vx_{1})$. For instance, in the case of a sequence of
231
+ $1$-of-$K$ vectors, the distribution can be learned by an RNN which has as an
232
+ output
233
+ \begin{align*}
234
+ p(x_{t,j} = 1 \mid \vx_{t-1}, \dots, \vx_1) = \frac{\exp \left(
235
+ \vw_j \vh_{\qt{t}}\right) } {\sum_{j'=1}^{K} \exp \left( \vw_{j'}
236
+ \vh_{\qt{t}}\right) },
237
+ \end{align*}
238
+ for all possible symbols $j=1,\dots,K$, where $\vw_j$ are the rows of a
239
+ weight matrix $\mW$. This results in the joint distribution
240
+ \begin{align*}
241
+ p(x) = \prod_{t=1}^T p(x_t \mid x_{t-1}, \dots, x_1).
242
+ \end{align*}
243
+
244
+ Recently, in \cite{Cho2014} a new activation function for RNNs was proposed.
245
+ The new activation function augments the usual logistic sigmoid activation
246
+ function with two gating units called reset, $\vr$, and update, $\vz$, gates.
247
+ Each gate depends on the previous hidden state $\vh^{(t-1)}$, and the current
248
+ input $\vx_t$ controls the flow of information. This is reminiscent of long
249
+ short-term memory (LSTM) units~\cite{Hochreiter1997}. For details about this
250
+ unit, we refer the reader to \cite{Cho2014} and Fig.~\ref{fig:rnn_unit}~(b). For
251
+ the remainder of this paper, we always use this new activation function.
252
+
253
+
254
+ \subsection{Gated Recursive Convolutional Neural Network}
255
+ \label{sec:grconv}
256
+
257
+ \begin{figure*}[ht]
258
+ \centering
259
+ \begin{minipage}{0.35\textwidth}
260
+ \centering
261
+ \includegraphics[width=0.8\columnwidth]{rconv.pdf}
262
+ \end{minipage}
263
+ \hfill
264
+ \begin{minipage}{0.19\textwidth}
265
+ \centering
266
+ \includegraphics[width=0.8\columnwidth]{rconv_gated_unit.pdf}
267
+ \end{minipage}
268
+ \hfill
269
+ \begin{minipage}{0.17\textwidth}
270
+ \centering
271
+ \includegraphics[width=0.8\columnwidth]{rconv_gated_unit_ex1.pdf}
272
+ \end{minipage}
273
+ \hfill
274
+ \begin{minipage}{0.17\textwidth}
275
+ \centering
276
+ \includegraphics[width=0.8\columnwidth]{rconv_gated_unit_ex2.pdf}
277
+ \end{minipage}
278
+ \begin{minipage}{0.35\textwidth}
279
+ \centering
280
+ (a)
281
+ \end{minipage}
282
+ \hfill
283
+ \begin{minipage}{0.19\textwidth}
284
+ \centering
285
+ (b)
286
+ \end{minipage}
287
+ \hfill
288
+ \begin{minipage}{0.17\textwidth}
289
+ \centering
290
+ (c)
291
+ \end{minipage}
292
+ \hfill
293
+ \begin{minipage}{0.17\textwidth}
294
+ \centering
295
+ (d)
296
+ \end{minipage}
297
+ \caption{The graphical illustration of (a) the recursive convolutional
298
+ neural network and (b) the proposed gated unit for the
299
+ recursive convolutional neural network. (c--d) The example structures that
300
+ may be learned with the proposed gated unit.}
301
+ \label{fig:rconv_unit}
302
+ \end{figure*}
303
+
304
+ Besides RNNs, another natural approach to dealing with variable-length sequences
305
+ is to use a recursive convolutional neural network where the parameters at each
306
+ level are shared through the whole network (see Fig.~\ref{fig:rconv_unit}~(a)).
307
+ In this section, we introduce a binary convolutional neural network whose
308
+ weights are recursively applied to the input sequence until it outputs a single
309
+ fixed-length vector. In addition to a usual convolutional architecture, we
310
+ propose to use the previously mentioned gating mechanism, which allows the
311
+ recursive network to learn the structure of the source sentences on the fly.
312
+
313
+ Let $x=(\vx_1, \vx_2, \cdots, \vx_T)$ be an input sequence, where $\vx_t \in
314
+ \RR^d$. The proposed gated recursive convolutional neural network (grConv)
315
+ consists of four weight matrices $\mW^l$, $\mW^r$, $\mG^l$ and $\mG^r$. At each
316
+ recursion level $t \in \left[ 1, T-1\right]$, the activation of the $j$-th
317
+ hidden unit $h^{(t)}_j$ is computed by
318
+ \begin{align}
319
+ \label{eq:grconv_main}
320
+ h^{(t)}_j = \omega_c \tilde{h}^{(t)}_j + \omega_l h^{(t-1)}_{j-1} + \omega_r
321
+ h^{(t-1)}_j,
322
+ \end{align}
323
+ where $\omega_c$, $\omega_l$ and $\omega_r$ are the values of a gater that sum
324
+ to $1$. The hidden unit is initialized as
325
+ \begin{align*}
326
+ h^{(0)}_j = \mU \vx_j,
327
+ \end{align*}
328
+ where $\mU$ projects the input into a hidden space.
329
+
330
+ The new activation $\tilde{h}^{(t)}_j$ is computed as usual:
331
+ \begin{align*}
332
+ \tilde{h}^{(t)}_j = \phi\left( \mW^l h^{(t)}_{j-1} + \mW^r h^{(t)}_{j}
333
+ \right),
334
+ \end{align*}
335
+ where $\phi$ is an element-wise nonlinearity.
336
+
337
+ The gating coefficients $\omega$'s are computed by
338
+ \begin{align*}
339
+ \left[
340
+ \begin{array}{c}
341
+ \omega_c \\
342
+ \omega_l \\
343
+ \omega_r
344
+ \end{array}
345
+ \right] = \frac{1}{Z}
346
+ \exp\left( \mG^l h^{(t)}_{j-1} + \mG^r h^{(t)}_{j}
347
+ \right),
348
+ \end{align*}
349
+ where $\mG^l, \mG^r \in \RR^{3 \times d}$ and
350
+ \[
351
+ Z = \sum_{k=1}^3 \left[\exp\left( \mG^l h^{(t)}_{j-1} + \mG^r h^{(t)}_{j} \right)\right]_k.
352
+ \]
353
+
354
+ According to this activation, one can think of the activation of a single node
355
+ at recursion level $t$ as a choice between either a new activation computed from
356
+ both left and right children, the activation from the left child, or the
357
+ activation from the right child. This choice allows the overall structure of the
358
+ recursive convolution to change adaptively with respect to an input sample. See
359
+ Fig.~\ref{fig:rconv_unit}~(b) for an illustration.
360
+
361
+ In this respect, we may even consider the proposed grConv as doing a kind of
362
+ unsupervised parsing. If we consider the case where the gating unit makes a
363
+ hard decision, i.e., $\omega$ follows an 1-of-K coding, it is easy to see that
364
+ the network adapts to the input and forms a tree-like structure (See
365
+ Fig.~\ref{fig:rconv_unit}~(c--d)). However, we leave the further investigation
366
+ of the structure learned by this model for future research.
367
+
368
+ \section{Purely Neural Machine Translation}
369
+
370
+ \subsection{Encoder--Decoder Approach}
371
+
372
+ The task of translation can be understood from the perspective of machine learning
373
+ as learning the conditional distribution $p(f \mid e)$ of a target sentence
374
+ (translation) $f$ given a source sentence $e$. Once the conditional distribution
375
+ is learned by a model, one can use the model to directly sample a target
376
+ sentence given a source sentence, either by actual sampling or by using a
377
+ (approximate) search algorithm to find the maximum of the distribution.
378
+
379
+ A number of recent papers have proposed to use neural networks to directly learn
380
+ the conditional distribution from a bilingual, parallel
381
+ corpus~\cite{Kalchbrenner2012,Cho2014,Sutskever2014}. For instance, the authors
382
+ of \cite{Kalchbrenner2012} proposed an approach involving a convolutional
383
+ $n$-gram model to extract a fixed-length vector of a source sentence which is
384
+ decoded with an inverse convolutional $n$-gram model augmented with an RNN. In
385
+ \cite{Sutskever2014}, an RNN with LSTM units was used to encode a source
386
+ sentence and starting from the last hidden state, to decode a target sentence.
387
+ Similarly, the authors of \cite{Cho2014} proposed to use an RNN to encode and
388
+ decode a pair of source and target phrases.
389
+
390
+ \begin{figure}
391
+ \centering
392
+ \includegraphics[width=0.9\columnwidth]{encode_decode.pdf}
393
+ \caption{The encoder--decoder architecture}
394
+ \label{fig:encode_decode}
395
+ \end{figure}
396
+
397
+ At the core of all these recent works lies an encoder--decoder architecture (see
398
+ Fig.~\ref{fig:encode_decode}). The encoder processes a variable-length input
399
+ (source sentence) and builds a fixed-length vector representation (denoted as
400
+ $\vz$ in Fig.~\ref{fig:encode_decode}). Conditioned on the encoded
401
+ representation, the decoder generates a variable-length sequence (target
402
+ sentence).
403
+
404
+ Before \cite{Sutskever2014} this encoder--decoder approach was used mainly as
405
+ a part of the existing statistical machine translation (SMT) system. This
406
+ approach was used to re-rank the $n$-best list generated by the SMT system in
407
+ \cite{Kalchbrenner2012}, and the authors of \cite{Cho2014} used this approach
408
+ to provide an additional score for the existing phrase table.
409
+
410
+ In this paper, we concentrate on analyzing the direct translation performance,
411
+ as in \cite{Sutskever2014}, with two model configurations. In both models, we
412
+ use an RNN with the gated hidden unit~\cite{Cho2014}, as this is one of the only
413
+ options that does not require a non-trivial way to determine the target length.
414
+ The first model will use the same RNN with the gated hidden unit as an encoder,
415
+ as in \cite{Cho2014}, and the second one will use the proposed gated recursive
416
+ convolutional neural network (grConv). We aim to understand the inductive bias
417
+ of the encoder--decoder approach on the translation performance measured by
418
+ BLEU.
419
+
420
+ \section{Experiment Settings}
421
+
422
+ \subsection{Dataset}
423
+
424
+ We evaluate the encoder--decoder models on the task of English-to-French
425
+ translation. We use the bilingual, parallel corpus which is a set of 348M
426
+ selected by the method in \cite{Axelrod2011} from a combination of Europarl (61M
427
+ words), news commentary (5.5M), UN (421M) and two crawled corpora of 90M and
428
+ 780M words respectively.\footnote{All the data can be downloaded from
429
+ \url{http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/}.} We did not use
430
+ separate monolingual data. The performance of the neural machien translation
431
+ models was measured on the news-test2012, news-test2013 and news-test2014 sets
432
+ (~3000 lines each). When comparing to the SMT system, we use news-test2012 and
433
+ news-test2013 as our development set for tuning the SMT system, and
434
+ news-test2014 as our test set.
435
+
436
+ Among all the sentence pairs in the prepared parallel corpus, for reasons of
437
+ computational efficiency we only use the pairs where both English and French
438
+ sentences are at most 30 words long to train neural networks. Furthermore, we
439
+ use only the 30,000 most frequent words for both English and French. All the
440
+ other rare words are considered unknown and are mapped to a special token
441
+ ($\left[ \text{UNK} \right]$).
442
+
443
+
444
+ \subsection{Models}
445
+
446
+
447
+
448
+ We train two models: The RNN Encoder--Decoder~(RNNenc)\cite{Cho2014} and the
449
+ newly proposed gated recursive convolutional neural network (grConv). Note that
450
+ both models use an RNN with gated hidden units as a decoder (see
451
+ Sec.~\ref{sec:rnn_gated}).
452
+
453
+ We use minibatch stochastic gradient descent with AdaDelta~\cite{Zeiler-2012} to
454
+ train our two models. We initialize the square weight matrix (transition matrix)
455
+ as an orthogonal matrix with its spectral radius set to $1$ in the case of the
456
+ RNNenc and $0.4$ in the case of the grConv. $\tanh$ and a rectifier
457
+ ($\max(0,x)$) are used as the element-wise nonlinear functions for the RNNenc
458
+ and grConv respectively.
459
+
460
+ The grConv has 2000 hidden neurons, whereas the RNNenc has 1000 hidden
461
+ neurons. The word embeddings are 620-dimensional in both cases.\footnote{
462
+ In all cases, we train the whole network including the word embedding matrix.
463
+ }
464
+ Both models were trained for approximately 110 hours, which is equivalent to
465
+ 296,144 updates and 846,322 updates for the grConv and RNNenc,
466
+ respectively.\footnote{
467
+ The code will be available online, should the paper be accepted.
468
+ }
469
+
470
+ \begin{table*}[ht]
471
+ \centering
472
+ \begin{minipage}{0.48\textwidth}
473
+ \centering
474
+ \begin{tabular}{c | c | c c}
475
+ & Model & Development & Test \\
476
+ \hline
477
+ \hline
478
+ \multirow{5}{*}{\rotatebox[origin=c]{90}{All}}
479
+ & RNNenc & 13.15 & 13.92 \\
480
+ & grConv & 9.97 & 9.97 \\
481
+ & Moses & 30.64 & 33.30 \\
482
+ & Moses+RNNenc$^\star$ & 31.48 & 34.64 \\
483
+ & Moses+LSTM$^\circ$ & 32 & 35.65 \\
484
+ \hline
485
+ \multirow{3}{*}{\rotatebox[origin=c]{90}{No UNK}}
486
+ & RNNenc & 21.01 & 23.45 \\
487
+ & grConv & 17.19 & 18.22 \\
488
+ & Moses & 32.77 & 35.63 \\
489
+ \end{tabular}
490
+ \end{minipage}
491
+ \hfill
492
+ \begin{minipage}{0.48\textwidth}
493
+ \centering
494
+ \begin{tabular}{c | c | c c}
495
+ & Model & Development & Test \\
496
+ \hline
497
+ \hline
498
+ \multirow{3}{*}{\rotatebox[origin=c]{90}{All}}
499
+ & RNNenc & 19.12 & 20.99 \\
500
+ & grConv & 16.60 & 17.50 \\
501
+ & Moses & 28.92 & 32.00 \\
502
+ \hline
503
+ \multirow{3}{*}{\rotatebox[origin=c]{90}{No UNK}}
504
+ & RNNenc & 24.73 & 27.03 \\
505
+ & grConv & 21.74 & 22.94 \\
506
+ & Moses & 32.20 & 35.40 \\
507
+ \end{tabular}
508
+ \end{minipage}
509
+
510
+ \begin{minipage}{0.48\textwidth}
511
+ \centering
512
+ (a) All Lengths
513
+ \end{minipage}
514
+ \hfill
515
+ \begin{minipage}{0.48\textwidth}
516
+ \centering
517
+ (b) 10--20 Words
518
+ \end{minipage}
519
+ \caption{BLEU scores computed on the development and test sets. The top
520
+ three rows show the scores on all the sentences, and the bottom three
521
+ rows on the sentences having no unknown words. ($\star$) The result
522
+ reported in \cite{Cho2014} where the RNNenc was used to score phrase
523
+ pairs in the phrase table. ($\circ$) The result reported in
524
+ \cite{Sutskever2014} where an encoder--decoder with LSTM units was used
525
+ to re-rank the $n$-best list generated by Moses.}
526
+ \label{tab:bleu}
527
+ \end{table*}
528
+
529
+ \subsubsection{Translation using Beam-Search}
530
+
531
+ We use a basic form of beam-search to find a translation that maximizes the
532
+ conditional probability given by a specific model (in this case, either the
533
+ RNNenc or the grConv). At each time step of the decoder, we keep the $s$
534
+ translation candidates with the highest log-probability, where $s=10$ is the
535
+ beam-width. During the beam-search, we exclude any hypothesis that includes an
536
+ unknown word. For each end-of-sequence symbol that is selected among the
537
+ highest scoring candidates the beam-width is reduced by one, until the
538
+ beam-width reaches zero.
539
+
540
+ The beam-search to (approximately) find a sequence of maximum log-probability
541
+ under RNN was proposed and used successfully in \cite{Graves2012} and
542
+ \cite{Boulanger2013}. Recently, the authors of \cite{Sutskever2014} found this
543
+ approach to be effective in purely neural machine translation based on LSTM
544
+ units.
545
+
546
+ When we use the beam-search to find the $k$ best translations, we do not use a
547
+ usual log-probability but one normalized with respect to the length of the
548
+ translation. This prevents the RNN decoder from favoring shorter translations,
549
+ behavior which was observed earlier in, e.g.,~\cite{Graves2013}.
550
+
551
+ \begin{figure*}[ht]
552
+ \centering
553
+ \begin{minipage}{0.31\textwidth}
554
+ \centering
555
+ \includegraphics[width=1.\columnwidth]{len_norm.pdf}
556
+ \\
557
+ (a) RNNenc
558
+ \end{minipage}
559
+ \hfill
560
+ \begin{minipage}{0.31\textwidth}
561
+ \centering
562
+ \includegraphics[width=1.\columnwidth]{grconv_len_norm.pdf}
563
+ \\
564
+ (b) grConv
565
+ \end{minipage}
566
+ \hfill
567
+ \begin{minipage}{0.31\textwidth}
568
+ \centering
569
+ \includegraphics[width=1.\columnwidth]{max_unk_norm.pdf}
570
+ \\
571
+ (c) RNNenc
572
+ \end{minipage}
573
+ \caption{The BLEU scores achieved by (a) the RNNenc and (b) the grConv for
574
+ sentences of a given length. The plot is smoothed by taking a window of size
575
+ 10. (c) The BLEU scores achieved by the RNN model for sentences with less
576
+ than a given number of unknown words.}
577
+ \label{fig:bleu_length}
578
+ \end{figure*}
579
+
580
+ \begin{table*}[htp]
581
+ \begin{minipage}{0.99\textwidth}
582
+ \small
583
+ \centering
584
+ \begin{tabular}{c | p{13cm}}
585
+ Source & She explained her new position of foreign affairs and security policy
586
+ representative as a reply to a question: "Who is the European Union? Which phone
587
+ number should I call?"; i.e. as an important step to unification and better
588
+ clarity of Union's policy towards countries such as China or India. \\
589
+ \hline
590
+ Reference & Elle a expliqué le nouveau poste de la Haute représentante pour les
591
+ affaires étrangères et la politique de défense dans le cadre d'une réponse à la
592
+ question: "Qui est qui à l'Union européenne?" "A quel numéro de téléphone
593
+ dois-je appeler?", donc comme un pas important vers l'unicité et une plus grande
594
+ lisibilité de la politique de l'Union face aux états, comme est la Chine ou bien
595
+ l'Inde. \\
596
+ \hline
597
+ RNNEnc & Elle a décrit sa position en matière de politique étrangère et de
598
+ sécurité ainsi que la politique de l'Union européenne en matière de gouvernance
599
+ et de démocratie . \\
600
+ \hline
601
+ grConv & Elle a expliqué sa nouvelle politique étrangère et de sécurité en
602
+ réponse à un certain nombre de questions : "Qu'est-ce que l'Union européenne ? "
603
+ . \\
604
+ \hline
605
+ Moses & Elle a expliqué son nouveau poste des affaires étrangères et la
606
+ politique de sécurité représentant en réponse à une question: "Qui est l'Union
607
+ européenne? Quel numéro de téléphone dois-je appeler?"; c'est comme une étape
608
+ importante de l'unification et une meilleure lisibilité de la politique de
609
+ l'Union à des pays comme la Chine ou l'Inde . \\
610
+ \hline
611
+ \multicolumn{2}{c}{} \\
612
+ \hline
613
+ Source & The investigation should be complete by the end of the year when the
614
+ findings will be presented to Deutsche Bank's board of managing directors - with
615
+ recommendations for action. \\
616
+ \hline
617
+ Reference & L'examen doit être terminé d'ici la fin de l'année, ensuite les
618
+ résultats du conseil d'administration de la Deutsche Bank doivent être présentés
619
+ - avec recommandation, d' habitude. \\
620
+ \hline
621
+ RNNEnc & L'étude devrait être terminée à la fin de l' année, lorsque les
622
+ conclusions seront présentées au conseil d'administration de la Deutsche Bank,
623
+ conseil d'association avec des mesures. \\
624
+ \hline
625
+ grConv & L'enquête devrait être terminée à la fin de l'année où les conclusions
626
+ seront présentées par le conseil d'administration de la BCE à la direction des
627
+ recommandations. \\
628
+ \hline
629
+ Moses & L'enquête devrait être terminé d'ici la fin de l'année lorsque les
630
+ résultats seront présentés à la Deutsche Bank conseil des directeurs généraux -
631
+ avec des recommandations . \\
632
+ \hline
633
+ \multicolumn{2}{c}{} \\
634
+ \hline
635
+ Source & And there are thorny mechanical questions that must be resolved during
636
+ that time, like how to balance the state's mandate of "adequate access" to
637
+ licensed marijuana with its prohibitions on cannabis businesses within 1,000
638
+ feet of a school, park, playground or child care center. \\
639
+ \hline
640
+ Reference & Pendant ce temps, des questions pratiques restent en suspens:
641
+ comment équilibrer le mandat de l'état qui garantit un accès approprié à la
642
+ marijuana agréée et interdit l'installation de commerces de vente de cannabis
643
+ dans un rayon de 30 km autour d'une école, d'un parc, d'un terrain de jeu ou
644
+ d'une crèche. \\
645
+ \hline
646
+ RNNEnc & Il y a des questions préventives qui se posent quant à l'équilibre des
647
+ droits de l'enfant dans les limites d'une entreprise de collecte de sang. \\
648
+ \hline
649
+ grConv & De façon générale, il y a des raisons de sécurité pour que les
650
+ entreprises aient accès à des milliers de centres de pêche, d'eau ou de
651
+ recherche. \\
652
+ \hline
653
+ Moses & Et il y a des problèmes mécaniques complexes qui doivent être résolues
654
+ au cours de cette période, comme la manière d'équilibrer le mandat de "l'accès
655
+ adéquat" permis de marijuana avec l'interdiction du cannabis aux entreprises de
656
+ 1000 pieds d'une école de jeu ou de parc, le service de garde. \\
657
+ \end{tabular}
658
+ \\
659
+ {\normalsize (a) Long Sentences}
660
+ \end{minipage}
661
+
662
+ \vspace{3mm}
663
+ \begin{minipage}{0.99\textwidth}
664
+ \small
665
+ \centering
666
+ \begin{tabular}{c | p{13cm}}
667
+ Source & There is still no agreement as to which election rules to follow. \\
668
+ \hline
669
+ Reference & Toutefois il n'existe toujours pas d'accord selon quel règlement de
670
+ vote il faut procéder. \\
671
+ \hline
672
+ RNNEnc & Il n'y a pas encore d'accord sur les règles électorales. \\
673
+ \hline
674
+ grConv & Il n'y a pas encore d'accord sur la question des élections à suivre. \\
675
+ \hline
676
+ Moses & Il y a toujours pas d'accord sur l'élection des règles à suivre. \\
677
+ \hline
678
+ \multicolumn{2}{c}{} \\
679
+ \hline
680
+ Source & Many of these ideas may have been creative, but they didn't necessarily
681
+ work. \\
682
+ \hline
683
+ Reference & Beaucoup de ces idées étaient créatives mais elles n'ont pas
684
+ forcément fonctionné. \\
685
+ \hline
686
+ RNNEnc & Bon nombre de ces idées ont peut-être été créatrices, mais elles ne
687
+ s'appliquaient pas nécessairement. \\
688
+ \hline
689
+ grConv & Beaucoup de ces idées peuvent être créatives, mais elles n'ont pas
690
+ fonctionné. \\
691
+ \hline
692
+ Moses & Beaucoup de ces idées ont pu être créatif, mais ils n'ont pas
693
+ nécessairement. \\
694
+ \hline
695
+ \multicolumn{2}{c}{} \\
696
+ \hline
697
+ Source & There is a lot of consensus between the Left and the Right on this
698
+ subject. \\
699
+ \hline
700
+ Reference & C'est qu'il y a sur ce sujet un assez large consensus entre gauche
701
+ et droite. \\
702
+ \hline
703
+ RNNEnc & Il existe beaucoup de consensus entre la gauche et le droit à la
704
+ question. \\
705
+ \hline
706
+ grConv & Il y a un consensus entre la gauche et le droit sur cette question. \\
707
+ \hline
708
+ Moses & Il y a beaucoup de consensus entre la gauche et la droite sur ce sujet.
709
+ \\
710
+ \hline
711
+ \multicolumn{2}{c}{} \\
712
+ \hline
713
+ Source & According to them, one can find any weapon at a low price right now. \\
714
+ \hline
715
+ Reference & Selon eux, on peut trouver aujourd'hui à Moscou n'importe quelle
716
+ arme pour un prix raisonnable.\\
717
+ \hline
718
+ RNNEnc & Selon eux, on peut se trouver de l'arme à un prix trop bas.\\
719
+ \hline
720
+ grConv & En tout cas, ils peuvent trouver une arme à un prix très bas à la
721
+ fois.\\
722
+ \hline
723
+ Moses & Selon eux, on trouve une arme à bas prix pour l'instant.
724
+ \end{tabular}
725
+ \\
726
+ {\normalsize (b) Short Sentences}
727
+ \end{minipage}
728
+ \caption{The sample translations along with the source sentences and the reference translations.}
729
+ \label{tbl:translations}
730
+ \end{table*}
731
+
732
+ \section{Results and Analysis}
733
+
734
+ \subsection{Quantitative Analysis}
735
+
736
+ In this paper, we are interested in the properties of the neural machine
737
+ translation models. Specifically, we examine the translation quality with respect
738
+ to the length of source and/or target sentences and with respect to the number of
739
+ words in each source/target sentence that are unknown to the model.
740
+
741
+ First, we look at how the BLEU score, reflecting the translation performance,
742
+ changes with respect to the length of the sentences (see
743
+ Fig.~\ref{fig:bleu_length} (a)--(b)). Clearly, both models perform relatively
744
+ well on short sentences, but suffer significantly as the length of the
745
+ sentences increases.
746
+
747
+ We observe a similar trend with the number of unknown words, in
748
+ Fig.~\ref{fig:bleu_length} (c). As expected, the performance degrades rapidly as
749
+ the number of unknown words increases. This suggests that it will be an
750
+ important challenge to increase the size of vocabularies used by the neural
751
+ machine translation system in the future. Although we only present the result
752
+ with the RNNenc, we observed similar behavior for the grConv as well.
753
+
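+ As a rough illustration (not the evaluation pipeline used in this work), the
+ per-length or per-unknown-word-count breakdown of Fig.~\ref{fig:bleu_length} can
+ be reproduced by bucketing the test sentences and scoring each bucket separately.
+ The Python sketch below assumes tokenized (source, reference, hypothesis)
+ triples and uses NLTK's \texttt{corpus\_bleu}; the bucket width is a free choice.
+ \begin{verbatim}
+ from collections import defaultdict
+ from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
+ 
+ def bleu_by_source_length(triples, bucket_width=10):
+     # triples: iterable of (source_tokens, reference_tokens, hypothesis_tokens)
+     buckets = defaultdict(lambda: ([], []))
+     for src, ref, hyp in triples:
+         start = (len(src) // bucket_width) * bucket_width
+         refs, hyps = buckets[start]
+         refs.append([ref])   # corpus_bleu expects a list of references per segment
+         hyps.append(hyp)
+     smooth = SmoothingFunction().method1
+     return {start: corpus_bleu(refs, hyps, smoothing_function=smooth)
+             for start, (refs, hyps) in sorted(buckets.items())}
+ \end{verbatim}
+ 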
754
+ In Table~\ref{tab:bleu}~(a), we present the translation performances obtained
755
+ using the two models along with the baseline phrase-based SMT system.\footnote{
756
+ We used Moses as a baseline, trained with additional monolingual data for a
757
+ 4-gram language model.
758
+ } Clearly, the phrase-based SMT system still shows superior performance over
759
+ the proposed purely neural machine translation system, but we can see that under
760
+ certain conditions (no unknown words in both source and reference sentences),
761
+ the difference diminishes quite significantly. Furthermore, if we consider only
762
+ short sentences (10--20 words per sentence), the difference further decreases
763
+ (see Table~\ref{tab:bleu}~(b)).
764
+
765
+ Moreover, it is possible to use the neural machine translation models
766
+ together with the existing phrase-based system, which was recently found in
767
+ \cite{Cho2014,Sutskever2014} to improve the overall translation performance
768
+ (see Table~\ref{tab:bleu}~(a)).
769
+
770
+ This analysis suggests that the current neural translation approach has
771
+ its weakness in handling long sentences. The most obvious explanatory
772
+ hypothesis is that the fixed-length vector representation does not have enough
773
+ capacity to encode a long sentence with complicated structure and meaning. In
774
+ order to encode a variable-length sequence, a neural network may ``sacrifice''
775
+ some of the important topics in the input sentence in order to remember others.
776
+
777
+ This is in stark contrast to the conventional phrase-based machine translation
778
+ system~\cite{Koehn2003}. As we can see from Fig.~\ref{fig:moses_bleu_length},
779
+ the conventional system trained on the same dataset (with additional monolingual
780
+ data for the language model) tends to get a higher BLEU score on longer
781
+ sentences.
782
+
783
+ In fact, if we limit the lengths of both the source sentence and the reference
784
+ translation to be between 10 and 20 words and use only the sentences with no
785
+ unknown words, the BLEU scores on the test set are 27.81 and 33.08 for the
786
+ RNNenc and Moses, respectively.
787
+
788
+ Note that we observed a similar trend even when we used sentences of up to 50
789
+ words to train these models.
790
+
791
+ \subsection{Qualitative Analysis}
792
+
793
+ Although the BLEU score is used as a de-facto standard metric for evaluating the
794
+ performance of a machine translation system, it is not a perfect metric~(see,
795
+ e.g., \cite{Song13,Liu2011}). Hence, here we present some of the actual
796
+ translations generated from the two models, RNNenc and grConv.
797
+
798
+ In Table~\ref{tbl:translations} (a)--(b), we show the translations of some
799
+ randomly selected sentences from the development and test sets. We chose the
800
+ ones that have no unknown words. (a) lists long sentences (longer than 30
801
+ words), and (b) short sentences (shorter than 10 words). We can see that,
802
+ despite the difference in the BLEU scores, all three models (RNNenc, grConv and
803
+ Moses) do a decent job at translating, especially short sentences. When the
804
+ source sentences are long, however, we notice the performance degradation of the
805
+ neural machine translation models.
806
+
807
+ \begin{figure}[ht]
808
+ \centering
809
+ \includegraphics[width=0.9\columnwidth]{moses_len_norm.pdf} \caption{The BLEU
810
+ scores achieved by an SMT system for sentences of a given length. The plot is
811
+ smoothed by taking a window of size 10.
812
+ We use the solid, dotted and dashed
813
+ lines to show the effect of different lengths of source, reference or both
814
+ of them, respectively.}
815
+ \label{fig:moses_bleu_length}
816
+ \end{figure}
817
+
818
+ \begin{figure*}[ht]
819
+ \begin{minipage}{0.5\textwidth}
820
+ \centering
821
+ \includegraphics[width=\textwidth,clip=true,trim=90 50 90 50]{obama.pdf}
822
+ \end{minipage}
823
+ \hfill
824
+ \begin{minipage}{0.48\textwidth}
825
+ \centering
826
+ \begin{tabular}{l}
827
+ Translations \\
828
+ \hline
829
+ Obama est le Président des États-Unis . (2.06)\\
830
+ Obama est le président des États-Unis . (2.09)\\
831
+ Obama est le président des Etats-Unis . (2.61)\\
832
+ Obama est le Président des Etats-Unis . (3.33)\\
833
+ Barack Obama est le président des États-Unis . (4.41)\\
834
+ Barack Obama est le Président des États-Unis . (4.48)\\
835
+ Barack Obama est le président des Etats-Unis . (4.54)\\
836
+ L'Obama est le Président des États-Unis . (4.59)\\
837
+ L'Obama est le président des États-Unis . (4.67)\\
838
+ Obama est président du Congrès des États-Unis .(5.09) \\
839
+ \end{tabular}
840
+ \end{minipage}
841
+ \begin{minipage}{0.4\textwidth}
842
+ \centering
843
+ (a)
844
+ \end{minipage}
845
+ \hfill
846
+ \begin{minipage}{0.58\textwidth}
847
+ \centering
848
+ (b)
849
+ \end{minipage}
850
+ \caption{(a) The visualization of the grConv structure when the input is
851
+ {\it ``Obama is the President of the United States.''}. Only edges with
852
+ gating coefficient $\omega$ higher than $0.1$ are shown. (b) The top-$10$ translations
853
+ generated by the grConv. The numbers in parentheses are the negative
854
+ log-probabilities of the translations.}
855
+ \label{fig:obama}
856
+ \end{figure*}
857
+
858
+ Additionally, we present here what type of structure the proposed gated
859
+ recursive convolutional network learns to represent. With a sample sentence
860
+ {\it ``Obama is the President of the United States''}, we present the parsing
861
+ structure learned by the grConv encoder and the generated translations, in
862
+ Fig.~\ref{fig:obama}. The figure suggests that the grConv extracts the vector
863
+ representation of the sentence by first merging {\it ``of the United States''}
864
+ together with {\it ``is the President of''} and finally combining this with {\it
865
+ ``Obama is''} and {\it ``.''}, which is well correlated with our intuition.
866
+
867
+ Despite the lower performance the grConv showed compared
868
+ to the RNN Encoder--Decoder,\footnote{
869
+ However, it should be noted that the number of gradient updates used to
870
+ train the grConv was a third of that used to train the RNNenc. Longer
871
+ training may change the result, but for a fair comparison we chose to
872
+ compare models which were trained for an equal amount of time. Neither model
873
+ was trained to convergence.
874
+ }
875
+ we find it interesting that the grConv learns a
876
+ grammatical structure automatically, and we believe further investigation is
877
+ needed.
878
+
879
+ \section{Conclusion and Discussion}
880
+
881
+ In this paper, we have investigated the properties of a recently introduced family
882
+ of machine translation systems based purely on neural networks. We focused on
883
+ evaluating an encoder--decoder approach, proposed recently in
884
+ \cite{Kalchbrenner2012,Cho2014,Sutskever2014}, on the task of
885
+ sentence-to-sentence translation. Among many possible encoder--decoder models we
886
+ specifically chose two models that differ in the choice of the encoder: (1) an RNN
887
+ with gated hidden units and (2) the newly proposed gated recursive convolutional
888
+ neural network.
889
+
890
+ After training those two models on pairs of English and French sentences, we
891
+ analyzed their performance using BLEU scores with respect to the lengths of
892
+ sentences and the existence of unknown/rare words in sentences. Our analysis
893
+ revealed that the performance of the neural machine translation models suffers
894
+ significantly from the length of sentences. However, qualitatively, we found
895
+ that both models are able to generate correct translations very well.
896
+
897
+ These analyses suggest a number of future research directions in machine
898
+ translation purely based on neural networks.
899
+
900
+ Firstly, it is important to find a way to scale up training a neural network
901
+ both in terms of computation and memory so that much larger vocabularies for
902
+ both source and target languages can be used. In particular, when it comes to
903
+ languages with rich morphology, we may be required to come up with a radically
904
+ different approach to dealing with words.
905
+
906
+ Secondly, more research is needed to prevent the neural machine translation
907
+ system from underperforming with long sentences. Lastly, we need to explore
908
+ different neural architectures, especially for the decoder. Despite the radical
909
+ difference in architecture between the RNN and the grConv, which were used as
910
+ encoders, both models suffer from {\it the curse of sentence length}. This
911
+ suggests that it may be due to the lack of representational power in the
912
+ decoder. Further investigation and research are required.
913
+
914
+ In addition to the property of a general neural machine translation system, we
915
+ observed one interesting property of the proposed gated recursive convolutional
916
+ neural network (grConv). The grConv was found to mimic the grammatical structure
917
+ of an input sentence without any supervision on the syntactic structure of the language.
918
+ We believe this property makes it appropriate for natural language processing
919
+ applications other than machine translation.
920
+
921
+
922
+
923
+
924
+
925
+
926
+
927
+ \section*{Acknowledgments}
928
+
929
+ The authors would like to acknowledge the support of the following agencies for
930
+ research funding and computing support: NSERC, Calcul Qu\'{e}bec, Compute Canada,
931
+ the Canada Research Chairs and CIFAR.
932
+
933
+
934
+ \bibliographystyle{acl}
935
+ \bibliography{strings,strings-shorter,ml,aigaion,myref}
936
+
937
+
938
+ \end{document}
papers/1409/1409.4667.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1411/1411.4555.tex ADDED
@@ -0,0 +1,816 @@
 
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{cvpr}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{multirow}
10
+
11
+ \interfootnotelinepenalty=10000
12
+
13
+
14
+
15
+
16
+
17
+ \cvprfinalcopy
18
+
19
+ \def\cvprPaperID{1642} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
20
+
21
+ \ifcvprfinal\pagestyle{empty}\fi
22
+ \begin{document}
23
+
24
+ \title{Show and Tell: A Neural Image Caption Generator}
25
+
26
+ \author{Oriol Vinyals\\
27
+ Google\\
28
+ {\tt\small vinyals@google.com}
29
+ \and
30
+ Alexander Toshev\\
31
+ Google\\
32
+ {\tt\small toshev@google.com}
33
+ \and
34
+ Samy Bengio\\
35
+ Google\\
36
+ {\tt\small bengio@google.com}
37
+ \and
38
+ Dumitru Erhan\\
39
+ Google\\
40
+ {\tt\small dumitru@google.com}
41
+ }
42
+
43
+ \maketitle
44
+
45
+
46
+ \begin{abstract}
47
+ Automatically describing the content of an image is a fundamental
48
+ problem in artificial intelligence that connects
49
+ computer vision and natural language processing.
50
+ In this paper, we present a generative model based on a deep recurrent
51
+ architecture that combines recent advances in computer vision and
52
+ machine translation and that can be used to generate natural sentences
53
+ describing an image. The model is trained
54
+ to maximize the likelihood of the target description
55
+ sentence given the training image. Experiments on several datasets show
56
+ the accuracy of the model and the fluency of the language it learns
57
+ solely from image descriptions. Our model is often quite accurate,
58
+ which we verify both qualitatively and quantitatively.
59
+ For instance, while the current state-of-the-art BLEU-1 score (the higher the
60
+ better) on the Pascal dataset is 25, our approach yields 59, to be compared to
61
+ human performance around 69. We also show BLEU-1 score improvements
62
+ on Flickr30k, from 56 to 66, and on SBU, from 19 to 28.
63
+ Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is
64
+ the current state-of-the-art.
65
+ \end{abstract}
66
+
67
+ \section{Introduction}
68
+ \label{sec:intro}
69
+
70
+ Being able to automatically describe the content of an image using properly
71
+ formed English sentences is a very challenging task, but it could have great
72
+ impact, for instance by helping visually impaired people better understand the
73
+ content of images on the web. This task is significantly harder, for example, than the
74
+ well-studied image classification or object recognition tasks,
75
+ which have been a main focus in the computer vision community~\cite{ILSVRCarxiv14}.
76
+ Indeed, a description must capture not only the objects contained in an image, but
77
+ it also must express how these objects relate to each other as
78
+ well as their attributes and the activities they are involved in. Moreover, the above
79
+ semantic knowledge has to be expressed in a natural language like English, which
80
+ means that a language model is needed in addition to visual understanding.
81
+
82
+ Most previous attempts have proposed
83
+ to stitch together existing solutions of the above sub-problems, in order to go from
84
+ an image to its description~\cite{farhadi2010every,kulkarni2011baby}. In contrast, we would like to
85
+ present in this work a single joint model that
86
+ takes an image $I$ as input, and is trained to maximize the likelihood
87
+ $p(S|I)$ of producing a target sequence of words $S = \{S_1, S_2, \ldots\}$
88
+ that describes the image adequately, where each word $S_t$ comes from a given
89
+ dictionary.
90
+
91
+ \begin{figure}
92
+ \begin{center}
93
+ \includegraphics[width=0.5\textwidth]{overview_fig_2.pdf}
94
+ \end{center}
95
+ \caption{\label{fig:overview} NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language generating RNN. It generates complete sentences in natural language from an input image, as shown on the example above.}
96
+ \end{figure}
97
+
98
+
99
+ The main inspiration of our work comes from recent advances in machine translation, where the task is to transform a sentence $S$ written
100
+ in a source language, into its translation $T$ in the target language, by
101
+ maximizing $p(T|S)$. For many
102
+ years, machine translation was also achieved by a series of separate tasks
103
+ (translating words individually, aligning words, reordering, etc), but recent
104
+ work has shown that translation can be done in a much simpler way using
105
+ Recurrent Neural Networks
106
+ (RNNs)~\cite{cho2014learning,bahdanau2014neural,sutskever2014sequence}
107
+ and still reach state-of-the-art performance.
108
+ An ``encoder'' RNN {\em reads} the source sentence and
109
+ transforms it into a rich fixed-length vector representation, which in turn is used as the
110
+ initial hidden state of a ``decoder'' RNN that {\em generates}
111
+ the target sentence.
112
+
113
+ Here, we propose to follow this elegant recipe,
114
+ replacing the encoder RNN by a deep convolutional neural network (CNN). Over the last few years it has been convincingly
115
+ shown that CNNs can produce a rich representation of the input image by embedding it
116
+ to a fixed-length vector, such that this representation can be used for a variety of
117
+ vision tasks~\cite{sermanet2013overfeat}. Hence, it is natural to use a CNN as an
118
+ image ``encoder'', by first pre-training it for an image classification task and
119
+ using the last hidden layer as an input to the RNN decoder that generates sentences (see Fig.~\ref{fig:overview}).
120
+ We call this model the Neural Image Caption, or NIC.
121
+
122
+ Our contributions are as follows. First, we present an end-to-end system for the
123
+ problem. It is a neural net which is fully trainable using stochastic
124
+ gradient descent.
125
+ Second, our model combines state-of-art sub-networks for vision and language models. These
126
+ can be pre-trained on larger corpora and thus can take advantage of additional data. Finally,
127
+ it yields significantly better performance compared to state-of-the-art approaches;
128
+ for instance, on the Pascal dataset, NIC yielded a BLEU score of 59,
129
+ to be compared to the current state-of-the-art of 25, while human performance
130
+ reaches 69. On Flickr30k, we improve from 56 to 66, and on SBU,
131
+ from 19 to 28.
132
+
133
+ \section{Related Work}
134
+ \label{sec:related}
135
+
136
+ The problem of generating natural language descriptions from visual
137
+ data has long been studied in computer vision, but mainly for
138
+ video~\cite{gerber1996knowledge,yao2010i2t}. This has led to complex
139
+ systems composed of visual primitive recognizers combined with a structured
140
+ formal language, e.g.~And-Or Graphs or logic systems, which are
141
+ further converted to natural language via rule-based systems. Such
142
+ systems are heavily hand-designed, relatively brittle and have been
143
+ demonstrated only on limited domains, e.g. traffic scenes or sports.
144
+
145
+ The problem of still image description with natural text has gained
146
+ interest more recently. Leveraging recent advances in recognition of
147
+ objects, their attributes, and locations allows us to drive natural language
148
+ generation systems, though these are limited in their
149
+ expressivity. Farhadi et al.~\cite{farhadi2010every} use detections to
150
+ infer a triplet of scene elements which is converted to text using
151
+ templates. Similarly, Li et al.~\cite{li2011composing} start off with
152
+ detections and piece together a final description using phrases containing
153
+ detected objects and relationships. A more complex graph of detections
154
+ beyond triplets is used by Kulkarni et
155
+ al.~\cite{kulkarni2011baby}, but with template-based text generation.
156
+ More powerful language models based on language parsing
157
+ have been used as well
158
+ \cite{mitchell2012midge,aker2010generating,kuznetsova2012collective,kuznetsova2014treetalk,elliott2013image}. The
159
+ above approaches have been able to describe images ``in the wild",
160
+ but they are heavily hand-designed and rigid when it comes to text
161
+ generation.
162
+
163
+ A large body of work has addressed the problem of ranking descriptions
164
+ for a given image
165
+ \cite{hodosh2013framing,gong2014improving,ordonez2011im2text}. Such
166
+ approaches are based on the idea of co-embedding of images and text in
167
+ the same vector space. For an image query, descriptions are retrieved
168
+ which lie close to the image in the embedding space. Most closely related to our work, neural networks are used to co-embed
169
+ images and sentences together \cite{socher2014grounded} or even image crops and subsentences \cite{karpathy2014deep} but do not attempt to generate novel
170
+ descriptions. In general, the above approaches cannot describe previously unseen
171
+ compositions of objects, even though the individual objects might have been
172
+ observed in the training data. Moreover, they avoid addressing the
173
+ problem of evaluating how good a generated description is.
174
+
175
+ In this work we combine deep
176
+ convolutional nets for image classification \cite{batchnorm} with
177
+ recurrent networks for sequence modeling
178
+ \cite{hochreiter1997long}, to create a single network
179
+ that generates descriptions of images. The RNN is trained in the context of
180
+ this single ``end-to-end'' network. The model is inspired
181
+ by recent successes of sequence generation in machine translation
182
+ \cite{cho2014learning,bahdanau2014neural,sutskever2014sequence}, with
183
+ the difference that instead of starting with a sentence, we provide an image
184
+ processed by a convolutional net. The closest works are by Kiros et al.~\cite{kiros2013multimodal} who
185
+ use a neural net, but a feedforward one, to predict the next word given the image
186
+ and previous words. A recent work by Mao et al.~\cite{baidu2014} uses a recurrent
187
+ NN for the same prediction task. This is very similar to the present proposal but
188
+ there are a number of important differences: we use a more powerful RNN model,
189
+ and provide the visual input to the RNN model directly, which makes it possible
190
+ for the RNN to keep track of the objects that have been explained by the text. As
191
+ a result of these seemingly insignificant differences, our system achieves
192
+ substantially better results on the established benchmarks. Lastly, Kiros et al.~\cite{kiros2014}
193
+ propose to construct a joint multimodal embedding space by using a powerful
194
+ computer vision model and an LSTM that encodes text. In contrast to our approach,
195
+ they use two separate pathways (one for images, one for text) to define a joint embedding,
196
+ and, even though they can generate text, their approach is highly tuned for ranking.
197
+
198
+
199
+ \section{Model}
200
+ \label{sec:model}
201
+
202
+ In this paper, we propose a neural and probabilistic framework to generate
203
+ descriptions from images. Recent advances in statistical machine
204
+ translation have shown that, given a powerful sequence model, it is
205
+ possible to achieve state-of-the-art results by directly maximizing
206
+ the probability of the correct translation given an input sentence in
207
+ an ``end-to-end'' fashion -- both for training and inference. These
208
+ models make use of a recurrent neural network
209
+ which encodes the variable length input into a fixed dimensional
210
+ vector, and uses this representation to ``decode'' it to the desired
211
+ output sentence. Thus, it is natural to use the same approach where,
212
+ given an image (instead of an input sentence in the source language),
213
+ one applies the same principle of ``translating'' it into its
214
+ description.
215
+
216
+ Thus, we propose to directly maximize the probability of the correct
217
+ description given the image by using the following formulation:
218
+
219
+ \begin{equation}
220
+ \theta^\star = \arg\max_\theta \sum_{(I,S)} \log p(S | I ; \theta)
221
+ \label{eqn:obj}
222
+ \end{equation}
223
+ where $\theta$ are the parameters of our model, $I$ is an image, and
224
+ $S$ its correct transcription. Since $S$ represents any sentence, its
225
+ length is unbounded. Thus, it is common to apply the chain rule to
226
+ model the joint probability over $S_0,\ldots,S_N$, where $N$ is the
227
+ length of this particular example as
228
+
229
+ \begin{equation}
230
+ \log p(S | I) = \sum_{t=0}^N \log p(S_t | I, S_0, \ldots, S_{t-1})
231
+ \label{eqn:chain}
232
+ \end{equation}
233
+ where we dropped the dependency on $\theta$ for convenience.
234
+ At training
235
+ time, $(S,I)$ is a training example pair, and we optimize the sum of
236
+ the log probabilities as described in~(\ref{eqn:chain}) over the
237
+ whole training set using stochastic gradient descent (further training
238
+ details are given in Section \ref{sec:exps}).
239
+
240
+ It is natural to model $p(S_t | I, S_0, \ldots, S_{t-1})$ with a
241
+ Recurrent Neural Network (RNN), where the variable number of
242
+ words we condition upon up to $t-1$ is expressed by a fixed length
243
+ hidden state or memory $h_t$. This memory is updated after seeing a
244
+ new input $x_t$ by using a non-linear function $f$:
245
+ \begin{equation}\label{eq:rnn}
246
+ h_{t+1} = f(h_{t}, x_t)\;.
247
+ \end{equation}
248
+ To make the above RNN more concrete two crucial design choices are to be made: what is
249
+ the exact form of $f$ and how are the images and words fed as inputs $x_t$. For
250
+ $f$ we use a Long Short-Term Memory (LSTM) net, which has shown state-of-the-art
251
+ performance on sequence tasks such as translation. This model is outlined in the
252
+ next section.
253
+
254
+ For the representation of images, we use a Convolutional Neural Network
255
+ (CNN). They have been widely used and studied for image tasks, and are
256
+ currently state-of-the-art for object recognition and detection. Our particular
257
+ choice of CNN uses a novel approach to batch normalization and yields the
258
+ current best performance on the ILSVRC 2014 classification
259
+ competition~\cite{batchnorm}. Furthermore, they have been shown to
260
+ generalize to other tasks such as scene classification by means of
261
+ transfer learning~\cite{decaf2014}. The words are represented with an embedding
262
+ model.
263
+
264
+ \subsection{LSTM-based Sentence Generator}
265
+ \label{sec:lstm}
266
+
267
+ The choice of $f$ in (\ref{eq:rnn}) is governed by its
268
+ ability to deal with vanishing and exploding gradients~\cite{hochreiter1997long},
269
+ the most common
270
+ challenge in designing and training RNNs. To address this challenge, a particular form
271
+ of recurrent nets, called LSTM, was introduced \cite{hochreiter1997long}
272
+ and applied with great success to translation \cite{cho2014learning,sutskever2014sequence} and sequence generation \cite{graves2013generating}.
273
+
274
+ \begin{figure}
275
+ \begin{center}
276
+ \includegraphics[width=0.85\columnwidth]{detailed_lstm_figure.pdf}
277
+ \end{center}
278
+ \caption{\label{fig:lstm} LSTM: the memory block contains a cell $c$ which is controlled by three gates. In blue we show the recurrent connections -- the output $m$ at time $t-1$ is fed back to the memory at time $t$ via the three gates; the cell value is fed back via the forget gate; the predicted word at time $t-1$ is fed back in addition to the memory output $m$ at time $t$ into the Softmax for word prediction.}
279
+ \end{figure}
280
+
281
+ The core of the LSTM model is a memory cell $c$ encoding
282
+ knowledge at every time step of what inputs have been observed up to this step (see Figure~\ref{fig:lstm}). The behavior of the cell
283
+ is controlled by ``gates" -- layers which are applied multiplicatively and thus can
284
+ either keep a value from the gated layer if the gate is $1$ or zero this value if the gate is $0$.
285
+ In particular, three gates are being used which control whether to forget the current cell value (forget gate $f$),
286
+ if it should read its input (input gate $i$) and whether to output the new cell value (output gate $o$).
287
+ The definition of the gates and cell update and output are as follows:
288
+ \begin{eqnarray}
289
+ i_t &= &\sigma(W_{ix} x_t+ W_{im} m_{t-1}) \\
290
+ f_t &= & \sigma(W_{fx} x_t+ W_{fm} m_{t-1}) \\
291
+ o_t &= & \sigma(W_{ox} x_t + W_{om} m_{t-1}) \\
292
+ c_t &= & f_t \odot c_{t-1} + i_t \odot h(W_{cx} x_t + W_{cm} m_{t-1}) \\
293
+ m_t &= & o_t \odot c_t \\
294
+ p_{t+1} &=& \textrm{Softmax}(m_t)
295
+ \end{eqnarray}
296
+ where $\odot$ represents the product with a gate value, and the various $W$
297
+ matrices are trained parameters. Such multiplicative gates make it
298
+ possible to train the LSTM robustly as these gates deal well with exploding and vanishing gradients \cite{hochreiter1997long}.
299
+ The nonlinearities are sigmoid $\sigma(\cdot)$ and hyperbolic tangent $h(\cdot)$.
300
+ The last equation $m_t$ is what is used
301
+ to feed to a Softmax, which will produce a probability distribution $p_t$ over all words.
302
+
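+ As a rough illustration only, a minimal NumPy sketch of a single step
+ implementing the gate, cell-update, and output equations above follows; the
+ weight matrices in \texttt{W} (including a vocabulary projection
+ \texttt{W['vocab']} assumed before the Softmax) are placeholders, and biases are
+ omitted as in the equations.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+ 
+ def softmax(x):
+     e = np.exp(x - x.max())
+     return e / e.sum()
+ 
+ def lstm_step(x_t, m_prev, c_prev, W):
+     i = sigmoid(W['ix'] @ x_t + W['im'] @ m_prev)   # input gate
+     f = sigmoid(W['fx'] @ x_t + W['fm'] @ m_prev)   # forget gate
+     o = sigmoid(W['ox'] @ x_t + W['om'] @ m_prev)   # output gate
+     c = f * c_prev + i * np.tanh(W['cx'] @ x_t + W['cm'] @ m_prev)
+     m = o * c
+     p_next = softmax(W['vocab'] @ m)                # distribution over the vocabulary
+     return m, c, p_next
+ \end{verbatim}
+ 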
303
+ \begin{figure}
304
+ \begin{center}
305
+ \includegraphics[width=0.75\columnwidth]{unrolled_lstm.pdf}
306
+ \end{center}
307
+ \caption{\label{fig:unrolled_lstm} LSTM model combined with a CNN image embedder (as defined in \cite{batchnorm}) and word embeddings. The unrolled connections between the LSTM memories are in blue and they correspond to the recurrent connections in Figure~\ref{fig:lstm}. All LSTMs share the same parameters. }
308
+ \end{figure}
309
+ \paragraph{Training} The LSTM model is trained to predict each word of the
310
+ sentence after it has seen the image as well as all preceding words as defined by
311
+ $p(S_t | I, S_0, \ldots, S_{t-1})$. For this purpose, it is instructive to think
312
+ of the LSTM in unrolled form -- a copy of the LSTM memory is created for the
313
+ image and each sentence word such that all LSTMs share the same parameters and the
314
+ output $m_{t-1}$ of the LSTM at time $t-1$ is fed to the LSTM at time $t$ (see
315
+ Figure~\ref{fig:unrolled_lstm}). All recurrent connections are transformed to feed-forward connections in the
316
+ unrolled version. In more detail, if we denote by $I$ the input
317
+ image and by $S=(S_0,\ldots, S_N)$ a true sentence describing this image, the
318
+ unrolling procedure reads:
319
+ \begin{eqnarray}
320
+ x_{-1} &=& \textrm{CNN}(I)\\
321
+ x_t &=& W_e S_t, \quad t\in\{0\ldots N-1\}\quad \label{eqn:sparse}\\
322
+ p_{t+1} &=& \textrm{LSTM}(x_t), \quad t\in\{0\ldots N-1\}\quad
323
+ \end{eqnarray}
324
+ where we represent each word as a one-hot vector $S_t$ of dimension equal to the
325
+ size of the dictionary. Note that we denote by $S_0$ a special start word and by
326
+ $S_{N}$ a special stop word which designates the start and end of the sentence.
327
+ In particular by emitting the stop word the LSTM signals that a complete sentence
328
+ has been generated. Both the image and the words are mapped to the same space,
329
+ the image by using a vision CNN, the words by using word embedding $W_e$. The image
330
+ $I$ is only input once, at $t=-1$, to inform the LSTM about the image contents. We
331
+ empirically verified that feeding the image at each time step as an extra input yields
332
+ inferior results, as the network can explicitly exploit noise in the image and
333
+ overfits more easily.
334
+
335
+ Our loss is the sum of the negative log likelihood of the correct word at each step as follows:
336
+ \begin{equation}
337
+ L(I, S) = - \sum_{t=1}^N \log p_t(S_t) \; .
338
+ \end{equation}
339
+ The above loss is minimized w.r.t. all the parameters of the LSTM, the top layer of the
340
+ image embedder CNN and word embeddings $W_e$.
341
+
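+ The unrolling and loss above can be sketched as follows (illustration only, not
+ the implementation used in this work); \texttt{cnn}, \texttt{lstm\_step} (assumed
+ to return the new memory, cell state, and next-word distribution), the embedding
+ matrix \texttt{W\_e}, the weights \texttt{W}, and the initial state are all
+ placeholders.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def caption_nll(image, sentence_ids, cnn, lstm_step, W_e, W, state):
+     # sentence_ids = [S_0 (start), ..., S_N (stop)]
+     m, c = state
+     m, c, _ = lstm_step(cnn(image), m, c, W)     # x_{-1} = CNN(I); prediction unused
+     loss = 0.0
+     for t in range(len(sentence_ids) - 1):
+         x_t = W_e[:, sentence_ids[t]]            # x_t = W_e S_t (one-hot lookup)
+         m, c, p = lstm_step(x_t, m, c, W)        # p_{t+1} = LSTM(x_t)
+         loss -= np.log(p[sentence_ids[t + 1]])   # accumulate -log p_{t+1}(S_{t+1})
+     return loss
+ \end{verbatim}
+ 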
342
+ \paragraph{Inference}
343
+
344
+ There are multiple approaches that can be used to generate a sentence given
345
+ an image, with NIC. The first one is {\bf Sampling} where we just
346
+ sample the first word according to $p_1$, then provide the corresponding
347
+ embedding as input and sample $p_2$, continuing like this until we sample the
348
+ special end-of-sentence token or some maximum length.
349
+ The second one is {\bf BeamSearch}: iteratively
350
+ consider the set of the $k$ best sentences up to time
351
+ $t$ as candidates to generate sentences of size $t+1$, and keep only the
352
+ resulting best $k$ of them. This better approximates
353
+ $S = \arg\max_{S'} p(S'|I)$.
354
+ We used the BeamSearch approach in the following experiments, with a
355
+ beam of size 20. Using a beam size of 1 (i.e., greedy search) did degrade our
356
+ results by 2 BLEU points on average.
357
+
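+ A minimal beam-search decoder sketch follows (illustration only);
+ \texttt{step\_fn} is an assumed callable mapping a decoder state and the last
+ word id to the next state and a vector of log-probabilities over the vocabulary.
+ \begin{verbatim}
+ import numpy as np
+ 
+ def beam_search(step_fn, init_state, start_id, end_id, beam_size=20, max_len=20):
+     beams = [(0.0, [start_id], init_state)]      # (cumulative log-prob, words, state)
+     finished = []
+     for _ in range(max_len):
+         candidates = []
+         for score, words, state in beams:
+             next_state, log_probs = step_fn(state, words[-1])
+             for w in np.argsort(log_probs)[-beam_size:]:
+                 candidates.append((score + log_probs[w], words + [int(w)], next_state))
+         candidates.sort(key=lambda b: b[0], reverse=True)
+         beams = []
+         for cand in candidates:
+             (finished if cand[1][-1] == end_id else beams).append(cand)
+             if len(beams) == beam_size:
+                 break
+         if not beams:
+             break
+     return max(finished + beams, key=lambda b: b[0])[1]
+ \end{verbatim}
+ With \texttt{beam\_size=1} this reduces to the greedy search mentioned above.
+ 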
358
+
359
+ \section{Experiments}
360
+ \label{sec:exps}
361
+ We performed an extensive set of experiments to assess the effectiveness of our
362
+ model using several metrics, data sources, and model architectures, in order
363
+ to compare to prior art.
364
+
365
+ \subsection{Evaluation Metrics}
366
+ Although it is sometimes not clear whether a description should be deemed
367
+ successful or not given an image,
368
+ prior art has proposed several evaluation metrics. The most
369
+ reliable (but time-consuming) is to ask raters to give a subjective score
370
+ on the usefulness of each description given the image. In this paper, we used
371
+ this to reinforce that some of the automatic metrics indeed correlate with this
372
+ subjective score, following the guidelines proposed
373
+ in~\cite{hodosh2013framing}, which asks the
374
+ graders to evaluate each generated sentence with a scale from 1 to 4\footnote{
375
+ The raters are asked whether the image is
376
+ described without any errors, described with minor errors, with a somewhat
377
+ related description, or with an unrelated description, with a score of 4 being
378
+ the best and 1 being the worst.}.
379
+
380
+ For this metric, we set up an Amazon Mechanical Turk experiment. Each image was
381
+ rated by 2 workers. The typical level of agreement between workers
382
+ is $65\%$. In case of disagreement we simply average the scores and record the
383
+ average as the score. For variance analysis, we perform bootstrapping
384
+ (re-sampling the results with replacement and computing means/standard
385
+ deviation over the resampled results). Like~\cite{hodosh2013framing} we
386
+ report the fraction
387
+ of scores which are larger than or equal to a set of predefined thresholds.
388
+
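+ The bootstrapping described above can be sketched in a few lines of plain Python
+ (illustration only; the number of resamples is a free choice):
+ \begin{verbatim}
+ import random
+ 
+ def bootstrap_mean_std(scores, n_resamples=1000, seed=0):
+     # resample the per-image scores with replacement and recompute the mean
+     rng = random.Random(seed)
+     means = []
+     for _ in range(n_resamples):
+         sample = [rng.choice(scores) for _ in scores]
+         means.append(sum(sample) / len(sample))
+     mean = sum(means) / len(means)
+     std = (sum((m - mean) ** 2 for m in means) / len(means)) ** 0.5
+     return mean, std
+ \end{verbatim}
+ 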
389
+ The rest of the metrics can be computed automatically assuming one has access to
390
+ groundtruth, i.e.~human generated descriptions. The most commonly used metric
391
+ so far in the image description literature has been the
392
+ BLEU score~\cite{papineni2002},
393
+ which is a form of precision of word n-grams between generated and reference
394
+ sentences~\footnote{In this literature, most previous work report BLEU-1, i.e., they only compute precision at the unigram level, whereas BLEU-n is a geometric average of precision over 1- to n-grams.}.
395
+ Even though this metric has some obvious drawbacks, it has been shown to correlate
396
+ well with human evaluations. In this work, we corroborate this as well, as
397
+ we show in Section~\ref{sec:results}. An extensive evaluation protocol, as well
398
+ as the generated outputs of our system, can be found at \url{http://nic.droppages.com/}.
399
+
400
+ Besides BLEU, one can use the perplexity of the model for a given transcription
401
+ (which is closely related to our objective function in (\ref{eqn:obj})). The perplexity
402
+ is the geometric mean of the inverse probability for each predicted word. We
403
+ used this metric to make choices regarding model selection and hyperparameter
404
+ tuning in our held-out set, but we do not report it since BLEU is always preferred
405
+ \footnote{Even though it would be more desirable, optimizing for BLEU score yields
406
+ a discrete optimization problem. In general, perplexity and BLEU scores are fairly
407
+ correlated.}. A much more detailed discussion regarding metrics can be found in
408
+ \cite{cider}, and research groups working on this topic have been reporting
409
+ other metrics which are deemed more appropriate for evaluating captions. We report
410
+ two such metrics - METEOR and Cider - hoping for much more discussion and research
411
+ to arise regarding the choice of metric.
412
+
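+ As a small worked example of the perplexity mentioned above (the geometric mean
+ of the inverse per-word probabilities):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def perplexity(word_probs):
+     # geometric mean of the inverse probabilities: exp(-mean(log p))
+     word_probs = np.asarray(word_probs, dtype=float)
+     return float(np.exp(-np.mean(np.log(word_probs))))
+ 
+ # e.g. perplexity([0.25, 0.5, 0.1]) is roughly 4.31
+ \end{verbatim}
+ 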
413
+ Lastly, the current literature on image description
414
+ has also been using the proxy task of ranking a set of available
415
+ descriptions with respect to a given image (see for instance~\cite{kiros2014}).
416
+ Doing so has the advantage that one can use known ranking metrics like recall@k.
417
+ On the other hand, transforming the description generation task into a ranking
418
+ task is unsatisfactory: as the complexity of images to describe grows, together
419
+ with its dictionary, the number of possible sentences grows exponentially with
420
+ the size of the dictionary, and
421
+ the likelihood that a predefined sentence will fit a new image will go down
422
+ unless the number of such sentences also grows exponentially, which is not
423
+ realistic; not to mention the underlying computational complexity of evaluating
424
+ efficiently such a large corpus of stored sentences for each image.
425
+ The same argument has been used in speech recognition, where one has to
426
+ produce the sentence corresponding to a given acoustic sequence; while early
427
+ attempts concentrated on classification of isolated phonemes or words,
428
+ state-of-the-art approaches for this task are now generative and can produce
429
+ sentences from a large dictionary.
430
+
431
+ Now that our models can generate descriptions of reasonable quality,
432
+ and despite the ambiguities of evaluating an image description (where there
433
+ could be multiple valid descriptions not in the groundtruth),
434
+ we believe we should concentrate on evaluation metrics for the generation task
435
+ rather than for ranking.
436
+
437
+ \subsection{Datasets}
438
+ \label{sec:data}
439
+ For evaluation we use a number of datasets which consist of images and sentences in English describing these
440
+ images. The statistics of the datasets are as follows:
441
+ \begin{center}
442
+ \begin{tabular}{|l|c|c|c|}
443
+ \hline
444
+ \multirow{2}{*}{Dataset name} & \multicolumn{3}{|c|}{size} \\
445
+ \cline{2-4}
446
+ & train & valid. & test \\
447
+ \hline
448
+ \hline
449
+ Pascal VOC 2008 \cite{farhadi2010every} & - & - & 1000 \\
450
+ \hline
451
+ Flickr8k \cite{rashtchian2010collecting} & 6000 & 1000 & 1000 \\
452
+ \hline
453
+ Flickr30k \cite{hodoshimage} & 28000 & 1000 & 1000 \\
454
+ \hline
455
+ MSCOCO \cite{lin2014microsoft} & 82783 & 40504 & 40775 \\
456
+ \hline
457
+ SBU \cite{ordonez2011im2text} & 1M & - & - \\
458
+ \hline
459
+ \end{tabular}
460
+ \end{center}
461
+ With the exception of SBU, each image has been annotated by labelers
462
+ with 5 sentences that are
463
+ relatively visual and unbiased. SBU consists of
464
+ descriptions given by image owners when they uploaded them to Flickr. As
465
+ such they are not guaranteed to be visual or unbiased and thus this dataset has more noise.
466
+
467
+ The Pascal dataset is customarily used for testing only after a system has been trained on
468
+ different data, such as any of the other four datasets. In the case of SBU, we hold
469
+ out 1000 images for testing and train on the rest as
470
+ used by \cite{kuznetsova2014treetalk}. Similarly, we reserve 4K random images from the
471
+ MSCOCO validation set as test, called COCO-4k, and use it to report results in the following section.
472
+
473
+
474
+ \subsection{Results}
475
+ \label{sec:results}
476
+
477
+ Since our model is data driven and trained end-to-end, and given the abundance of
478
+ datasets, we wanted to answer
479
+ questions such as ``how dataset size affects generalization'',
480
+ ``what kinds of transfer learning it would be able to achieve'',
481
+ and ``how it would deal with weakly labeled examples''.
482
+ As a result, we performed experiments on five different datasets,
483
+ explained in Section~\ref{sec:data}, which enabled us to understand
484
+ our model in depth.
485
+
486
+ \subsubsection{Training Details}
487
+
488
+ Many of the challenges that we faced when training our models had to do with overfitting.
489
+ Indeed, purely supervised approaches require large amounts of data, but the datasets
490
+ that are of high quality have fewer than 100,000 images. The task
491
+ of assigning a description is strictly harder than object classification and
492
+ data driven approaches have only recently become dominant thanks to datasets as large as ImageNet
493
+ (with ten times more data than the datasets we described in this paper, with the exception of SBU).
494
+ As a result, we believe that, even though the results we obtained are quite good, the advantage
495
+ of our method versus most current human-engineered approaches will only increase in the next few years as training set sizes grow.
496
+
497
+ Nonetheless, we explored several techniques to deal with overfitting. The most obvious
498
+ way to not overfit is to initialize the weights of the CNN component of our system to
499
+ a pretrained model (e.g., on ImageNet). We did this in all the experiments (similar to~\cite{gong2014improving}),
500
+ and it did help quite a lot in terms of generalization. Another set of weights that could
501
+ be sensibly initialized are $W_e$, the word embeddings. We tried initializing them
502
+ from a large news corpus~\cite{mikolov2013}, but no significant gains were observed, and we decided
503
+ to just leave them uninitialized for simplicity. Lastly, we did some model level overfitting-avoiding
504
+ techniques. We tried dropout~\cite{zaremba2014} and ensembling models, as well as exploring the size
505
+ (i.e., capacity) of the model by trading off number of hidden units versus depth. Dropout and ensembling
506
+ gave a few BLEU points improvement, and that is what we report throughout the paper.
507
+
508
+ We trained all sets of weights using stochastic gradient descent
509
+ with fixed learning rate and no momentum.
510
+ All weights were randomly initialized except for the CNN weights,
511
+ which we left unchanged because changing them had a negative impact.
512
+ We used 512 dimensions for the embeddings and the size of the LSTM memory.
513
+
514
+ Descriptions were preprocessed with basic tokenization, keeping all words
515
+ that appeared at least 5 times in the training set.
516
+
517
+ \subsubsection{Generation Results}
518
+
519
+ We report our main results on all the relevant datasets in Tables~\ref{tab:coco} and \ref{tab:bleu}.
520
+ Since PASCAL does not have a training set, we used the system trained using MSCOCO (arguably
521
+ the largest and highest quality dataset for this task). The state-of-the-art results for PASCAL
522
+ and SBU did not use image features based on deep learning, so arguably a big improvement
523
+ on those scores comes from that change alone. The Flickr datasets have been used
524
+ recently~\cite{hodosh2013framing,baidu2014,kiros2014}, but mostly evaluated in a retrieval framework. A
525
+ notable exception is~\cite{baidu2014}, where they did both retrieval and generation, and which
526
+ yields the best performance on the Flickr datasets up to now.
527
+
528
+ Human scores in Table~\ref{tab:bleu} were computed by comparing one of the human captions against the other four.
529
+ We do this for each of the five raters, and average their BLEU scores. Since this gives a slight
530
+ advantage to our system, given the BLEU score is computed against five reference sentences
531
+ and not four, we add back to the human scores the average difference of having five references instead of four.
532
+
533
+ Given that the field has seen significant advances in the last few years, we do think
534
+ it is more meaningful to report BLEU-4, which is the standard in machine translation moving forward. Additionally,
535
+ we report metrics shown to correlate better with human evaluations in Table~\ref{tab:coco}\footnote{We
536
+ used the implementation of these metrics kindly provided in \url{http://www.mscoco.org}.}.
537
+ Despite recent efforts on better evaluation metrics \cite{cider}, our model fares strongly versus
538
+ human raters. However, when evaluating our captions using human raters (see Section~\ref{sec:human}),
539
+ our model fares much more poorly, suggesting more work is needed towards better metrics.
540
+ On the official test set for which labels are only available through the official website, our model had a 27.2 BLEU-4.
541
+
542
+ \begin{table}
543
+ \centering
544
+ \begin{small}
545
+ \begin{tabular}{|c|c|c|c|}
546
+ \hline
547
+ Metric & BLEU-4 & METEOR & CIDER \\
548
+ \hline
549
+ \hline
550
+ NIC & \bf{27.7} & \bf{23.7} & \bf{85.5} \\
551
+ \hline
552
+ Random & 4.6 & 9.0 & 5.1 \\
553
+ Nearest Neighbor & 9.9 & 15.7 & 36.5 \\
554
+ Human & 21.7 & 25.2 & 85.4 \\
555
+ \hline
556
+ \end{tabular}
557
+ \end{small}
558
+ \caption{Scores on the MSCOCO development set.}\label{tab:coco}
559
+ \end{table}
560
+
561
+ \begin{table}
562
+ \centering
563
+ \begin{small}
564
+ \begin{tabular}{|c|c|c|c|c|}
565
+ \hline
566
+ Approach & PASCAL & Flickr& Flickr& SBU \\
567
+ & (xfer) & 30k & 8k & \\
568
+ \hline
569
+ \hline
570
+ Im2Text~\cite{ordonez2011im2text} & & & & 11 \\
571
+ TreeTalk~\cite{kuznetsova2014treetalk} & & & & 19 \\
572
+ BabyTalk~\cite{kulkarni2011baby} & 25 & & & \\
573
+ Tri5Sem~\cite{hodosh2013framing} & & & 48 & \\
574
+ m-RNN~\cite{baidu2014} & & 55 & 58 & \\
575
+ MNLM~\cite{kiros2014}\footnotemark & & 56 & 51 & \\
576
+ \hline
577
+ SOTA & 25 & 56 & 58 & 19 \\
578
+ \hline
579
+ NIC & \bf{59} & \bf{66} & \bf{63} & \bf{28} \\
580
+ \hline
581
+ Human & 69 & 68 & 70 & \\
582
+ \hline
583
+ \end{tabular}
584
+ \end{small}
585
+ \caption{BLEU-1 scores. We only report previous work
586
+ results when available. SOTA stands for the current
587
+ state-of-the-art.}\label{tab:bleu}
588
+ \end{table}
589
+
590
+ \footnotetext{We computed these BLEU scores with the outputs that the authors of \cite{kiros2014} kindly provided for their OxfordNet system.}
591
+
592
+ \subsubsection{Transfer Learning, Data Size and Label Quality}
593
+
594
+ Since we have trained many models and we have several testing sets, we wanted to
595
+ study whether we could transfer a model to a different dataset, and how much the
596
+ mismatch in domain would be compensated with e.g. higher quality labels or more training
597
+ data.
598
+
599
+ The most obvious case for transfer learning and data size is between Flickr30k and Flickr8k. The two
600
+ datasets are similarly labeled as they were created by the same group.
601
+ Indeed, when training on Flickr30k (with about 4 times more training data),
602
+ the results obtained are 4 BLEU points better.
603
+ It is clear that in this case, we see gains by adding more training data
604
+ since the whole process is data-driven and overfitting prone.
605
+ MSCOCO is even bigger (5 times more
606
+ training data than Flickr30k), but since the collection process was done differently, there are likely
607
+ more differences in vocabulary and a larger mismatch. Indeed, all the BLEU scores degrade by 10 points.
608
+ Nonetheless, the descriptions are still reasonable.
609
+
610
+ Since PASCAL has no official training set and was collected independently of Flickr and MSCOCO, we
611
+ report transfer learning from MSCOCO (in Table~\ref{tab:bleu}). Doing transfer learning from
612
+ Flickr30k yielded worse results with BLEU-1 at 53 (cf. 59).
613
+
614
+ Lastly, even though SBU has weak labeling (i.e., the labels were captions and not
615
+ human generated descriptions), the task is much harder with a much larger and noisier
616
+ vocabulary. However, much more data is available for training. When running the MSCOCO
617
+ model on SBU, our performance degrades from 28 down to 16.
618
+
619
+ \subsubsection{Generation Diversity Discussion}
620
+
621
+ Having trained a generative model that gives $p(S|I)$, an obvious question is
622
+ whether the model generates novel captions, and whether the generated captions
623
+ are both diverse and high quality.
624
+ Table~\ref{tab:diversity} shows some samples when returning the N-best list from our
625
+ beam search decoder instead of the best hypothesis. Notice how the samples are
626
+ diverse and may show different aspects from the same image.
627
+ The agreement in BLEU score between the top 15 generated sentences is 58, which is similar to the agreement among the human captions. This indicates the amount of diversity
628
+ our model generates.
629
+ In bold are the sentences that
630
+ are not present in the training set. If we take the best candidate, the
631
+ sentence is present in the training set 80\% of the time.
632
+ This is not too surprising given that the amount
633
+ of training data is quite small, so it is relatively easy for the model to pick ``exemplar''
634
+ sentences and use them to generate descriptions.
635
+ If we instead analyze the top 15 generated sentences, about half of the time we
636
+ see a completely novel description, but still with a similar BLEU score,
637
+ indicating that they are of sufficient quality, yet they
638
+ provide a healthy diversity.
639
+
640
+ \begin{table}[htb]
641
+ \begin{center}
642
+ \begin{tabular}{|l|}\hline
643
+ A man throwing a frisbee in a park. \\
644
+ {\bf A man holding a frisbee in his hand.} \\
645
+ {\bf A man standing in the grass with a frisbee.} \\
646
+ \hline
647
+ A close up of a sandwich on a plate. \\
648
+ A close up of a plate of food with french fries. \\
649
+ A white plate topped with a cut in half sandwich. \\
650
+ \hline
651
+ A display case filled with lots of donuts. \\
652
+ {\bf A display case filled with lots of cakes.} \\
653
+ {\bf A bakery display case filled with lots of donuts.} \\
654
+ \hline
655
+ \end{tabular}
656
+ \end{center}
657
+ \caption{{N-best examples from the MSCOCO test set. Bold lines indicate a novel sentence not present in the training set.}}
658
+ \label{tab:diversity}
659
+ \end{table}
660
+
661
+ \subsubsection{Ranking Results}
662
+
663
+ While we think ranking is an unsatisfactory way to evaluate description
664
+ generation from images, many papers report ranking scores,
665
+ using the set of testing captions as candidates to rank given a test image.
666
+ The approach that works best on these metrics (MNLM)
667
+ specifically implemented a ranking-aware loss. Nevertheless,
668
+ NIC is doing surprisingly well on both ranking tasks (ranking descriptions
669
+ given images, and ranking images given descriptions),
670
+ as can be seen in
671
+ Tables~\ref{tab:recall@10} and~\ref{tab:recall@1030k}. Note that for the Image Annotation task, we normalized our scores similar to what~\cite{baidu2014} used.
672
+
673
+ \begin{table}
674
+ \centering
675
+ \begin{small}
676
+ \setlength{\tabcolsep}{3pt}
677
+ \begin{tabular}{|c|ccc|ccc|}
678
+ \hline
679
+ \multirow{2}{*}{Approach} & \multicolumn{3}{c|}{Image Annotation} & \multicolumn{3}{c|}{Image Search} \\
680
+ & R@1 & R@10 & Med $r$ & R@1 & R@10 & Med $r$ \\
681
+ \hline
682
+ \hline
683
+ DeFrag~\cite{karpathy2014deep} & 13 & 44 & 14 & 10 & 43 & 15 \\
684
+ m-RNN~\cite{baidu2014} & 15 & 49 & 11 & 12 & 42 & 15\\
685
+ MNLM~\cite{kiros2014} & 18 & 55 & 8 & 13 & 52 & 10 \\
686
+ \hline
687
+ NIC & \bf{20} & \bf{61} & \bf{6} & \bf{19} & \bf{64} & \bf{5} \\
688
+ \hline
689
+ \end{tabular}
690
+ \end{small}
691
+ \caption{Recall@k and median rank on Flickr8k.\label{tab:recall@10}}
692
+ \end{table}
693
+
694
+ \begin{table}
695
+ \centering
696
+ \begin{small}
697
+ \setlength{\tabcolsep}{3pt}
698
+ \begin{tabular}{|c|ccc|ccc|}
699
+ \hline
700
+ \multirow{2}{*}{Approach} & \multicolumn{3}{c|}{Image Annotation} & \multicolumn{3}{c|}{Image Search} \\
701
+ & R@1 & R@10 & Med $r$ & R@1 & R@10 & Med $r$ \\
702
+ \hline
703
+ \hline
704
+ DeFrag~\cite{karpathy2014deep} & 16 & 55 & 8 & 10 & 45 & 13 \\
705
+ m-RNN~\cite{baidu2014} & 18 & 51 & 10 & 13 & 42 & 16\\
706
+ MNLM~\cite{kiros2014} & \bf{23} & \bf{63} & \bf{5} & \bf{17} & \bf{57} & \bf{8} \\
707
+ \hline
708
+ NIC & 17 & 56 & 7 & \bf{17} & \bf{57} & \bf{7} \\
709
+ \hline
710
+ \end{tabular}
711
+ \end{small}
712
+ \caption{Recall@k and median rank on Flickr30k.\label{tab:recall@1030k}}
713
+ \end{table}
714
+
715
+
716
+ \subsubsection{Human Evaluation}
717
+ \label{sec:human}
718
+
719
+ Figure~\ref{fig:turk_eval_numeric} shows the result of the human evaluations
720
+ of the descriptions provided by NIC, as well as a reference system and
721
+ groundtruth on various datasets. We can see that NIC is better than the reference
722
+ system, but clearly worse than the groundtruth, as expected.
723
+ This shows that BLEU is not a perfect metric, as it does not capture well
724
+ the difference between NIC and human descriptions assessed by raters.
725
+ Examples of rated images can be seen in Figure~\ref{fig:turk_eval_examples}.
726
+ It is interesting to see, for instance in the second image of the first
727
+ column, how the model was able to notice the frisbee given its size.
728
+
729
+ \begin{figure}
730
+ \begin{center}
731
+ \includegraphics[width=1.0\columnwidth]{turk_eval}
732
+ \end{center}
733
+ \vspace{-0.5cm}
734
+ \caption{\label{fig:turk_eval_numeric} {\em Flickr-8k: NIC}: predictions produced by NIC on the Flickr8k test set (average score: 2.37); {\em Pascal: NIC}: (average score: 2.45); {\em COCO-1k: NIC}: A subset of 1000 images from the MSCOCO test set with descriptions produced by NIC (average score: 2.72); {\em Flickr-8k: ref}: these are results from~\cite{hodosh2013framing} on Flickr8k rated using the same protocol, as a baseline (average score: 2.08); {\em Flickr-8k: GT}: we rated the groundtruth labels from Flickr8k using the same protocol. This provides us with a ``calibration'' of the scores (average score: 3.89)}
735
+ \end{figure}
736
+
737
+ \begin{figure*}
738
+ \begin{center}
739
+ \includegraphics[width=\textwidth]{nic_rated.jpg}
740
+ \vspace{-1cm}
741
+ \end{center}
742
+ \caption{\label{fig:turk_eval_examples} A selection of evaluation results, grouped by human rating.}
743
+ \end{figure*}
744
+
745
+
746
+ \subsubsection{Analysis of Embeddings}
747
+
748
+ In order to represent the previous word $S_{t-1}$ as input to the decoding LSTM
749
+ producing $S_t$, we use word embedding vectors~\cite{mikolov2013},
750
+ which have the advantage of
751
+ being independent of the size of the dictionary (contrary to a simpler
752
+ one-hot-encoding approach).
753
+ Furthermore, these word embeddings can be jointly trained with the rest of the
754
+ model. It is remarkable to see how the learned representations
755
+ have captured some semantic structure from the statistics of the language.
756
+ Table~\ref{tab:embeddings} shows, for a few example words, the nearest other
757
+ words found in the learned embedding space.
758
+
759
+ Note how some of the relationships
760
+ learned by the model will help the vision component. Indeed, having ``horse'', ``pony'',
761
+ and ``donkey'' close to each other will encourage the CNN to extract features that
762
+ are relevant to horse-looking animals.
763
+ We hypothesize that, in the extreme case where we see very few examples of a class (e.g., ``unicorn''),
764
+ its proximity to other word embeddings (e.g., ``horse'') should
765
+ provide a lot more information, which would be completely lost with more
766
+ traditional bag-of-words based approaches.
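+ As an illustration, neighbors such as those in Table~\ref{tab:embeddings} can be read off a trained embedding matrix with a few lines of Python (a sketch; the vocabulary and embedding matrix below are toy stand-ins rather than the trained model):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def nearest_words(word, vocab, E, k=5):
+     # rank all words by cosine similarity to `word` in embedding space
+     E = E / np.linalg.norm(E, axis=1, keepdims=True)
+     sims = E @ E[vocab.index(word)]
+     order = np.argsort(-sims)
+     return [vocab[i] for i in order if vocab[i] != word][:k]
+ 
+ vocab = ["horse", "pony", "donkey", "car", "street"]
+ E = np.random.randn(len(vocab), 512)   # stand-in for learned embeddings
+ print(nearest_words("horse", vocab, E, k=3))
+ \end{verbatim}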
767
+
768
+
769
+ \begin{table}[htb]
770
+
771
+ \begin{center}
772
+ \begin{tabular}{|l|l|}\hline
773
+ Word & Neighbors \\ \hline
774
+ car & van, cab, suv, vehicule, jeep \\
775
+ boy & toddler, gentleman, daughter, son \\
776
+ street & road, streets, highway, freeway \\
777
+ horse & pony, donkey, pig, goat, mule \\
778
+ computer & computers, pc, crt, chip, compute \\ \hline
779
+ \end{tabular}
780
+ \end{center}
781
+ \caption{Nearest neighbors of a few example words.\label{tab:embeddings}}
782
+ \end{table}
783
+
784
+
785
+
786
+ \section{Conclusion}
787
+ \label{sec:conclusion}
788
+ We have presented NIC, an
789
+ end-to-end neural network system that can automatically view an image
790
+ and generate a reasonable description in plain English.
791
+ NIC is based on a convolutional neural network that encodes an image into
792
+ a compact representation, followed by a recurrent neural network that
793
+ generates a corresponding sentence. The model is trained to maximize
794
+ the likelihood of the sentence given the image.
795
+ Experiments on several datasets
796
+ show the robustness of NIC in terms of qualitative results (the
797
+ generated sentences are very reasonable) and quantitative evaluations,
798
+ using either ranking metrics or BLEU, a metric used in machine translation
799
+ to evaluate the quality of generated sentences.
800
+ It is clear from these experiments that, as the size of the available
801
+ datasets for image description increases, so will the performance of
802
+ approaches like NIC.
803
+ Furthermore, it will be interesting to see how one can use unsupervised
804
+ data, both from images alone and text alone, to improve image description
805
+ approaches.
806
+
807
+ \section*{Acknowledgement}
808
+
809
+ We would like to thank Geoffrey Hinton, Ilya Sutskever, Quoc Le, Vincent Vanhoucke, and Jeff Dean for useful discussions on the ideas behind the paper and the write-up.
810
+
811
+ {\small
812
+ \bibliographystyle{ieee}
813
+ \bibliography{egbib}
814
+ }
815
+
816
+ \end{document}
papers/1411/1411.5018.tex ADDED
@@ -0,0 +1,774 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass[letterpaper,compsoc,twoside]{IEEEtran}
2
+ \usepackage{fixltx2e} \usepackage{cmap} \usepackage{ifthen}
3
+ \usepackage[T1]{fontenc}
4
+ \usepackage[utf8]{inputenc}
5
+ \usepackage{amsmath}
6
+
7
+ \usepackage[font={small,it},labelfont=bf]{caption}
8
+ \usepackage{float}
9
+
10
+ \setcounter{secnumdepth}{0}
11
+
12
+ \usepackage{scipy}
13
+ \makeatletter
14
+ \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax \let\PY@ul=\relax \let\PY@tc=\relax \let\PY@bc=\relax \let\PY@ff=\relax}
15
+ \def\PY@tok#1{\csname PY@tok@#1\endcsname}
16
+ \def\PY@toks#1+{\ifx\relax#1\empty\else \PY@tok{#1}\expandafter\PY@toks\fi}
17
+ \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
18
+ \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
19
+
20
+ \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
21
+ \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
22
+ \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
23
+ \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
24
+ \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
25
+ \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.56}{##1}}}
26
+ \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.38,0.84}{##1}}}
27
+ \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
28
+ \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
29
+ \expandafter\def\csname PY@tok@cs\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.56}{##1}}\def\PY@bc##1{\setlength{\fboxsep}{0pt}\colorbox[rgb]{1.00,0.94,0.94}{\strut ##1}}}
30
+ \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
31
+ \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.38,0.84}{##1}}}
32
+ \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
33
+ \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.20,0.20,0.20}{##1}}}
34
+ \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
35
+ \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
36
+ \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
37
+ \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.84,0.33,0.22}{##1}}}
38
+ \expandafter\def\csname PY@tok@nl\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.13,0.44}{##1}}}
39
+ \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.05,0.52,0.71}{##1}}}
40
+ \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.38,0.68,0.84}{##1}}}
41
+ \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
42
+ \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
43
+ \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.05,0.52,0.71}{##1}}}
44
+ \expandafter\def\csname PY@tok@nd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.33,0.33,0.33}{##1}}}
45
+ \expandafter\def\csname PY@tok@ne\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
46
+ \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.02,0.16,0.49}{##1}}}
47
+ \expandafter\def\csname PY@tok@si\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.44,0.63,0.82}{##1}}}
48
+ \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
49
+ \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.38,0.84}{##1}}}
50
+ \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.02,0.16,0.45}{##1}}}
51
+ \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.38,0.84}{##1}}}
52
+ \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
53
+ \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.78,0.36,0.04}{##1}}}
54
+ \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
55
+ \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
56
+ \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.78,0.36,0.04}{##1}}}
57
+ \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
58
+ \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.56}{##1}}}
59
+ \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
60
+ \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.56}{##1}}}
61
+ \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
62
+ \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
63
+ \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
64
+ \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.32,0.47,0.09}{##1}}}
65
+ \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.14,0.33,0.53}{##1}}}
66
+ \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
67
+ \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.13,0.50,0.31}{##1}}}
68
+ \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
69
+ \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
70
+ \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
71
+ \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
72
+ \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
73
+ \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
74
+ \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.56,0.13,0.00}{##1}}}
75
+ \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
76
+ \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
77
+ \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.44,0.13}{##1}}}
78
+ \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
79
+ \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.44,0.63}{##1}}}
80
+
81
+ \def\PYZbs{\char`\\}
82
+ \def\PYZus{\char`\_}
83
+ \def\PYZob{\char`\{}
84
+ \def\PYZcb{\char`\}}
85
+ \def\PYZca{\char`\^}
86
+ \def\PYZam{\char`\&}
87
+ \def\PYZlt{\char`\<}
88
+ \def\PYZgt{\char`\>}
89
+ \def\PYZsh{\char`\#}
90
+ \def\PYZpc{\char`\%}
91
+ \def\PYZdl{\char`\$}
92
+ \def\PYZhy{\char`\-}
93
+ \def\PYZsq{\char`\'}
94
+ \def\PYZdq{\char`\"}
95
+ \def\PYZti{\char`\~}
96
+ \def\PYZat{@}
97
+ \def\PYZlb{[}
98
+ \def\PYZrb{]}
99
+ \makeatother
100
+
101
+
102
+
103
+
104
+ \providecommand*{\DUfootnotemark}[3]{\raisebox{1em}{\hypertarget{#1}{}}\hyperlink{#2}{\textsuperscript{#3}}}
105
+ \providecommand{\DUfootnotetext}[4]{\begingroup \renewcommand{\thefootnote}{\protect\raisebox{1em}{\protect\hypertarget{#1}{}}\protect\hyperlink{#2}{#3}}\footnotetext{#4}\endgroup }
106
+
107
+ \providecommand*{\DUrole}[2]{\ifcsname DUrole#1\endcsname \csname DUrole#1\endcsname{#2}\else \ifcsname docutilsrole#1\endcsname \csname docutilsrole#1\endcsname{#2}\else #2\fi \fi }
108
+
109
+ \ifthenelse{\isundefined{\hypersetup}}{
110
+ \usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue]{hyperref}
111
+ \urlstyle{same} }{}
112
+
113
+
114
+ \begin{document}
115
+ \newcounter{footnotecounter}\title{Frequentism and Bayesianism: A Python-driven Primer}\author{Jake VanderPlas$^{\setcounter{footnotecounter}{1}\fnsymbol{footnotecounter}\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$\setcounter{footnotecounter}{1}\thanks{\fnsymbol{footnotecounter} Corresponding author: \protect\href{mailto:jakevdp@cs.washington.edu}{jakevdp@cs.washington.edu}}\setcounter{footnotecounter}{2}\thanks{\fnsymbol{footnotecounter} eScience Institute, University of Washington}\thanks{
116
+
117
+ \noindent Copyright\,\copyright\,2014 Jake VanderPlas. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.}\\
118
+ }\maketitle
119
+ \renewcommand{\leftmark}{PROC. OF THE 13th PYTHON IN SCIENCE CONF. (SCIPY 2014)}
120
+ \renewcommand{\rightmark}{FREQUENTISM AND BAYESIANISM: A PYTHON-DRIVEN PRIMER}
121
+
122
+
123
+
124
+
125
+ \InputIfFileExists{page_numbers.tex}{}{}
126
+ \newcommand*{\docutilsroleref}{\ref}
127
+ \newcommand*{\docutilsrolelabel}{\label}
128
+ \begin{abstract}This paper presents a brief, semi-technical comparison of the essential features of the frequentist and Bayesian approaches to statistical inference, with several illustrative examples implemented in Python. The differences between frequentism and Bayesianism fundamentally stem from differing definitions of probability, a philosophical divide which leads to distinct approaches to the solution of statistical problems as well as contrasting ways of asking and answering questions about unknown parameters. After an example-driven discussion of these differences, we briefly compare several leading Python statistical packages which implement frequentist inference using classical methods and Bayesian inference using Markov Chain Monte Carlo.\DUfootnotemark{id1}{blog}{1}\end{abstract}\begin{IEEEkeywords}statistics, frequentism, Bayesian inference\end{IEEEkeywords}\DUfootnotetext{blog}{id1}{1}{\phantomsection\label{blog}
129
+ This paper draws heavily from content originally published in a series of posts on the author's blog, \href{http://jakevdp.github.io/}{Pythonic Perambulations} \cite{VanderPlas2014}.}
130
+
131
+
132
+ \subsection{Introduction\label{introduction}}
133
+
134
+
135
+ One of the first things a scientist in a data-intensive field hears about statistics is that there are two different approaches: frequentism and Bayesianism. Despite their importance, many researchers never have the opportunity to learn the distinctions between them and the different practical approaches that result.
136
+
137
+ This paper seeks to synthesize the philosophical and pragmatic aspects of this debate, so that scientists who use these approaches might be better prepared to understand the tools available to them. Along the way we will explore the fundamental philosophical disagreement between frequentism and Bayesianism, explore the practical aspects of how this disagreement affects data analysis, and discuss the ways that these practices may affect the interpretation of scientific results.
138
+
139
+ This paper is written for scientists who have picked up some statistical knowledge along the way, but who may not fully appreciate the philosophical differences between frequentist and Bayesian approaches and the effect these differences have on both the computation and interpretation of statistical results. Because this passing statistics knowledge generally leans toward frequentist principles, this paper will go into more depth on the details of Bayesian rather than frequentist approaches. Still, it is not meant to be a full introduction to either class of methods. In particular, concepts such as the likelihood are assumed rather than derived, and many advanced Bayesian and frequentist diagnostic tests are left out in favor of illustrating the most fundamental aspects of the approaches. For a more complete treatment, see, e.g. \cite{Wasserman2004} or \cite{Gelman2004}.
140
+
141
+ \subsection{The Disagreement: The Definition of Probability\label{the-disagreement-the-definition-of-probability}}
142
+
143
+
144
+ Fundamentally, the disagreement between frequentists and Bayesians concerns the definition of probability.
145
+
146
+ For frequentists, probability only has meaning in terms of \textbf{a limiting case of repeated measurements}. That is, if an astronomer measures the photon flux $F$ from a given non-variable star, then measures it again, then again, and so on, each time the result will be slightly different due to the statistical error of the measuring device. In the limit of many measurements, the \emph{frequency} of any given value indicates the probability of measuring that value. For frequentists, \textbf{probabilities are fundamentally related to frequencies of events}. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the \emph{true} flux of the star: the true flux is, by definition, a single fixed value, and to talk about an extended frequency distribution for a fixed value is nonsense.
147
+
148
+ For Bayesians, the concept of probability is extended to cover \textbf{degrees of certainty about statements}. A Bayesian might claim to know the flux $F$ of a star with some probability $P(F)$: that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of the researcher's knowledge of what the true flux is. For Bayesians, \textbf{probabilities are fundamentally related to their own knowledge about an event}. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the \emph{true} flux of a star lies in a given range. That probability codifies our knowledge of the value based on prior information and available data.
149
+
150
+ The surprising thing is that this arguably subtle difference in philosophy can lead, in practice, to vastly different approaches to the statistical analysis of data. Below we will explore a few examples chosen to illustrate the differences in approach, along with associated Python code to demonstrate the practical aspects of the frequentist and Bayesian approaches.
151
+
152
+ \subsection{A Simple Example: Photon Flux Measurements\label{a-simple-example-photon-flux-measurements}}
153
+
154
+
155
+ First we will compare the frequentist and Bayesian approaches to the solution of an extremely simple problem. Imagine that we point a telescope to the sky, and observe the light coming from a single star. For simplicity, we will assume that the star's true photon flux is constant with time, i.e. that it has a fixed value $F$; we will also ignore effects like sky background systematic errors. We will assume that a series of $N$ measurements are performed, where the $i^{\rm th}$ measurement reports the observed flux $F_i$ and error $e_i$.\DUfootnotemark{id5}{note-about-errors}{2} The question is, given this set of measurements $D = \{F_i,e_i\}$, what is our best estimate of the true flux $F$?\DUfootnotetext{note-about-errors}{id5}{2}{
156
+ We will make the reasonable assumption of normally-distributed measurement errors. In a Frequentist perspective, $e_i$ is the standard deviation of the results of the single measurement event in the limit of (imaginary) repetitions of \emph{that event}. In the Bayesian perspective, $e_i$ describes the probability distribution which quantifies our knowledge of $F$ given the measured value $F_i$.}
157
+
158
+
159
+ First we will use Python to generate some toy data to demonstrate the two approaches to the problem. We will draw 50 samples $F_i$ with a mean of 1000 (in arbitrary units) and a (known) error $e_i$:\vspace{1mm}
160
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
161
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)} \PY{c}{\PYZsh{} for reproducibility}
162
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{e} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{normal}\PY{p}{(}\PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{50}\PY{p}{)}
163
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{F} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{normal}\PY{p}{(}\PY{l+m+mi}{1000}\PY{p}{,} \PY{n}{e}\PY{p}{)}
164
+ \end{Verbatim}
165
+ \vspace{1mm}
166
+ In this toy example we already know the true flux $F$, but the question is this: given our measurements and errors, what is our best point estimate of the true flux? Let's look at a frequentist and a Bayesian approach to solving this.
167
+
168
+ \subsubsection{Frequentist Approach to Flux Measurement\label{frequentist-approach-to-flux-measurement}}
169
+
170
+
171
+ We will start with the classical frequentist maximum likelihood approach. Given a single observation $D_i = (F_i, e_i)$, we can compute the probability distribution of the measurement given the true flux $F$, under our assumption of Gaussian errors:\begin{equation*}
172
+ P(D_i|F) = \left(2\pi e_i^2\right)^{-1/2} \exp{\left(\frac{-(F_i - F)^2}{2 e_i^2}\right)}.
173
+ \end{equation*}This should be read ``the probability of $D_i$ given $F$ equals ...''. You should recognize this as a normal distribution with mean $F$ and standard deviation $e_i$. We construct the \emph{likelihood} by computing the product of the probabilities for each data point:\begin{equation*}
174
+ \mathcal{L}(D|F) = \prod_{i=1}^N P(D_i|F)
175
+ \end{equation*}Here $D = \{D_i\}$ represents the entire set of measurements. For reasons of both analytic simplicity and numerical accuracy, it is often more convenient to instead consider the log-likelihood; combining the previous two equations gives\begin{equation*}
176
+ \log\mathcal{L}(D|F) = -\frac{1}{2} \sum_{i=1}^N \left[ \log(2\pi e_i^2) + \frac{(F_i - F)^2}{e_i^2} \right].
177
+ \end{equation*}We would like to determine the value of $F$ which maximizes the likelihood. For this simple problem, the maximization can be computed analytically (e.g. by setting $d\log\mathcal{L}/dF|_{\hat{F}} = 0$), which results in the following point estimate of $F$:\begin{equation*}
178
+ \hat{F} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = 1/e_i^2
179
+ \end{equation*}The result is a simple weighted mean of the observed values. Notice that in the case of equal errors $e_i$, the weights cancel and $\hat{F}$ is simply the mean of the observed data.
180
+
181
+ We can go further and ask what the uncertainty of our estimate is. One way this can be accomplished in the frequentist approach is to construct a Gaussian approximation to the peak likelihood; in this simple case the fit can be solved analytically to give:\begin{equation*}
182
+ \sigma_{\hat{F}} = \left(\sum_{i=1}^N w_i \right)^{-1/2}
183
+ \end{equation*}This result can be evaluated in Python as follows:\vspace{1mm}
184
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
185
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{w} \PY{o}{=} \PY{l+m+mf}{1.} \PY{o}{/} \PY{n}{e} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}
186
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{F\PYZus{}hat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{w} \PY{o}{*} \PY{n}{F}\PY{p}{)} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{w}\PY{p}{)}
187
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{sigma\PYZus{}F} \PY{o}{=} \PY{n}{w}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{o}{\PYZhy{}}\PY{l+m+mf}{0.5}
188
+ \end{Verbatim}
189
+ \vspace{1mm}
190
+ For our particular data, the result is $\hat{F} = 999 \pm 4$.
191
+
192
+ \subsubsection{Bayesian Approach to Flux Measurement\label{bayesian-approach-to-flux-measurement}}
193
+
194
+
195
+ The Bayesian approach, as you might expect, begins and ends with probabilities. The fundamental result of interest is our knowledge of the parameters in question, codified by the probability $P(F|D)$. To compute this result, we next apply Bayes' theorem, a fundamental law of probability:\begin{equation*}
196
+ P(F|D) = \frac{P(D|F)~P(F)}{P(D)}
197
+ \end{equation*}Though Bayes' theorem is where Bayesians get their name, it is important to note that it is not this theorem itself that is controversial, but the Bayesian \emph{interpretation of probability} implied by the term $P(F|D)$. While the above formulation makes sense given the Bayesian view of probability, the setup is fundamentally contrary to the frequentist philosophy, which says that probabilities have no meaning for fixed model parameters like $F$. In the Bayesian conception of probability, however, this poses no problem.
198
+
199
+ Let's take a look at each of the terms in this expression:\begin{itemize}
200
+
201
+ \item
202
+
203
+ $P(F|D)$: The \textbf{posterior}, which is the probability of the model parameters given the data.
204
+ \item
205
+
206
+ $P(D|F)$: The \textbf{likelihood}, which is proportional to the $\mathcal{L}(D|F)$ used in the frequentist approach.
207
+ \item
208
+
209
+ $P(F)$: The \textbf{model prior}, which encodes what we knew about the model before considering the data $D$.
210
+ \item
211
+
212
+ $P(D)$: The \textbf{model evidence}, which in practice amounts to simply a normalization term.
213
+ \end{itemize}
214
+
215
+
216
+ If we set the prior $P(F) \propto 1$ (a \emph{flat prior}), we find\begin{equation*}
217
+ P(F|D) \propto \mathcal{L}(D|F).
218
+ \end{equation*}That is, with a flat prior on $F$, the Bayesian posterior is maximized at precisely the same value as the frequentist result! So despite the philosophical differences, we see that the Bayesian and frequentist point estimates are equivalent for this simple problem.
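+ To make this equivalence concrete, a short sketch (using the \texttt{F} and \texttt{e} arrays generated above) evaluates the log-posterior on a grid of flux values under the flat prior and confirms that it peaks at the weighted mean $\hat{F}$:
+ \begin{verbatim}
+ import numpy as np
+ 
+ # flat prior: log-posterior = log-likelihood + constant
+ F_grid = np.linspace(975, 1025, 10001)
+ log_L = -0.5 * np.sum(np.log(2 * np.pi * e[:, None] ** 2)
+                       + (F[:, None] - F_grid) ** 2 / e[:, None] ** 2,
+                       axis=0)
+ print(F_grid[np.argmax(log_L)])   # ~999, matching the frequentist F_hat
+ \end{verbatim}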
219
+
220
+ You might notice that we glossed over one important piece here: the prior, $P(F)$, which we assumed to be flat.\DUfootnotemark{id6}{note-flat}{3} The prior allows inclusion of other information into the computation, which becomes very useful in cases where multiple measurement strategies are being combined to constrain a single model (as is the case in, e.g. cosmological parameter estimation). The necessity to specify a prior, however, is one of the more controversial pieces of Bayesian analysis.\DUfootnotetext{note-flat}{id6}{3}{
221
+ A flat prior is an example of an improper prior: that is, it cannot be normalized. In practice, we can remedy this by imposing some bounds on possible values: say, $0 < F < F_{tot}$, the total flux of all sources in the sky. As this normalization term also appears in the denominator of Bayes' theorem, it does not affect the posterior.}
222
+
223
+
224
+ A frequentist will point out that the prior is problematic when no true prior information is available. Though it might seem straightforward to use an \textbf{uninformative prior} like the flat prior mentioned above, there are some surprising subtleties involved.\DUfootnotemark{id7}{uninformative}{4} It turns out that in many situations, a truly uninformative prior cannot exist! Frequentists point out that the subjective choice of a prior which necessarily biases the result should have no place in scientific data analysis.
225
+
226
+ A Bayesian would counter that frequentism doesn't solve this problem, but simply skirts the question. Frequentism can often be viewed as simply a special case of the Bayesian approach for some (implicit) choice of the prior: a Bayesian would say that it's better to make this implicit choice explicit, even if the choice might include some subjectivity. Furthermore, as we will see below, the question frequentism answers is not always the question the researcher wants to ask.\DUfootnotetext{uninformative}{id7}{4}{\phantomsection\label{uninformative}
227
+ The flat prior in this case can be motivated by maximum entropy; see, e.g. \cite{Jeffreys1946}. Still, the use of uninformative priors like this often raises eyebrows among frequentists: there are good arguments that even ``uninformative'' priors can add information; see e.g. \cite{Evans2002}.}
228
+
229
+
230
+ \subsection{Where The Results Diverge\label{where-the-results-diverge}}
231
+
232
+
233
+ In the simple example above, the frequentist and Bayesian approaches give basically the same result. In light of this, arguments over the use of a prior and the philosophy of probability may seem frivolous. However, while it is easy to show that the two approaches are often equivalent for simple problems, it is also true that they can diverge greatly in other situations. In practice, this divergence most often makes itself clear in two different ways:\newcounter{listcnt0}
234
+ \begin{list}{\arabic{listcnt0}.}
235
+ {
236
+ \usecounter{listcnt0}
237
+ \setlength{\rightmargin}{\leftmargin}
238
+ }
239
+
240
+ \item
241
+
242
+ The handling of nuisance parameters: i.e. parameters which affect the final result, but are not otherwise of interest.
243
+ \item
244
+
245
+ The different handling of uncertainty: for example, the subtle (and often overlooked) difference between frequentist confidence intervals and Bayesian credible regions.\end{list}
246
+
247
+
248
+ We will discuss examples of these below.
249
+
250
+ \subsection{Nuisance Parameters: Bayes' Billiards Game\label{nuisance-parameters-bayes-billiards-game}}
251
+
252
+
253
+ We will start by discussing the first point: nuisance parameters. A nuisance parameter is any quantity whose value is not directly relevant to the goal of an analysis, but is nevertheless required to determine the result which is of interest. For example, we might have a situation similar to the flux measurement above, but in which the errors $e_i$ are unknown. One potential approach is to treat these errors as nuisance parameters.
254
+
255
+ Here we consider an example of nuisance parameters borrowed from \cite{Eddy2004} that, in one form or another, dates all the way back to the posthumously-published 1763 paper written by Thomas Bayes himself \cite{Bayes1763}. The setting is a gambling game in which Alice and Bob bet on the outcome of a process they can't directly observe.
256
+
257
+ Alice and Bob enter a room. Behind a curtain there is a billiard table, which they cannot see. Their friend Carol rolls a ball down the table, and marks where it lands. Once this mark is in place, Carol begins rolling new balls down the table. If the ball lands to the left of the mark, Alice gets a point; if it lands to the right of the mark, Bob gets a point. We can assume that Carol's rolls are unbiased: that is, the balls have an equal chance of ending up anywhere on the table. The first person to reach six points wins the game.
258
+
259
+ Here the location of the mark (determined by the first roll) can be considered a nuisance parameter: it is unknown and not of immediate interest, but it clearly must be accounted for when predicting the outcome of subsequent rolls. If this first roll settles far to the right, then subsequent rolls will favor Alice. If it settles far to the left, Bob will be favored instead.
260
+
261
+ Given this setup, we seek to answer this question: \emph{In a particular game, after eight rolls, Alice has five points and Bob has three points. What is the probability that Bob will get six points and win the game?}
262
+
263
+ Intuitively, we realize that because Alice received five of the eight points, the marker placement likely favors her. Given that she has three opportunities to get a sixth point before Bob can win, she seems to have clinched it. But quantitatively speaking, what is the probability that Bob will persist to win?
264
+
265
+ \subsubsection{A Naïve Frequentist Approach\label{a-naive-frequentist-approach}}
266
+
267
+
268
+ Someone following a classical frequentist approach might reason as follows:
269
+
270
+ To determine the result, we need to estimate the location of the marker. We will quantify this marker placement as a probability $p$ that any given roll lands in Alice's favor. Because five balls out of eight fell on Alice's side of the marker, we compute the maximum likelihood estimate of $p$, given by:\begin{equation*}
271
+ \hat{p} = 5/8,
272
+ \end{equation*}a result which follows in a straightforward manner from the binomial likelihood. Assuming this maximum likelihood probability, we can compute the probability that Bob will win, which requires him to get a point in each of the next three rolls. This is given by:\begin{equation*}
273
+ P(B) = (1 - \hat{p})^3
274
+ \end{equation*}Thus, we find that the probability of Bob winning is 0.053, or odds against Bob winning of 18 to 1.
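+ This naive estimate takes only a couple of lines to reproduce (a sketch):
+ \begin{verbatim}
+ >>> p_hat = 5. / 8.
+ >>> P_B = (1 - p_hat) ** 3
+ >>> print(P_B)   # 0.052734375, about 18 to 1 against Bob
+ \end{verbatim}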
275
+
276
+ \subsubsection{A Bayesian Approach\label{a-bayesian-approach}}
277
+
278
+
279
+ A Bayesian approach to this problem involves \emph{marginalizing} (i.e. integrating) over the unknown $p$ so that, assuming the prior is accurate, our result is agnostic to its actual value. In this vein, we will consider the following quantities:\begin{itemize}
280
+
281
+ \item
282
+
283
+ $B$ = Bob Wins
284
+ \item
285
+
286
+ $D$ = observed data, i.e. $D = (n_A, n_B) = (5, 3)$
287
+ \item
288
+
289
+ $p$ = unknown probability that a ball lands on Alice's side during the current game
290
+ \end{itemize}
291
+
292
+
293
+ We want to compute $P(B|D)$; that is, the probability that Bob wins given the observation that Alice currently has five points to Bob's three. A Bayesian would recognize that this expression is a \emph{marginal probability} which can be computed by integrating over the joint distribution $P(B,p|D)$:\begin{equation*}
294
+ P(B|D) \equiv \int_{-\infty}^\infty P(B,p|D) {\mathrm d}p
295
+ \end{equation*}This identity follows from the definition of conditional probability, and the law of total probability: that is, it is a fundamental consequence of probability axioms and will always be true. Even a frequentist would recognize this; they would simply disagree with the interpretation of $P(p)$ as being a measure of uncertainty of knowledge of the parameter $p$.
296
+
297
+ To compute this result, we will manipulate the above expression for $P(B|D)$ until we can express it in terms of other quantities that we can compute.
298
+
299
+ We start by applying the definition of conditional probability to expand the term $P(B,p|D)$:\begin{equation*}
300
+ P(B|D) = \int P(B|p, D) P(p|D) dp
301
+ \end{equation*}Next we use Bayes' rule to rewrite $P(p|D)$:\begin{equation*}
302
+ P(B|D) = \int P(B|p, D) \frac{P(D|p)P(p)}{P(D)} dp
303
+ \end{equation*}Finally, using the same probability identity we started with, we can expand $P(D)$ in the denominator to find:\begin{equation*}
304
+ P(B|D) = \frac{\int P(B|p,D) P(D|p) P(p) dp}{\int P(D|p)P(p) dp}
305
+ \end{equation*}Now the desired probability is expressed in terms of three quantities that we can compute:\begin{itemize}
306
+
307
+ \item
308
+
309
+ $P(B|p,D)$: This term is proportional to the frequentist likelihood we used above. In words: given a marker placement $p$ and Alice's 5 wins to Bob's 3, what is the probability that Bob will go on to six wins? Bob needs three wins in a row, i.e. $P(B|p,D) = (1 - p) ^ 3$.
310
+ \item
311
+
312
+ $P(D|p)$: this is another easy-to-compute term. In words: given a probability $p$, what is the likelihood of exactly 5 positive outcomes out of eight trials? The answer comes from the Binomial distribution: $P(D|p) \propto p^5 (1-p)^3$
313
+ \item
314
+
315
+ $P(p)$: this is our prior on the probability $p$. By the problem definition, we can assume that $p$ is evenly drawn between 0 and 1. That is, $P(p) \propto 1$ for $0 \le p \le 1$.
316
+ \end{itemize}
317
+
318
+
319
+ Putting this all together and simplifying gives\begin{equation*}
320
+ P(B|D) = \frac{\int_0^1 (1 - p)^6 p^5 dp}{\int_0^1 (1 - p)^3 p^5 dp}.
321
+ \end{equation*}These integrals are instances of the beta function, so we can quickly evaluate the result using scipy:\vspace{1mm}
322
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
323
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{k+kn}{from} \PY{n+nn}{scipy.special} \PY{k+kn}{import} \PY{n}{beta}
324
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{P\PYZus{}B\PYZus{}D} \PY{o}{=} \PY{n}{beta}\PY{p}{(}\PY{l+m+mi}{6}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{5}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{/} \PY{n}{beta}\PY{p}{(}\PY{l+m+mi}{3}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{5}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}
325
+ \end{Verbatim}
326
+ \vspace{1mm}
327
+ This gives $P(B|D) = 0.091$, or odds of 10 to 1 against Bob winning.
328
+
329
+ \subsubsection{Discussion\label{discussion}}
330
+
331
+
332
+ The Bayesian approach gives odds of 10 to 1 against Bob, while the naïve frequentist approach gives odds of 18 to 1 against Bob. So which one is correct?
333
+
334
+ For a simple problem like this, we can answer this question empirically by simulating a large number of games and counting the fraction of suitable games which Bob goes on to win. This can be coded in a couple dozen lines of Python (see part II of \cite{VanderPlas2014}). The result of such a simulation confirms the Bayesian result: 10 to 1 against Bob winning.
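+ A compact version of such a simulation (a sketch along these lines, not the author's exact code) is:
+ \begin{verbatim}
+ import numpy as np
+ np.random.seed(0)
+ 
+ n_games = 100000
+ p = np.random.random(n_games)                 # marker placement per game
+ rolls = np.random.random((11, n_games)) < p   # True where Alice gets the point
+ good = rolls[:8].sum(0) == 5                  # games where Alice leads 5 to 3
+ bob_wins = (~rolls[8:]).all(0)                # Bob takes the next three rolls
+ print(bob_wins[good].mean())                  # ~0.09, about 10 to 1 against
+ \end{verbatim}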
335
+
336
+ So what is the takeaway: is frequentism wrong? Not necessarily: in this case, the incorrect result is more a matter of the approach being ``naïve'' than it being ``frequentist''. The approach above does not consider how $p$ may vary. There exist frequentist methods that can address this by, e.g. applying a transformation and conditioning of the data to isolate dependence on $p$, or by performing a Bayesian-like integral over the sampling distribution of the frequentist estimator $\hat{p}$.
337
+
338
+ Another potential frequentist response is that the question itself is posed in a way that does not lend itself to the classical, frequentist approach. A frequentist might instead hope to give the answer in terms of null tests or confidence intervals: that is, they might devise a procedure to construct limits which would provably bound the correct answer in $100\times(1 - \alpha)$ percent of similar trials, for some value of $\alpha$ – say, 0.05. We will discuss the meaning of such confidence intervals below.
339
+
340
+ There is one clear common point of these two frequentist responses: both require some degree of effort and/or special expertise in classical methods; perhaps a suitable frequentist approach would be immediately obvious to an expert statistician, but is not particularly obvious to a statistical lay-person. In this sense, it could be argued that for a problem such as this (i.e. with a well-motivated prior), Bayesianism provides a more natural framework for handling nuisance parameters: by simple algebraic manipulation of a few well-known axioms of probability interpreted in a Bayesian sense, we straightforwardly arrive at the correct answer without need for other special statistical expertise.
341
+
342
+ \subsection{Confidence vs. Credibility: Jaynes' Truncated Exponential\label{confidence-vs-credibility-jaynes-truncated-exponential}}
343
+
344
+
345
+ A second major consequence of the philosophical difference between frequentism and Bayesianism is in the handling of uncertainty, exemplified by the standard tools of each method: frequentist confidence intervals (CIs) and Bayesian credible regions (CRs). Despite their apparent similarity, the two approaches are fundamentally different. Both are statements of probability, but the probability refers to different aspects of the computed bounds. For example, when constructing a standard 95\% bound about a parameter $\theta$:\begin{itemize}
346
+
347
+ \item
348
+
349
+ A Bayesian would say: ``Given our observed data, there is a 95\% probability that the true value of $\theta$ lies within the credible region''.
350
+ \item
351
+
352
+ A frequentist would say: ``If this experiment is repeated many times, in 95\% of these cases the computed confidence interval will contain the true $\theta$.''\DUfootnotemark{id13}{wasserman-note}{5}
353
+ \end{itemize}
354
+ \DUfootnotetext{wasserman-note}{id13}{5}{
355
+ \cite{Wasserman2004}, however, notes on p. 92 that we need not consider repetitions of the same experiment; it's sufficient to consider repetitions of any correctly-performed frequentist procedure.}
356
+
357
+
358
+ Notice the subtle difference: the Bayesian makes a statement of probability about the \emph{parameter value} given a \emph{fixed credible region}. The frequentist makes a statement of probability about the \emph{confidence interval itself} given a \emph{fixed parameter value}. This distinction follows straightforwardly from the definition of probability discussed above: the Bayesian probability is a statement of degree of knowledge about a parameter; the frequentist probability is a statement of long-term limiting frequency of quantities (such as the CI) derived from the data.
359
+
360
+ This difference must necessarily affect our interpretation of results. For example, it is common in scientific literature to see it claimed that it is 95\% certain that an unknown parameter lies within a given 95\% CI, but this is not the case! This is erroneously applying the Bayesian interpretation to a frequentist construction. This frequentist oversight can perhaps be forgiven, as under most circumstances (such as the simple flux measurement example above), the Bayesian CR and frequentist CI will more-or-less overlap. But, as we will see below, this overlap cannot always be assumed, especially in the case of non-Gaussian distributions constrained by few data points. As a result, this common misinterpretation of the frequentist CI can lead to dangerously erroneous conclusions.
361
+
362
+ To demonstrate a situation in which the frequentist confidence interval and the Bayesian credibility region do not overlap, let us turn to an example given by E.T. Jaynes, a 20th century physicist who wrote extensively on statistical inference. In his words, consider a device that\begin{quotation}\begin{quote}
363
+
364
+
365
+ ``...will operate without failure for a time $\theta$ because of a protective chemical inhibitor injected into it; but at time $\theta$ the supply of the chemical is exhausted, and failures then commence, following the exponential failure law. It is not feasible to observe the depletion of this inhibitor directly; one can observe only the resulting failures. From data on actual failure times, estimate the time $\theta$ of guaranteed safe operation...'' \cite{Jaynes1976}
366
+ \end{quote}
367
+ \end{quotation}
368
+
369
+ Essentially, we have data $D$ drawn from the model:\begin{equation*}
370
+ P(x|\theta) = \left\{
371
+ \begin{array}{lll}
372
+ \exp(\theta - x) &,& x > \theta\\
373
+ 0 &,& x < \theta
374
+ \end{array}
375
+ \right\}
376
+ \end{equation*}where $p(x|\theta)$ gives the probability of failure at time $x$, given an inhibitor which lasts for a time $\theta$. We observe some failure times, say $D = \{10, 12, 15\}$, and ask for 95\% uncertainty bounds on the value of $\theta$.
377
+
378
+ First, let's think about what common-sense would tell us. Given the model, an event can only happen after a time $\theta$. Turning this around tells us that the upper-bound for $\theta$ must be $\min(D)$. So, for our particular data, we would immediately write $\theta \le 10$. With this in mind, let's explore how a frequentist and a Bayesian approach compare to this observation.
379
+
380
+ \subsubsection{Truncated Exponential: A Frequentist Approach\label{truncated-exponential-a-frequentist-approach}}
381
+
382
+
383
+ In the frequentist paradigm, we'd like to compute a confidence interval on the value of $\theta$. We might start by observing that the population mean is given by\begin{equation*}
384
+ E(x) = \int_0^\infty xp(x)dx = \theta + 1.
385
+ \end{equation*}So, using the sample mean as the point estimate of $E(x)$, we have an unbiased estimator for $\theta$ given by\begin{equation*}
386
+ \hat{\theta} = \frac{1}{N} \sum_{i=1}^N x_i - 1.
387
+ \end{equation*}In the large-$N$ limit, the central limit theorem tells us that the sampling distribution is normal with standard deviation given by the standard error of the mean: $\sigma_{\hat{\theta}}^2 = 1/N$, and we can write the 95\% (i.e. $2\sigma$) confidence interval as\begin{equation*}
388
+ CI_{\rm large~N} = \left(\hat{\theta} - 2 N^{-1/2},~\hat{\theta} + 2 N^{-1/2}\right)
389
+ \end{equation*}For our particular observed data, this gives a confidence interval around our unbiased estimator of $CI(\theta) = (10.2, 12.5)$, entirely above our common-sense bound of $\theta < 10$! We might hope that this discrepancy is due to our use of the large-$N$ approximation with a paltry $N=3$ samples. A more careful treatment of the problem (See \cite{Jaynes1976} or part III of \cite{VanderPlas2014}) gives the exact confidence interval $(10.2, 12.2)$: the 95\% confidence interval entirely excludes the sensible bound $\theta < 10$!
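+ The large-$N$ interval itself is easy to reproduce numerically (a sketch):
+ \begin{verbatim}
+ >>> import numpy as np
+ >>> D = np.array([10., 12., 15.])
+ >>> theta_hat = D.mean() - 1          # unbiased estimator, here 11.33
+ >>> CI = (theta_hat - 2 / np.sqrt(3), theta_hat + 2 / np.sqrt(3))
+ >>> print(CI)                         # approximately (10.2, 12.5)
+ \end{verbatim}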
390
+
391
+ \subsubsection{Truncated Exponential: A Bayesian Approach\label{truncated-exponential-a-bayesian-approach}}
392
+
393
+
394
+ A Bayesian approach to the problem starts with Bayes' rule:\begin{equation*}
395
+ P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)}.
396
+ \end{equation*}We use the likelihood given by\begin{equation*}
397
+ P(D|\theta) \propto \prod_{i=1}^N P(x_i|\theta)
398
+ \end{equation*}and, in the absence of other information, use an uninformative flat prior on $\theta$ to find\begin{equation*}
399
+ P(\theta|D) \propto \left\{
400
+ \begin{array}{lll}
401
+ N\exp\left[N(\theta - \min(D))\right] &,& \theta < \min(D)\\
402
+ 0 &,& \theta > \min(D)
403
+ \end{array}
404
+ \right\}
405
+ \end{equation*}where $\min(D)$ is the smallest value in the data $D$, which enters because of the truncation of $P(x_i|\theta)$. Because $P(\theta|D)$ increases exponentially up to the cutoff, the shortest 95\% credibility interval $(\theta_1, \theta_2)$ will be given by $\theta_2 = \min(D)$, and $\theta_1$ given by the solution to the equation\begin{equation*}
406
+ \int_{\theta_1}^{\theta_2} P(\theta|D){\rm d}\theta = f
407
+ \end{equation*}where $f = 0.95$ is the desired credibility level; this has the solution\begin{equation*}
408
+ \theta_1 = \theta_2 + \frac{1}{N}\ln\left[1 - f(1 - e^{-N\theta_2})\right].
409
+ \end{equation*}For our particular data, the Bayesian credible region is\begin{equation*}
410
+ CR(\theta) = (9.0, 10.0)
411
+ \end{equation*}which agrees with our common-sense bound.
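+ Evaluating this expression for our data takes only a few lines (a sketch, with $f = 0.95$):
+ \begin{verbatim}
+ >>> import numpy as np
+ >>> D = np.array([10., 12., 15.])
+ >>> N, f = len(D), 0.95
+ >>> theta2 = D.min()
+ >>> theta1 = theta2 + np.log(1 - f * (1 - np.exp(-N * theta2))) / N
+ >>> print(theta1, theta2)             # approximately (9.0, 10.0)
+ \end{verbatim}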
412
+
413
+ \subsubsection{Discussion\label{id18}}
414
+
415
+
416
+ Why do the frequentist CI and Bayesian CR give such different results? The reason goes back to the definitions of the CI and CR, and to the fact that \emph{the two approaches are answering different questions}. The Bayesian CR answers a question about the value of $\theta$ itself (the probability that the parameter is in the fixed CR), while the frequentist CI answers a question about the procedure used to construct the CI (the probability that any potential CI will contain the fixed parameter).
417
+
418
+ Using Monte Carlo simulations, it is possible to confirm that both the above results correctly answer their respective questions (see \cite{VanderPlas2014}, III). In particular, 95\% of frequentist CIs constructed using data drawn from this model in fact contain the true $\theta$. Our particular data are simply among the unhappy 5\% which the confidence interval misses. But this makes clear the danger of misapplying the Bayesian interpretation to a CI: this particular CI is not 95\% likely to contain the true value of $\theta$; it is in fact 0\% likely!
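+ A sketch of such a coverage check (here using the approximate large-$N$ interval from above, with many datasets simulated from a known $\theta$) gives an empirical coverage near the nominal 95\%:
+ \begin{verbatim}
+ import numpy as np
+ np.random.seed(0)
+ 
+ theta_true, N, n_trials = 10.0, 3, 100000
+ x = theta_true + np.random.exponential(size=(n_trials, N))
+ theta_hat = x.mean(axis=1) - 1
+ lo, hi = theta_hat - 2 / np.sqrt(N), theta_hat + 2 / np.sqrt(N)
+ print(np.mean((lo < theta_true) & (theta_true < hi)))  # close to 0.95
+ \end{verbatim}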
419
+
420
+ This shows that when using frequentist methods on fixed data, we must carefully keep in mind what question frequentism is answering. Frequentism does not seek a \emph{probabilistic statement about a fixed interval} as the Bayesian approach does; it instead seeks probabilistic statements about an \emph{ensemble of constructed intervals}, with the particular computed interval just a single draw from among them. Despite this, it is common to see a 95\% confidence interval interpreted in the Bayesian sense: as a fixed interval that the parameter is expected to be found in 95\% of the time. As seen clearly here, this interpretation is flawed, and should be carefully avoided.
421
+
422
+ Though we used a correct unbiased frequentist estimator above, it should be emphasized that the unbiased estimator is not always optimal for any given problem: especially one with small $N$ and/or censored models; see, e.g. \cite{Hardy2003}. Other frequentist estimators are available: for example, if the (biased) maximum likelihood estimator were used here instead, the confidence interval would be very similar to the Bayesian credible region derived above. Regardless of the choice of frequentist estimator, however, the correct interpretation of the CI is the same: it gives probabilities concerning the \emph{recipe for constructing limits}, not for the \emph{parameter values given the observed data}. For sensible parameter constraints from a single dataset, Bayesianism may be preferred, especially if the difficulties of uninformative priors can be avoided through the use of true prior information.
423
+
424
+ \subsection{Bayesianism in Practice: Markov Chain Monte Carlo\label{bayesianism-in-practice-markov-chain-monte-carlo}}
425
+
426
+
427
+ Though Bayesianism has some nice features in theory, in practice it can be extremely computationally intensive: while simple problems like those examined above lend themselves to relatively easy analytic integration, real-life Bayesian computations often require numerical integration of high-dimensional parameter spaces.
428
+
429
+ A turning-point in practical Bayesian computation was the development and application of sampling methods such as Markov Chain Monte Carlo (MCMC). MCMC is a class of algorithms which can efficiently characterize even high-dimensional posterior distributions through drawing of randomized samples such that the points are distributed according to the posterior. A detailed discussion of MCMC is well beyond the scope of this paper; an excellent introduction can be found in \cite{Gelman2004}. Below, we will propose a straightforward model and compare a standard frequentist approach with three MCMC implementations available in Python.
430
+
431
+ \subsection{Application: A Simple Linear Model\label{application-a-simple-linear-model}}
432
+
433
+
434
+ As an example of a more realistic data-driven analysis, let's consider a simple three-parameter linear model which fits a straight line to data with unknown errors. The parameters will be the y-intercept $\alpha$, the slope $\beta$, and the (unknown) normal scatter $\sigma$ about the line.
435
+
436
+ For data $D = \{x_i, y_i\}$, the model is\begin{equation*}
437
+ \hat{y}(x_i|\alpha,\beta) = \alpha + \beta x_i,
438
+ \end{equation*}and the likelihood is the product of the Gaussian distribution for each point:\begin{equation*}
439
+ \mathcal{L}(D|\alpha,\beta,\sigma) = (2\pi\sigma^2)^{-N/2} \prod_{i=1}^N \exp\left[\frac{-[y_i - \hat{y}(x_i|\alpha, \beta)]^2}{2\sigma^2}\right].
440
+ \end{equation*}We will evaluate this model on the following data set:\vspace{1mm}
441
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
442
+ \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
443
+ \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{c}{\PYZsh{} for repeatability}
444
+ \PY{n}{theta\PYZus{}true} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mf}{0.5}\PY{p}{)}
445
+ \PY{n}{xdata} \PY{o}{=} \PY{l+m+mi}{100} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{random}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{)}
446
+ \PY{n}{ydata} \PY{o}{=} \PY{n}{theta\PYZus{}true}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{+} \PY{n}{theta\PYZus{}true}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{*} \PY{n}{xdata}
447
+ \PY{n}{ydata} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{normal}\PY{p}{(}\PY{n}{ydata}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{)} \PY{c}{\PYZsh{} add error}
448
+ \end{Verbatim}
449
+ \vspace{1mm}
450
+ Below we will consider a frequentist solution to this problem computed with the statsmodels package\DUfootnotemark{id22}{statsmodels}{6}, as well as a Bayesian solution computed with several MCMC implementations in Python: emcee\DUfootnotemark{id23}{emcee}{7}, PyMC\DUfootnotemark{id24}{pymc}{8}, and PyStan\DUfootnotemark{id25}{pystan}{9}. A full discussion of the strengths and weaknesses of the various MCMC algorithms used by the packages is out of scope for this paper, as is a full discussion of performance benchmarks for the packages. Rather, the purpose of this section is to show side-by-side examples of the Python APIs of the packages. First, though, we will consider a frequentist solution.\DUfootnotetext{statsmodels}{id22}{6}{\phantomsection\label{statsmodels}
451
+ statsmodels: Statistics in Python \url{http://statsmodels.sourceforge.net/}}
452
+ \DUfootnotetext{emcee}{id23}{7}{\phantomsection\label{emcee}
453
+ emcee: The MCMC Hammer \url{http://dan.iel.fm/emcee/}}
454
+ \DUfootnotetext{pymc}{id24}{8}{\phantomsection\label{pymc}
455
+ PyMC: Bayesian Inference in Python \url{http://pymc-devs.github.io/pymc/}}
456
+ \DUfootnotetext{pystan}{id25}{9}{\phantomsection\label{pystan}
457
+ PyStan: The Python Interface to Stan \url{https://pystan.readthedocs.org/}}
458
+
459
+
460
+ \subsubsection{Frequentist Solution\label{frequentist-solution}}
461
+
462
+
463
+ A frequentist solution can be found by computing the maximum likelihood point estimate. For standard linear problems such as this, the result can be computed using efficient linear algebra. If we define the \emph{parameter vector}, $\theta = [\alpha~\beta]^T$; the \emph{response vector}, $Y = [y_1~y_2~y_3~\cdots~y_N]^T$; and the \emph{design matrix},\begin{equation*}
464
+ X = \left[
465
+ \begin{array}{lllll}
466
+ 1 & 1 & 1 &\cdots & 1\\
467
+ x_1 & x_2 & x_3 & \cdots & x_N
468
+ \end{array}\right]^T,
469
+ \end{equation*}it can be shown that the maximum likelihood solution is\begin{equation*}
470
+ \hat{\theta} = (X^TX)^{-1}(X^T Y).
471
+ \end{equation*}The confidence interval around this value is an ellipse in parameter space defined by the following matrix:\begin{equation*}
472
+ \Sigma_{\hat{\theta}}
473
+ \equiv \left[
474
+ \begin{array}{ll}
475
+ \sigma_\alpha^2 & \sigma_{\alpha\beta} \\
476
+ \sigma_{\alpha\beta} & \sigma_\beta^2
477
+ \end{array}
478
+ \right]
479
+ = \sigma^2 (X^TX)^{-1}.
480
+ \end{equation*}Here $\sigma$ is our unknown error term; it can be estimated based on the variance of the residuals about the fit. The off-diagonal elements of $\Sigma_{\hat{\theta}}$ are the correlated uncertainty between the estimates. In code, the computation looks like this:\vspace{1mm}
481
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
482
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{X} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{ones\PYZus{}like}\PY{p}{(}\PY{n}{xdata}\PY{p}{)}\PY{p}{,} \PY{n}{xdata}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
483
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{theta\PYZus{}hat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{solve}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{X}\PY{p}{)}\PY{p}{,}
484
+ \PY{o}{.}\PY{o}{.}\PY{o}{.} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{ydata}\PY{p}{)}\PY{p}{)}
485
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{y\PYZus{}hat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X}\PY{p}{,} \PY{n}{theta\PYZus{}hat}\PY{p}{)}
486
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{sigma\PYZus{}hat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{ydata} \PY{o}{\PYZhy{}} \PY{n}{y\PYZus{}hat}\PY{p}{)}
487
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{Sigma} \PY{o}{=} \PY{n}{sigma\PYZus{}hat} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2} \PY{o}{*}\PYZbs{}
488
+ \PY{o}{.}\PY{o}{.}\PY{o}{.} \PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{inv}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{X}\PY{p}{)}\PY{p}{)}
489
+ \end{Verbatim}
490
+ \vspace{1mm}
491
+ The $1\sigma$ and $2\sigma$ results are shown by the black ellipses in Figure \DUrole{ref}{fig1}.
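+ As an aside (not part of the original listing), the same point estimate can also be obtained without explicitly inverting $X^TX$, using NumPy's least-squares solver; a minimal sketch:\vspace{1mm}
+ \begin{Verbatim}[fontsize=\footnotesize]
+ # equivalent, more numerically stable point estimate
+ theta_hat = np.linalg.lstsq(X, ydata)[0]
+ \end{Verbatim}
+ \vspace{1mm}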
492
+
493
+ In practice, the frequentist approach often relies on many more statistical diagnostics beyond the maximum likelihood and confidence interval. These can be computed quickly using convenience routines built into the \texttt{statsmodels} package \cite{Seabold2010}. For this problem, it can be used as follows:\vspace{1mm}
494
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
495
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{k+kn}{import} \PY{n+nn}{statsmodels.api} \PY{k+kn}{as} \PY{n+nn}{sm} \PY{c}{\PYZsh{} version 0.5}
496
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{X} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{add\PYZus{}constant}\PY{p}{(}\PY{n}{xdata}\PY{p}{)}
497
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{result} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{OLS}\PY{p}{(}\PY{n}{ydata}\PY{p}{,} \PY{n}{X}\PY{p}{)}\PY{o}{.}\PY{n}{fit}\PY{p}{(}\PY{p}{)}
498
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{theta\PYZus{}hat} \PY{o}{=} \PY{n}{result}\PY{o}{.}\PY{n}{params}
499
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{n}{Sigma} \PY{o}{=} \PY{n}{result}\PY{o}{.}\PY{n}{cov\PYZus{}params}\PY{p}{(}\PY{p}{)}
500
+ \PY{o}{\PYZgt{}\PYZgt{}}\PY{o}{\PYZgt{}} \PY{k}{print}\PY{p}{(}\PY{n}{result}\PY{o}{.}\PY{n}{summary2}\PY{p}{(}\PY{p}{)}\PY{p}{)}
501
+ \end{Verbatim}
502
+ \vspace{1mm}
503
+ \vspace{1mm}
504
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
505
+ ====================================================
506
+ Model: OLS AIC: 147.773
507
+ Dependent Variable: y BIC: 149.765
508
+ No. Observations: 20 Log\PYZhy{}Likelihood: \PYZhy{}71.887
509
+ Df Model: 1 F\PYZhy{}statistic: 41.97
510
+ Df Residuals: 18 Prob (F\PYZhy{}statistic): 4.3e\PYZhy{}06
511
+ R\PYZhy{}squared: 0.70 Scale: 86.157
512
+ Adj. R\PYZhy{}squared: 0.68
513
+ \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}
514
+ Coef. Std.Err. t P\PYZgt{}|t| [0.025 0.975]
515
+ \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}
516
+ const 24.6361 3.7871 6.5053 0.0000 16.6797 32.592
517
+ x1 0.4483 0.0692 6.4782 0.0000 0.3029 0.593
518
+ \PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}
519
+ Omnibus: 1.996 Durbin\PYZhy{}Watson: 2.75
520
+ Prob(Omnibus): 0.369 Jarque\PYZhy{}Bera (JB): 1.63
521
+ Skew: 0.651 Prob(JB): 0.44
522
+ Kurtosis: 2.486 Condition No.: 100
523
+ ====================================================
524
+ \end{Verbatim}
525
+ \vspace{1mm}
526
+ The summary output includes many advanced statistics which we don't have space to fully discuss here. For a trained practitioner these diagnostics are very useful for evaluating and comparing fits, especially for more complicated models; see \cite{Wasserman2004} and the statsmodels project documentation for more details.
527
+
528
+ \subsubsection{Bayesian Solution: Overview\label{bayesian-solution-overview}}
529
+
530
+
531
+ The Bayesian result is encapsulated in the posterior, which is proportional to the product of the likelihood and the prior; in this case we must be aware that a flat prior is not uninformative: because of the geometry of the slope parameter, a flat prior on $\beta$ assigns far more probability to steep slopes than to shallow ones. One might imagine addressing this by transforming variables, e.g. placing a flat prior on the angle the line makes with the x-axis rather than on the slope itself. It turns out that the appropriate change of variables can be determined much more rigorously by following arguments first developed by \cite{Jeffreys1946}.
532
+
533
+ Our model is given by $y = \alpha + \beta x$ with probability element $P(\alpha, \beta)d\alpha d\beta$. By symmetry, we could just as well have written $x = \alpha^\prime + \beta^\prime y$ with probability element $Q(\alpha^\prime, \beta^\prime)d\alpha^\prime d\beta^\prime$. It then follows that $(\alpha^\prime, \beta^\prime) = (-\beta^{-1}\alpha, \beta^{-1})$. Computing the determinant of the Jacobian of this transformation, we can then show that $Q(\alpha^\prime, \beta^\prime) = \beta^3 P(\alpha, \beta)$. The symmetry of the problem requires equivalence of $P$ and $Q$, or $\beta^3 P(\alpha,\beta) = P(-\beta^{-1}\alpha, \beta^{-1})$, which is a functional equation satisfied by\begin{equation*}
534
+ P(\alpha, \beta) \propto (1 + \beta^2)^{-3/2}.
535
+ \end{equation*}This turns out to be equivalent to choosing flat priors on the alternate variables $(\theta, \alpha_\perp) = (\tan^{-1}\beta, \alpha\cos\theta)$.
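+ (For completeness, the intermediate Jacobian step, written out here for clarity, is
+ \begin{equation*}
+ \left| \frac{\partial(\alpha^\prime, \beta^\prime)}{\partial(\alpha, \beta)} \right|
+ = \left|\det\left[
+ \begin{array}{ll}
+ -\beta^{-1} & \alpha\beta^{-2} \\
+ 0 & -\beta^{-2}
+ \end{array}\right]\right| = \beta^{-3},
+ \end{equation*}so that $d\alpha^\prime d\beta^\prime = \beta^{-3}\, d\alpha\, d\beta$; equating probability elements then gives $Q(\alpha^\prime, \beta^\prime) = \beta^3 P(\alpha, \beta)$ as stated above.)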
536
+
537
+ Through similar arguments based on the invariance of $\sigma$ under a change of units, we can show that\begin{equation*}
538
+ P(\sigma) \propto 1/\sigma,
539
+ \end{equation*}which is most commonly known as the \emph{Jeffreys Prior} for scale factors after \cite{Jeffreys1946}, and is equivalent to a flat prior on $\log\sigma$. Putting these together, we find the following uninformative prior for our linear regression problem:\begin{equation*}
540
+ P(\alpha,\beta,\sigma) \propto \frac{1}{\sigma}(1 + \beta^2)^{-3/2}.
541
+ \end{equation*}With this prior and the above likelihood, we are prepared to numerically evaluate the posterior via MCMC.
542
+
543
+ \subsubsection{Solution with emcee\label{solution-with-emcee}}
544
+
545
+
546
+ The emcee package \cite{ForemanMackey2013} is a lightweight pure-Python package that implements Affine Invariant Ensemble MCMC \cite{Goodman2010}, an ensemble variant of MCMC sampling. To use \texttt{emcee}, all that is required is to define a Python function representing the logarithm of the posterior. For clarity, we will factor this definition into two functions, the log-prior and the log-likelihood:\vspace{1mm}
547
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
548
+ \PY{k+kn}{import} \PY{n+nn}{emcee} \PY{c}{\PYZsh{} version 2.0}
549
+
550
+ \PY{k}{def} \PY{n+nf}{log\PYZus{}prior}\PY{p}{(}\PY{n}{theta}\PY{p}{)}\PY{p}{:}
551
+ \PY{n}{alpha}\PY{p}{,} \PY{n}{beta}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{theta}
552
+ \PY{k}{if} \PY{n}{sigma} \PY{o}{\PYZlt{}} \PY{l+m+mi}{0}\PY{p}{:}
553
+ \PY{k}{return} \PY{o}{\PYZhy{}}\PY{n}{np}\PY{o}{.}\PY{n}{inf} \PY{c}{\PYZsh{} log(0)}
554
+ \PY{k}{else}\PY{p}{:}
555
+ \PY{k}{return} \PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mf}{1.5} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{+} \PY{n}{beta}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
556
+ \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
557
+
558
+ \PY{k}{def} \PY{n+nf}{log\PYZus{}like}\PY{p}{(}\PY{n}{theta}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{)}\PY{p}{:}
559
+ \PY{n}{alpha}\PY{p}{,} \PY{n}{beta}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{theta}
560
+ \PY{n}{y\PYZus{}model} \PY{o}{=} \PY{n}{alpha} \PY{o}{+} \PY{n}{beta} \PY{o}{*} \PY{n}{x}
561
+ \PY{k}{return} \PY{o}{\PYZhy{}}\PY{l+m+mf}{0.5} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{+}
562
+ \PY{p}{(}\PY{n}{y}\PY{o}{\PYZhy{}}\PY{n}{y\PYZus{}model}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{/} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
563
+
564
+ \PY{k}{def} \PY{n+nf}{log\PYZus{}posterior}\PY{p}{(}\PY{n}{theta}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{)}\PY{p}{:}
565
+ \PY{k}{return} \PY{n}{log\PYZus{}prior}\PY{p}{(}\PY{n}{theta}\PY{p}{)} \PY{o}{+} \PY{n}{log\PYZus{}like}\PY{p}{(}\PY{n}{theta}\PY{p}{,}\PY{n}{x}\PY{p}{,}\PY{n}{y}\PY{p}{)}
566
+ \end{Verbatim}
567
+ \vspace{1mm}
568
+ Next we set up the computation. \texttt{emcee} combines multiple interacting ``walkers'', each of which results in its own Markov chain. We will also specify a burn-in period, to allow the chains to stabilize prior to drawing our final traces:\vspace{1mm}
569
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
570
+ \PY{n}{ndim} \PY{o}{=} \PY{l+m+mi}{3} \PY{c}{\PYZsh{} number of parameters in the model}
571
+ \PY{n}{nwalkers} \PY{o}{=} \PY{l+m+mi}{50} \PY{c}{\PYZsh{} number of MCMC walkers}
572
+ \PY{n}{nburn} \PY{o}{=} \PY{l+m+mi}{1000} \PY{c}{\PYZsh{} \PYZdq{}burn\PYZhy{}in\PYZdq{} to stabilize chains}
573
+ \PY{n}{nsteps} \PY{o}{=} \PY{l+m+mi}{2000} \PY{c}{\PYZsh{} number of MCMC steps to take}
574
+ \PY{n}{starting\PYZus{}guesses} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{rand}\PY{p}{(}\PY{n}{nwalkers}\PY{p}{,} \PY{n}{ndim}\PY{p}{)}
575
+ \end{Verbatim}
576
+ \vspace{1mm}
577
+ Now we call the sampler and extract the trace:\vspace{1mm}
578
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
579
+ \PY{n}{sampler} \PY{o}{=} \PY{n}{emcee}\PY{o}{.}\PY{n}{EnsembleSampler}\PY{p}{(}\PY{n}{nwalkers}\PY{p}{,} \PY{n}{ndim}\PY{p}{,}
580
+ \PY{n}{log\PYZus{}posterior}\PY{p}{,}
581
+ \PY{n}{args}\PY{o}{=}\PY{p}{[}\PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{]}\PY{p}{)}
582
+ \PY{n}{sampler}\PY{o}{.}\PY{n}{run\PYZus{}mcmc}\PY{p}{(}\PY{n}{starting\PYZus{}guesses}\PY{p}{,} \PY{n}{nsteps}\PY{p}{)}
583
+
584
+ \PY{c}{\PYZsh{} chain is of shape (nwalkers, nsteps, ndim):}
585
+ \PY{c}{\PYZsh{} discard burn\PYZhy{}in points and reshape:}
586
+ \PY{n}{trace} \PY{o}{=} \PY{n}{sampler}\PY{o}{.}\PY{n}{chain}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{nburn}\PY{p}{:}\PY{p}{,} \PY{p}{:}\PY{p}{]}
587
+ \PY{n}{trace} \PY{o}{=} \PY{n}{trace}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{ndim}\PY{p}{)}\PY{o}{.}\PY{n}{T}
588
+ \end{Verbatim}
589
+ \vspace{1mm}
590
+ The result is shown by the blue curve in Figure \DUrole{ref}{fig1}.
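+ As an illustrative follow-up (not part of the original listing), the flattened trace can be summarized directly with NumPy; for example, posterior medians and central 68\% credible intervals:\vspace{1mm}
+ \begin{Verbatim}[fontsize=\footnotesize]
+ # trace has shape (3, n_samples): rows alpha, beta, sigma
+ for name, samples in zip(['alpha', 'beta', 'sigma'], trace):
+     lo, mid, hi = np.percentile(samples, [16, 50, 84])
+     print(name, mid, (lo, hi))
+ \end{Verbatim}
+ \vspace{1mm}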
591
+
592
+ \subsubsection{Solution with PyMC\label{solution-with-pymc}}
593
+
594
+
595
+ The PyMC package \cite{Patil2010} is an MCMC implementation written in Python and Fortran. It makes use of the classic Metropolis-Hastings MCMC sampler \cite{Gelman2004}, and includes many built-in features, such as support for efficient sampling of common prior distributions. Because of this, it requires more specialized boilerplate than does emcee, but the result is a very powerful tool for flexible Bayesian inference.
596
+
597
+ The example below uses PyMC version 2.3; as of this writing, there exists an early release of version 3.0, a complete rewrite of the package with a more streamlined API and a more efficient computational backend. To use PyMC, we first define all the variables using its classes and decorators:\vspace{1mm}
598
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
599
+ \PY{k+kn}{import} \PY{n+nn}{pymc} \PY{c}{\PYZsh{} version 2.3}
600
+
601
+ \PY{n}{alpha} \PY{o}{=} \PY{n}{pymc}\PY{o}{.}\PY{n}{Uniform}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{alpha}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{100}\PY{p}{,} \PY{l+m+mi}{100}\PY{p}{)}
602
+
603
+ \PY{n+nd}{@pymc.stochastic}\PY{p}{(}\PY{n}{observed}\PY{o}{=}\PY{n+nb+bp}{False}\PY{p}{)}
604
+ \PY{k}{def} \PY{n+nf}{beta}\PY{p}{(}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{:}
605
+ \PY{k}{return} \PY{o}{\PYZhy{}}\PY{l+m+mf}{1.5} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{+} \PY{n}{value}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
606
+
607
+ \PY{n+nd}{@pymc.stochastic}\PY{p}{(}\PY{n}{observed}\PY{o}{=}\PY{n+nb+bp}{False}\PY{p}{)}
608
+ \PY{k}{def} \PY{n+nf}{sigma}\PY{p}{(}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}
609
+ \PY{k}{return} \PY{o}{\PYZhy{}}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n+nb}{abs}\PY{p}{(}\PY{n}{value}\PY{p}{)}\PY{p}{)}
610
+
611
+ \PY{c}{\PYZsh{} Define the form of the model and likelihood}
612
+ \PY{n+nd}{@pymc.deterministic}
613
+ \PY{k}{def} \PY{n+nf}{y\PYZus{}model}\PY{p}{(}\PY{n}{x}\PY{o}{=}\PY{n}{xdata}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{n}{alpha}\PY{p}{,} \PY{n}{beta}\PY{o}{=}\PY{n}{beta}\PY{p}{)}\PY{p}{:}
614
+ \PY{k}{return} \PY{n}{alpha} \PY{o}{+} \PY{n}{beta} \PY{o}{*} \PY{n}{x}
615
+
616
+ \PY{n}{y} \PY{o}{=} \PY{n}{pymc}\PY{o}{.}\PY{n}{Normal}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{y}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{mu}\PY{o}{=}\PY{n}{y\PYZus{}model}\PY{p}{,} \PY{n}{tau}\PY{o}{=}\PY{l+m+mf}{1.}\PY{o}{/}\PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,}
617
+ \PY{n}{observed}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{,} \PY{n}{value}\PY{o}{=}\PY{n}{ydata}\PY{p}{)}
618
+
619
+ \PY{c}{\PYZsh{} package the full model in a dictionary}
620
+ \PY{n}{model} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n}{alpha}\PY{o}{=}\PY{n}{alpha}\PY{p}{,} \PY{n}{beta}\PY{o}{=}\PY{n}{beta}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{n}{sigma}\PY{p}{,}
621
+ \PY{n}{y\PYZus{}model}\PY{o}{=}\PY{n}{y\PYZus{}model}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{n}{y}\PY{p}{)}
622
+ \end{Verbatim}
623
+ \vspace{1mm}
624
+ Next we run the chain and extract the trace:\vspace{1mm}
625
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
626
+ \PY{n}{S} \PY{o}{=} \PY{n}{pymc}\PY{o}{.}\PY{n}{MCMC}\PY{p}{(}\PY{n}{model}\PY{p}{)}
627
+ \PY{n}{S}\PY{o}{.}\PY{n}{sample}\PY{p}{(}\PY{n+nb}{iter}\PY{o}{=}\PY{l+m+mi}{100000}\PY{p}{,} \PY{n}{burn}\PY{o}{=}\PY{l+m+mi}{50000}\PY{p}{)}
628
+ \PY{n}{trace} \PY{o}{=} \PY{p}{[}\PY{n}{S}\PY{o}{.}\PY{n}{trace}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{alpha}\PY{l+s}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{p}{:}\PY{p}{]}\PY{p}{,} \PY{n}{S}\PY{o}{.}\PY{n}{trace}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{beta}\PY{l+s}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{p}{:}\PY{p}{]}\PY{p}{,}
629
+ \PY{n}{S}\PY{o}{.}\PY{n}{trace}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{sigma}\PY{l+s}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{p}{:}\PY{p}{]}\PY{p}{]}
630
+ \end{Verbatim}
631
+ \vspace{1mm}
632
+ The result is shown by the red curve in Figure \DUrole{ref}{fig1}.
633
+
634
+ \subsubsection{Solution with PyStan\label{solution-with-pystan}}
635
+
636
+
637
+ PyStan is the official Python interface to Stan, a probabilistic programming language implemented in C++ that uses Hamiltonian Monte Carlo with a No-U-Turn Sampler \cite{Hoffman2014}. The Stan language is specifically designed for the expression of probabilistic models; PyStan lets Stan models, specified as Python strings, be parsed, compiled, and executed by the Stan library. Because of this, PyStan is the least ``Pythonic'' of the three frameworks:\vspace{1mm}
638
+ \begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
639
+ \PY{k+kn}{import} \PY{n+nn}{pystan} \PY{c}{\PYZsh{} version 2.2}
640
+
641
+ \PY{n}{model\PYZus{}code} \PY{o}{=} \PY{l+s}{\PYZdq{}\PYZdq{}\PYZdq{}}
642
+ \PY{l+s}{data \PYZob{}}
643
+ \PY{l+s}{ int\PYZlt{}lower=0\PYZgt{} N; // number of points}
644
+ \PY{l+s}{ real x[N]; // x values}
645
+ \PY{l+s}{ real y[N]; // y values}
646
+ \PY{l+s}{\PYZcb{}}
647
+ \PY{l+s}{parameters \PYZob{}}
648
+ \PY{l+s}{ real alpha\PYZus{}perp;}
649
+ \PY{l+s}{ real\PYZlt{}lower=\PYZhy{}pi()/2, upper=pi()/2\PYZgt{} theta;}
650
+ \PY{l+s}{ real log\PYZus{}sigma;}
651
+ \PY{l+s}{\PYZcb{}}
652
+ \PY{l+s}{transformed parameters \PYZob{}}
653
+ \PY{l+s}{ real alpha;}
654
+ \PY{l+s}{ real beta;}
655
+ \PY{l+s}{ real sigma;}
656
+ \PY{l+s}{ real ymodel[N];}
657
+ \PY{l+s}{ alpha \PYZlt{}\PYZhy{} alpha\PYZus{}perp / cos(theta);}
658
+ \PY{l+s}{ beta \PYZlt{}\PYZhy{} sin(theta);}
659
+ \PY{l+s}{ sigma \PYZlt{}\PYZhy{} exp(log\PYZus{}sigma);}
660
+ \PY{l+s}{ for (j in 1:N)}
661
+ \PY{l+s}{ ymodel[j] \PYZlt{}\PYZhy{} alpha + beta * x[j];}
662
+ \PY{l+s}{ \PYZcb{}}
663
+ \PY{l+s}{model \PYZob{}}
664
+ \PY{l+s}{ y \PYZti{} normal(ymodel, sigma);}
665
+ \PY{l+s}{\PYZcb{}}
666
+ \PY{l+s}{\PYZdq{}\PYZdq{}\PYZdq{}}
667
+
668
+ \PY{c}{\PYZsh{} perform the fit \PYZam{} extract traces}
669
+ \PY{n}{data} \PY{o}{=} \PY{p}{\PYZob{}}\PY{l+s}{\PYZsq{}}\PY{l+s}{N}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{n+nb}{len}\PY{p}{(}\PY{n}{xdata}\PY{p}{)}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{x}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{n}{xdata}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{y}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{n}{ydata}\PY{p}{\PYZcb{}}
670
+ \PY{n}{fit} \PY{o}{=} \PY{n}{pystan}\PY{o}{.}\PY{n}{stan}\PY{p}{(}\PY{n}{model\PYZus{}code}\PY{o}{=}\PY{n}{model\PYZus{}code}\PY{p}{,} \PY{n}{data}\PY{o}{=}\PY{n}{data}\PY{p}{,}
671
+ \PY{n+nb}{iter}\PY{o}{=}\PY{l+m+mi}{25000}\PY{p}{,} \PY{n}{chains}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{)}
672
+ \PY{n}{tr} \PY{o}{=} \PY{n}{fit}\PY{o}{.}\PY{n}{extract}\PY{p}{(}\PY{p}{)}
673
+ \PY{n}{trace} \PY{o}{=} \PY{p}{[}\PY{n}{tr}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{alpha}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{tr}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{beta}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{tr}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{sigma}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{]}
674
+ \end{Verbatim}
675
+ \vspace{1mm}
676
+ The result is shown by the green curve in Figure \DUrole{ref}{fig1}.
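+ As a brief aside (not part of the original listing), the fit object can also be printed to obtain per-parameter posterior summaries; in PyStan 2 this includes convergence diagnostics such as $\hat{R}$ and the effective sample size:\vspace{1mm}
+ \begin{Verbatim}[fontsize=\footnotesize]
+ print(fit)  # posterior summaries, n_eff, Rhat
+ \end{Verbatim}
+ \vspace{1mm}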
677
+
678
+ \subsubsection{Comparison\label{comparison}}
679
+ \begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{figure1.png}}\caption{Comparison of model fits using frequentist maximum likelihood, and Bayesian MCMC using three Python packages: emcee, PyMC, and PyStan. \DUrole{label}{fig1}}
680
+ \end{figure}
681
+
682
+ The $1\sigma$ and $2\sigma$ posterior credible regions computed with these three packages are shown beside the corresponding frequentist confidence intervals in Figure \DUrole{ref}{fig1}. The frequentist result gives slightly tighter bounds; this is primarily due to the confidence interval being computed assuming a single maximum likelihood estimate of the unknown scatter, $\sigma$ (this is analogous to the use of the single point estimate for the nuisance parameter $p$ in the billiard game, above). This interpretation can be confirmed by plotting the Bayesian posterior conditioned on the maximum likelihood estimate $\hat{\sigma}$; this gives a credible region much closer to the frequentist confidence interval.
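+ A rough numerical check of this interpretation (an illustrative sketch, not part of the original analysis) is to keep only the posterior samples whose $\sigma$ lies near the maximum likelihood estimate and re-examine the resulting $(\alpha, \beta)$ credible region:\vspace{1mm}
+ \begin{Verbatim}[fontsize=\footnotesize]
+ # approximate conditioning on sigma near sigma_hat,
+ # using the emcee trace of shape (3, n_samples)
+ mask = np.abs(trace[2] - sigma_hat) < 0.05 * sigma_hat
+ alpha_cond, beta_cond = trace[0][mask], trace[1][mask]
+ \end{Verbatim}
+ \vspace{1mm}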
683
+
684
+ The similarity of the three MCMC results belies the differences in the algorithms used to compute them: by default, PyMC uses a Metropolis-Hastings sampler, PyStan uses a No-U-Turn Sampler (NUTS), while emcee uses an affine-invariant ensemble sampler. These approaches are known to have differing performance characteristics depending on the features of the posterior being explored. As expected for the near-Gaussian posterior used here, the three approaches give very similar results.
685
+
686
+ The main apparent difference between the packages is the Python interface. The emcee interface is perhaps the simplest, while PyMC requires more package-specific boilerplate code. PyStan is the most involved, as the model specification requires directly writing a string of Stan code.
687
+
688
+ \subsection{Conclusion\label{conclusion}}
689
+
690
+
691
+ This paper has offered a brief philosophical and practical glimpse at the differences between frequentist and Bayesian approaches to statistical analysis. These differences have their root in differing conceptions of probability: frequentists define probability as related to \emph{frequencies of repeated events}, while Bayesians define probability as a \emph{measure of uncertainty}. In practice, this means that frequentists generally quantify the properties of \emph{data-derived quantities} in light of \emph{fixed model parameters}, while Bayesians generally quantify the properties of \emph{unknown model parameters} in light of \emph{observed data}. This philosophical distinction often makes little difference in simple problems, but becomes important in more sophisticated analyses.
692
+
693
+ We first considered the case of nuisance parameters, and showed that Bayesianism offers more natural machinery to deal with nuisance parameters through \emph{marginalization}. Of course, this marginalization depends on having an accurate prior probability for the parameter being marginalized.
694
+
695
+ Next we considered the difference in the handling of uncertainty, comparing frequentist confidence intervals with Bayesian credible regions. We showed that when attempting to find a single, fixed interval bounding the true value of a parameter, the Bayesian solution answers the question that researchers most often ask. The frequentist solution can still be informative; we simply must take care to interpret the frequentist confidence interval correctly.
696
+
697
+ Finally, we combined these ideas and showed several examples of the use of frequentism and Bayesianism on a more realistic linear regression problem, using several mature packages available in the Python language ecosystem. Together, these packages offer a set of tools for statistical analysis in both the frequentist and Bayesian frameworks.
698
+
699
+ So which approach is best? That is somewhat a matter of personal ideology, but also depends on the nature of the problem at hand. Frequentist approaches are often easily computed and are well-suited to truly repeatable processes and measurements, but can hit snags with small data sets and with models that depart strongly from Gaussian assumptions. Frequentist tools for these situations do exist, but often require subtle considerations and specialized expertise. Bayesian approaches require specification of a potentially subjective prior, and often involve intensive computation via MCMC. However, they are often conceptually more straightforward, and pose results in a way that is much closer to the questions a scientist wishes to answer: i.e. how do \emph{these particular data} constrain the unknowns in a certain model? When used with a correct understanding of their application, both sets of statistical tools can be used to effectively interpret a wide variety of scientific and technical results.
700
+ \begin{thebibliography}{ForemanMackey2013}
701
+ \bibitem[Bayes1763]{Bayes1763}{
702
+
703
+ T. Bayes.
704
+ \emph{An essay towards solving a problem in the doctrine of chances}.
705
+ Philosophical Transactions of the Royal Society of London
706
+ 53(0):370-418, 1763}
707
+ \bibitem[Eddy2004]{Eddy2004}{
708
+
709
+ S.R. Eddy. \emph{What is Bayesian statistics?}.
710
+ Nature Biotechnology 22:1177-1178, 2004}
711
+ \bibitem[Evans2002]{Evans2002}{
712
+
713
+ S.N. Evans \& P.B. Stark. \emph{Inverse Problems as Statistics}.
714
+ Mathematics Statistics Library, 609, 2002.}
715
+ \bibitem[ForemanMackey2013]{ForemanMackey2013}{
716
+
717
+ D. Foreman-Mackey, D.W. Hogg, D. Lang, J. Goodman.
718
+ \emph{emcee: the MCMC Hammer}. PASP 125(925):306-312, 2013}
719
+ \bibitem[Gelman2004]{Gelman2004}{
720
+
721
+ A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin.
722
+ \emph{Bayesian Data Analysis, Second Edition.}
723
+ Chapman and Hall/CRC, Boca Raton, FL, 2004.}
724
+ \bibitem[Goodman2010]{Goodman2010}{
725
+
726
+ J. Goodman \& J. Weare.
727
+ \emph{Ensemble Samplers with Affine Invariance}.
728
+ Comm. in Applied Mathematics and
729
+ Computational Science 5(1):65-80, 2010.}
730
+ \bibitem[Hardy2003]{Hardy2003}{
731
+
732
+ M. Hardy. \emph{An illuminating counterexample}.
733
+ Am. Math. Monthly 110:234–238, 2003.}
734
+ \bibitem[Hoffman2014]{Hoffman2014}{
735
+
736
+ M.D. Hoffman \& A. Gelman.
737
+ \emph{The No-U-Turn Sampler: Adaptively Setting Path Lengths
738
+ in Hamiltonian Monte Carlo}. JMLR, submitted, 2014.}
739
+ \bibitem[Jaynes1976]{Jaynes1976}{
740
+
741
+ E.T. Jaynes. \emph{Confidence Intervals vs Bayesian Intervals (1976)}
742
+ Papers on Probability, Statistics and Statistical Physics
743
+ Synthese Library 158:149, 1989}
744
+ \bibitem[Jeffreys1946]{Jeffreys1946}{
745
+
746
+ H. Jeffreys \emph{An Invariant Form for the Prior Probability in Estimation Problems}.
747
+ Proc. of the Royal Society of London. Series A
748
+ 186(1007): 453, 1946}
749
+ \bibitem[Patil2010]{Patil2010}{
750
+
751
+ A. Patil, D. Huard, C.J. Fonnesbeck.
752
+ \emph{PyMC: Bayesian Stochastic Modelling in Python}
753
+ Journal of Statistical Software, 35(4):1-81, 2010.}
754
+ \bibitem[Seabold2010]{Seabold2010}{
755
+
756
+ J.S. Seabold and J. Perktold.
757
+ \emph{Statsmodels: Econometric and Statistical Modeling with Python}
758
+ Proceedings of the 9th Python in Science Conference, 2010}
759
+ \bibitem[VanderPlas2014]{VanderPlas2014}{
760
+
761
+ J. VanderPlas. \emph{Frequentism and Bayesianism}.
762
+ Four-part series (\href{http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/}{I},
763
+ \href{http://jakevdp.github.io/blog/2014/06/06/frequentism-and-bayesianism-2-when-results-differ/}{II},
764
+ \href{http://jakevdp.github.io/blog/2014/06/12/frequentism-and-bayesianism-3-confidence-credibility/}{III},
765
+ \href{http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/}{IV}) on \emph{Pythonic Perambulations}
766
+ \url{http://jakevdp.github.io/}, 2014.}
767
+ \bibitem[Wasserman2004]{Wasserman2004}{
768
+
769
+ L. Wasserman.
770
+ \emph{All of statistics: a concise course in statistical inference}.
771
+ Springer, 2004.}
772
+ \end{thebibliography}
773
+
774
+ \end{document}
papers/1412/1412.0035.tex ADDED
@@ -0,0 +1,589 @@
 
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+ \usepackage{cvpr}
3
+ \usepackage{times}
4
+ \usepackage{graphicx}
5
+ \usepackage{amsmath}
6
+ \usepackage{amssymb}
7
+ \usepackage{xspace}
8
+ \usepackage{xcolor}
9
+ \usepackage{enumitem}
10
+ \usepackage{listings}
11
+ \usepackage{tabularx}
12
+ \usepackage{multirow}
13
+ \usepackage[export]{adjustbox}
14
+ \usepackage{tikz}
15
+ \usepackage{pgffor}
16
+ \usepackage[numbers,square,sort&compress]{natbib}
17
+ \usepackage[
18
+ pagebackref=true,
19
+ breaklinks=true,
20
+ letterpaper=true,
21
+ colorlinks,
22
+ bookmarks=false]{hyperref}
23
+
24
+ \cvprfinalcopy
25
+ \def\cvprPaperID{532}
26
+
27
+
28
+ \graphicspath{{./figures}{./}{./figures/v14}}
29
+
30
+ \setitemize{noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,nolistsep,leftmargin=*}
31
+
32
+ \renewcommand{\paragraph}[1]{\medskip\noindent{\bf #1}}
33
+
34
+ \newcommand{\oid}{{\sc Oid}\xspace}
35
+ \newcommand{\dataname}{\oid}
36
+ \definecolor{attr}{rgb}{0.85,0.88,0.90}
37
+ \definecolor{code}{rgb}{0.95,0.93,0.90}
38
+
39
+ \newcommand{\off}[1]{}
40
+ \newcommand{\annoitem}[1]{\paragraph{\colorbox{attr}{#1.}}}
41
+ \newcommand{\AP}{\operatorname{AP}}
42
+ \newcommand{\argmin}[1]{\underset{#1}{\mathrm{argmin}}}
43
+ \newcommand{\todo}[1]{\textcolor{red}{\bf {#1}}\xspace}
44
+ \newcommand{\bb}{\mathbf{b}}
45
+ \newcommand{\bx}{\mathbf{x}}
46
+ \newcommand{\by}{\mathbf{y}}
47
+ \newcommand{\bz}{\mathbf{z}}
48
+ \newcommand{\bw}{\mathbf{w}}
49
+ \newcommand{\bu}{\mathbf{u}}
50
+ \newcommand{\bg}{\mathbf{g}}
51
+ \newcommand{\be}{\mathbf{e}}
52
+ \newcommand{\tpr}{\operatorname{tpr}}
53
+ \newcommand{\tnr}{\operatorname{tnr}}
54
+ \newcommand{\pr}{\operatorname{pr}}
55
+ \newcommand{\rc}{\operatorname{rc}}
56
+ \newcommand{\real}{\mathbb{R}}
57
+ \newcommand{\attr}[1]{\operatorname{\mathtt{\color{black}#1}}}
58
+ \newcommand{\avalue}[1]{\operatorname{\mathtt{\color{blue}#1}}}
59
+ \newcommand{\red}{\bf\color{red}}
60
+
61
+ \setcounter{topnumber}{2}
62
+ \setcounter{bottomnumber}{2}
63
+ \setcounter{totalnumber}{4}
64
+ \renewcommand{\topfraction}{0.9}
65
+ \renewcommand{\bottomfraction}{0.9}
66
+ \renewcommand{\textfraction}{0.1}
67
+ \renewcommand{\floatpagefraction}{0.9}
68
+ \setlength{\textfloatsep}{5pt plus 1.0pt minus 2.0pt}
69
+ \setlength{\dbltextfloatsep}{5pt plus 1.0pt minus 2.0pt}
70
+
71
+ \makeatletter
72
+ \g@addto@macro\normalsize{\setlength\abovedisplayskip{0.7em}
73
+ \setlength\belowdisplayskip{0.7em}
74
+ \setlength\abovedisplayshortskip{0.7em}
75
+ \setlength\belowdisplayshortskip{0.7em}
76
+ }
77
+ \makeatother
78
+
79
+ \tikzset{
80
+ image label/.style={
81
+ fill=white,
82
+ text=black,
83
+ font=\tiny,
84
+ anchor=south east,
85
+ xshift=-0.1cm,
86
+ yshift=0.1cm,
87
+ at={(0,0)}
88
+ }
89
+ }
90
+ \renewcommand{\baselinestretch}{0.98}
91
+ \begin{document}
92
+ \title{Understanding Deep Image Representations by Inverting Them}
93
+ \vspace{-2em}
94
+ \author{
95
+ Aravindh Mahendran \\
96
+ University of Oxford
97
+ \and
98
+ Andrea Vedaldi \\
99
+ University of Oxford}
100
+ \maketitle
101
+
102
+
103
+ \begin{abstract}
104
+ Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system.
105
+ Nevertheless, our understanding of them remains limited.
106
+ In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself?
107
+ To answer this question we contribute a general framework to invert representations.
108
+ We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too.
109
+ We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time.
110
+ Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.
111
+ \end{abstract}
112
+
113
+
114
+
115
+
116
+
117
+
118
+
119
+
120
+
121
+
122
+ \section{Introduction}\label{s:intro}
123
+
124
+
125
+ Most image understanding and computer vision methods build on image representations such as textons~\cite{leung01representing}, histogram of oriented gradients (SIFT~\cite{lowe04distinctive} and HOG~\cite{dalal05histograms}), bag of visual words~\cite{csurka04visual}\cite{sivic03video}, sparse~\cite{yang10supervised} and local coding~\cite{wang10locality-constrained}, super vector coding~\cite{zhou10image}, VLAD~\cite{jegou10aggregating}, Fisher Vectors~\cite{perronnin06fisher}, and, lately, deep neural networks, particularly of the convolutional variety~\cite{krizhevsky12imagenet,zeiler14visualizing,sermanet14overfeat:}.
126
+ However, despite the progress in the development of visual representations, their design is still driven empirically and a good understanding of their properties is lacking.
127
+ While this is true of shallower hand-crafted features, it is even more so for the latest generation of deep representations, where millions of parameters are learned from data.
128
+
129
+ In this paper we conduct a direct analysis of representations by characterising the image information that they retain (Fig.~\ref{f:intro}).
130
+ We do so by modeling a representation as a function $\Phi(\bx)$ of the image $\bx$ and then computing an approximate inverse $\Phi^{-1}$, \emph{reconstructing $\bx$ from the code $\Phi(\bx)$}.
131
+ A common hypothesis is that representations collapse irrelevant differences in images (e.g. illumination or viewpoint), so that $\Phi$ should not be uniquely invertible.
132
+ Hence, we pose this as a reconstruction problem and find a number of possible reconstructions rather than a single one.
133
+ By doing so, we obtain insights into the invariances captured by the representation.
134
+
135
+ \begin{figure}
136
+ \begin{tikzpicture}
137
+ \node[anchor=north west,inner sep=0] at (0,0) {
138
+ \includegraphics[width=\columnwidth]{figures/v26/bfcnna3-repeats/ILSVRC2012_val_00000024/l19-recon}
139
+ };
140
+ \node[anchor=north west,inner sep=0,at={(0,0)}] (I) {
141
+ \includegraphics[width=0.3333\columnwidth]{figures/v26/bfcnna2/ILSVRC2012_val_00000024/l01-orig}
142
+ };
143
+ \end{tikzpicture}
144
+ \caption{{\bf What is encoded by a CNN?} The figure shows five possible reconstructions of the reference image obtained from the 1,000-dimensional code extracted at the penultimate layer of a reference CNN\cite{krizhevsky12imagenet} (before the softmax is applied) trained on the ImageNet data. From the viewpoint of the model, all these images are practically equivalent. This image is best viewed in color/screen.}\label{f:intro}
145
+ \end{figure}
146
+
147
+ Our contributions are as follows.
148
+ First, we propose a general method to invert representations, including SIFT, HOG, and CNNs (Sect.~\ref{s:method}).
149
+ Crucially, this method {\bf uses only information from the image representation} and a generic natural image prior, starting from random noise as the initial solution, and hence captures only the information contained in the representation itself.
150
+ We discuss and evaluate different regularization penalties as natural image priors.
151
+ Second, we show that, despite its simplicity and generality, this method recovers significantly better reconstructions from DSIFT and HOG compared to recent alternatives~\cite{vondrick13hoggles:}.
152
+ As we do so, we emphasise a number of subtle differences between these representations and their effect on invertibility.
153
+ Third, we apply the inversion technique to the analysis of recent deep CNNs, exploring their invariance by sampling possible approximate reconstructions.
154
+ We relate this to the depth of the representation, showing that the CNN gradually builds an increasing amount of invariance, layer after layer.
155
+ Fourth, we study the locality of the information stored in the representations by reconstructing images from selected groups of neurons, either spatially or by channel.
156
+
157
+ The rest of the paper is organised as follows.
158
+ Sect.~\ref{s:method} introduces the inversion method, posing this as a regularised regression problem and proposing a number of image priors to aid the reconstruction.
159
+ Sect.~\ref{s:representations} introduces various representations: HOG and DSIFT as examples of shallow representations, and state-of-the-art CNNs as an example of deep representations.
160
+ It also shows how HOG and DSIFT can be implemented as CNNs, simplifying the computation of their derivatives. Sect.~\ref{s:results-shallow} and~\ref{s:results-deep} apply the inversion technique to the analysis of respectively shallow (HOG and DSIFT) and deep (CNNs) representations.
161
+ Finally, Sect.~\ref{s:summary} summarises our findings.
162
+
163
+ We use the matconvnet toolbox~\cite{matconvnet2014vedaldi} for implementing convolutional neural networks.
164
+
165
+ \paragraph{Related work.} There is a significant amount of work in understanding representations by means of visualisations.
166
+ The works most related to ours are Weinzaepfel~\etal~\cite{weinzaepfel11reconstructing} and Vondrick~\etal~\cite{vondrick13hoggles:} which invert sparse DSIFT and HOG features respectively.
167
+ While our goal is similar to theirs, our method is substantially different from a technical viewpoint, being based on the direct solution of a regularised regression problem.
168
+ The benefit is that our technique applies equally to shallow (SIFT, HOG) and deep (CNN) representations. Compared to existing inversion techniques for dense shallow representations~\cite{vondrick13hoggles:}, it is also shown to achieve superior results, both quantitatively and qualitatively.
169
+
170
+ An interesting conclusion of~\cite{weinzaepfel11reconstructing,vondrick13hoggles:} is that, while HOG and SIFT may not be exactly invertible, they capture a significant amount of information about the image.
171
+ This is in apparent contradiction with the results of Tatu~\etal~\cite{tatu11exploring} who show that it is possible to make any two images look nearly identical in SIFT space up to the injection of adversarial noise.
172
+ A symmetric effect was demonstrated for CNNs by Szegedy~\etal~\cite{szegedy13intriguing}, where an imperceptible amount of adversarial noise suffices to change the predicted class of an image.
173
+ The apparent inconsistency is easily resolved, however, as the methods of~\cite{tatu11exploring,szegedy13intriguing} require the injection of high-pass structured noise which is very unlikely to occur in natural images.
174
+
175
+ Our work is also related to the DeConvNet method of Zeiler and Fergus~\cite{zeiler14visualizing}, who backtrack the network computations to identify which image patches are responsible for certain neural activations.
176
+ Simonyan~\etal~\cite{simonyan14deep}, however, demonstrated that DeConvNets can be interpreted as a sensitivity analysis of the network input/output relation.
177
+ A consequence is that DeConvNets do not study the problem of representation inversion in the sense adopted here, which has significant methodological consequences; for example, DeConvNets require \emph{auxiliary information} about the activations in several intermediate layers, while our inversion uses only the final image code.
178
+ In other words, DeConvNets look at \emph{how} certain network outputs are obtained, whereas we look for \emph{what} information is preserved by the network output.
179
+
180
+ The problem of inverting representations, particularly CNN-based ones, is related to the problem of inverting neural networks, which received significant attention in the past.
181
+ Algorithms similar to the back-propagation technique developed here were proposed by~\cite{williams86inverting,linden89inversion,lee94inverse,lu99inverting}, along with alternative optimisation strategies based on sampling.
182
+ However, these methods did not use natural image priors as we do, nor were they applied to the current generation of deep networks.
183
+ Other works~\cite{jensen99inversion,varkonyi-koczy05observer} specialised on inverting networks in the context of dynamical systems and will not be discussed further here.
184
+ Others~\cite{bishop95neural} proposed to learn a second neural network to act as the inverse of the original one, but this is complicated by the fact that the inverse is usually not unique.
185
+ Finally, auto-encoder architectures~\cite{hinton06reducing} train networks together with their inverses as a form of supervision; here we are interested instead in visualising feed-forward and discriminatively-trained CNNs now popular in computer vision.
186
+
187
+ \section{Inverting representations}\label{s:method}
188
+
189
+
190
+ This section introduces our method to compute an approximate inverse of an image representation.
191
+ This is formulated as the problem of finding an image whose representation best matches the one given~\cite{williams86inverting}.
192
+ Formally, given a representation function $\Phi : \real^{H\times W \times C} \rightarrow \real^d$ and a representation $\Phi_0 = \Phi(\bx_0)$ to be inverted, reconstruction finds the image $\bx\in\real^{H \times W \times C}$ that minimizes the objective:
193
+ \begin{equation}\label{e:objective}
194
+ \bx^* = \operatornamewithlimits{argmin}_{\bx\in\real^{H \times W \times C}} \ell(\Phi(\bx), \Phi_0) + \lambda \mathcal{R}(\bx)
195
+ \end{equation}
196
+ where the loss $\ell$ compares the image representation $\Phi(\bx)$ to the target one $\Phi_0$ and $\mathcal{R} : \real^{H \times W \times C} \rightarrow \real$ is a regulariser capturing a \emph{natural image prior}.
197
+
198
+ Minimising \eqref{e:objective} results in an image $\bx^*$ that ``resembles'' $\bx_0$ from the viewpoint of the representation.
199
+ While there may be no unique solution to this problem, sampling the space of possible reconstructions can be used to characterise the space of images that the representation deems to be equivalent, revealing its invariances.
200
+
201
+ We next discuss the choice of loss and regularizer.
202
+
203
+ \paragraph{Loss function.} There are many possible choices of the loss function $\ell$.
204
+ While we use the Euclidean distance:
205
+ \begin{equation}\label{e:objective2}
206
+ \ell(\Phi(\bx),\Phi_0) = \| \Phi(\bx) - \Phi_0 \|^2,
207
+ \end{equation}
208
+ it is possible to change the nature of the loss entirely, for example to optimize selected neural responses.
209
+ The latter was used in~\cite{erhan09visualizing,simonyan14deep} to generate images representative of given neurons.
210
+
211
+ \paragraph{Regularisers.} Discriminatively-trained representations may discard a significant amount of low-level image statistics as these are usually not interesting for high-level tasks.
212
+ As this information is nonetheless useful for visualization, it can be partially recovered by restricting the inversion to the subset of natural images $\mathcal{X}\subset \real^{H\times W \times C}$.
213
+ However, minimising over $\mathcal{X}$ requires addressing the challenge of modeling this set.
214
+ As a proxy one can incorporate in the reconstruction an appropriate \emph{image prior}.
215
+ Here we experiment with two such priors.
216
+ The first one is simply the $\alpha$-norm $\mathcal{R}_\alpha(\bx) = \|\bx\|_\alpha^\alpha$, where $\bx$ is the vectorised and mean-subtracted image.
217
+ By choosing a relatively large exponent ($\alpha=6$ is used in the experiments) the range of the image is encouraged to stay within a target interval instead of diverging.
218
+
219
+ \newcommand{\TV}{{V^\beta}}
220
+
221
+ A second richer regulariser is \emph{total variation} (TV) $\mathcal{R}_\TV(\bx)$, encouraging images to consist of piece-wise constant patches.
222
+ For continuous functions (or distributions) $f : \real^{H\times W} \supset \Omega \rightarrow \real$, the TV norm is given by:
223
+ \[
224
+ \mathcal{R}_\TV(f)
225
+ =
226
+ \int_{\Omega}
227
+ \left(
228
+ \left(\frac{\partial f}{\partial u}(u,v)\right)^2 +
229
+ \left(\frac{\partial f}{\partial v}(u,v)\right)^2
230
+ \right)^{\frac{\beta}{2}}\,du\,dv
231
+ \]
232
+ where $\beta = 1$.
233
+ Here images are discrete ($\bx \in \real^{H \times W}$) and the TV norm is replaced by the finite-difference approximation:
234
+ \[
235
+ \mathcal{R}_\TV(\bx)
236
+ =
237
+ \sum_{i,j}
238
+ \left(
239
+ \left(x_{i,j+1} - x_{ij}\right)^2 +
240
+ \left(x_{i+1,j} - x_{ij}\right)^2
241
+ \right)^\frac{\beta}{2}.
242
+ \]
243
+ It was observed empirically that, in the presence of subsampling (also caused by max pooling in CNNs), the TV regularizer ($\beta = 1$) leads to ``spikes'' in the reconstruction.
244
+ This is a known problem in TV-based image interpolation (see \eg Fig.~3 in \cite{chen2014bi}) and is illustrated in Fig.~\ref{fig:spikes}.left when inverting a layer in a CNN.
245
+ The ``spikes'' occur at the locations of the samples because: (1) the TV norm along any path between two samples depends only on the overall amount of intensity change (not on the sharpness of the changes) and (2) integrated on the 2D image, it is optimal to concentrate sharp changes around a boundary with a small perimeter.
246
+ Hyper-Laplacian priors with $\beta < 1$ are often used as a better match of the gradient statistics of natural images~\cite{krishnan09fast}, but they only exacerbate this issue.
247
+ Instead, we trade off the sharpness of the image against the removal of such artifacts by choosing $\beta > 1$ which, by penalising large gradients, distributes changes across regions rather than concentrating them at a point or curve.
248
+ We refer to this as the $\TV$ regularizer.
249
+ As seen in Fig.~\ref{fig:spikes} (right), the spikes are removed with $\beta = 2$ but the image is washed out as edges are penalized more than with $\beta = 1$.
250
+
251
+ When the target of the reconstruction is a colour image, both regularisers are summed for each colour channel.
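+ For concreteness, a minimal NumPy sketch of the discrete $\TV$ regularizer above (an illustration only, not the implementation used in the experiments) could look as follows:
+ \begin{verbatim}
+ import numpy as np
+
+ def tv_beta(x, beta=2.0):
+     # discrete TV^beta regularizer for a 2D image x
+     dh = x[:, 1:] - x[:, :-1]    # horizontal differences
+     dv = x[1:, :] - x[:-1, :]    # vertical differences
+     d2 = dh[:-1, :]**2 + dv[:, :-1]**2
+     return np.sum(d2 ** (beta / 2.0))
+ \end{verbatim}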
252
+
253
+ \begin{figure}
254
+ \hfill
255
+ {\adjincludegraphics[width=0.33\columnwidth,trim={120pt 0 0 120pt},clip]{figures/v26/spike_removal2_tvbeta1/ILSVRC2012_val_00000043/l04-recon}}
256
+ \hfill
257
+ {\adjincludegraphics[width=0.33\columnwidth,trim={120pt 0 0 120pt},clip]{figures/v26/spike_removal2_tvbeta2/ILSVRC2012_val_00000043/l04-recon}}
258
+ \hfill\mbox{}
259
+ \caption{\textbf{Left:} Spikes in an inverse of norm1 features (detail shown). \textbf{Right:} Spikes removed by a $\TV$ regularizer with $\beta = 2$.}
260
+ \label{fig:spikes}
261
+ \end{figure}
262
+
263
+ \paragraph{Balancing the different terms.} Balancing loss and regulariser(s) requires some attention.
264
+ While an optimal tuning can be achieved by cross-validation, it is important to start from reasonable settings of the parameters.
265
+ First, the loss is replaced by the normalized version $\|\Phi(\bx) - \Phi_0\|^2_2/\|\Phi_0\|^2_2$.
266
+ This fixes its dynamic range, as after normalisation the loss near the optimum can be expected to be contained in the $[0,1)$ interval, touching zero at the optimum.
267
+ In order to make the dynamic range of the regulariser(s) comparable one can aim for a solution $\bx^*$ which has roughly unitary Euclidean norm.
268
+ While representations are largely insensitive to the scaling of the image range, this is not exactly true for the first few layers of CNNs, where biases are tuned to a ``natural'' working range.
269
+ This can be addressed by considering the objective $\|\Phi(\sigma \bx) - \Phi_0\|^2_2/\|\Phi_0\|^2_2 + \mathcal{R}(\bx)$ where the scaling $\sigma$ is the average Euclidean norm of natural images in a training set.
270
+
271
+ Second, the multiplier $\lambda_\alpha$ of the $\alpha$-norm regularizer should be selected to encourage the reconstructed image $\sigma\bx$ to be contained in a natural range $[-B, B]$ (\eg in most CNN implementations $B=128$).
272
+ If most pixels in $\sigma
273
+ \bx$ have a magnitude similar to $B$, then $\mathcal{R}_\alpha(\bx)\approx HWB^\alpha/\sigma^\alpha$, and $\lambda_\alpha\approx \sigma^\alpha/(HWB^\alpha)$.
274
+ A similar argument suggests picking the $\TV$-norm regulariser coefficient as $\lambda_{\TV} \approx \sigma^\beta/(HW(aB)^\beta)$, where $a$ is a small fraction (\eg $a=1\%$) relating the dynamic range of the image to that of its gradient.
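+ As a quick illustration of these heuristics (with purely hypothetical values; in practice $\sigma$ depends on the training images), the coefficients can be computed as:
+ \begin{verbatim}
+ # illustrative values only
+ H, W, B, a = 227, 227, 128.0, 0.01
+ alpha, beta = 6.0, 2.0
+ sigma = 1e4   # hypothetical average image norm
+ lambda_alpha = sigma**alpha / (H * W * B**alpha)
+ lambda_tv = sigma**beta / (H * W * (a * B)**beta)
+ \end{verbatim}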
275
+
276
+ The final form of the objective function is
277
+ \begin{equation}
278
+ \|\Phi(\sigma \bx) - \Phi_0\|^2_2/\|\Phi_0\|^2_2 + \lambda_\alpha \mathcal{R}_\alpha(\bx) + \lambda_\TV \mathcal{R}_\TV(\bx) \label{eq:obj}
279
+ \end{equation}
280
+ It is in general non convex because of the nature of $\Phi$. We next discuss how to optimize it.
281
+
282
+ \subsection{Optimisation}\label{s:optimisation}
283
+
284
+
285
+ Finding an optimizer of the objective~\eqref{e:objective} may seem a hopeless task as most representations $\Phi$ involve strong non-linearities; in particular, deep representations are a chain of \emph{several non-linear layers}.
286
+ Nevertheless, simple gradient descent (GD) procedures have been shown to be very effective in \emph{learning} such models from data, which is arguably an even harder task.
287
+ Hence, it is not unreasonable to use GD to solve~\eqref{e:objective} too.
288
+ We extend GD to incorporate a few extensions that proved useful in learning deep networks~\cite{krizhevsky12imagenet}, as discussed below.
289
+
290
+ \paragraph{Momentum.} GD is extended to use \emph{momentum}:
291
+ \[
292
+ \mu_{t+1} \leftarrow m \mu_{t} - \eta_t \nabla E(\bx),\qquad
293
+ \bx_{t+1} \leftarrow \bx_{t} + \mathbf{\mu_t}
294
+ \]
295
+ where $E(\bx) = \ell(\Phi(\bx), \Phi_0) + \lambda \mathcal{R}(\bx)$ is the objective function.
296
+ The vector $\mu_t$ is a weighted average of the last several gradients, with decay factor $m=0.9$.
297
+ Learning proceeds for a few hundred iterations with a fixed learning rate $\eta_t$, which is then reduced tenfold until convergence.
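+ A minimal sketch of this update rule (an illustration only; the experiments themselves use the matconvnet toolbox) is:
+ \begin{verbatim}
+ import numpy as np
+
+ def gd_momentum(grad_E, x0, lr=0.01, m=0.9, iters=300):
+     # gradient descent with momentum on the objective E;
+     # grad_E is a user-supplied gradient function
+     x, mu = x0.copy(), np.zeros_like(x0)
+     for _ in range(iters):
+         mu = m * mu - lr * grad_E(x)   # momentum update
+         x = x + mu                     # take the step
+     return x
+ \end{verbatim}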
298
+
299
+ \paragraph{Computing derivatives.} Applying GD requires computing the derivatives of the loss function composed with the representation $\Phi(\bx)$.
300
+ While the squared Euclidean loss is smooth, this is not the case for the representation.
301
+ A key feature of CNNs is the ability to compute the derivatives of each computational layer, composing these into the derivative of the whole function using back-propagation.
302
+ Our translation of HOG and DSIFT into CNNs allows us to apply the same technique to these computer vision representations too.
303
+
304
+ \section{Representations}\label{s:representations}
305
+
306
+
307
+ This section describes the image representations studied in the paper: DSIFT (Dense-SIFT), HOG, and reference deep CNNs.
308
+ Furthermore, it shows how to implement DSIFT and HOG in a standard CNN framework in order to compute their derivatives.
309
+ Being able to compute derivatives is the only requirement imposed by the algorithm of Sect.~\ref{s:optimisation}.
310
+ Implementing DSIFT and HOG in a standard CNN framework makes derivative computation convenient.
311
+
312
+ \paragraph{CNN-A: deep networks.} As a reference deep network we consider the Caffe-Alex~\cite{jia13caffe} model (CNN-A), which closely reproduces the network by Krizhevsky \etal~\cite{krizhevsky12imagenet}.
313
+ This and many other similar networks alternate the following computational building blocks: linear convolution, ReLU gating, spatial max-pooling, and group normalisation.
314
+ Each such block takes as input a $d$-dimensional image and produces as output a $k$-dimensional one.
315
+ Blocks can additionally pad the image (with zeros for the convolutional blocks and with $-\infty$ for max pooling) or subsample the data.
316
+ The last several layers are deemed ``fully connected'' as the support of the linear filters coincides with the size of the image; however, they are equivalent to filtering layers in all other respects. Table~\ref{f:cnna} details the structure of CNN-A.
317
+
318
+ \paragraph{CNN-DSIFT and CNN-HOG.} This section shows how DSIFT~\cite{lowe99object,nowak06sampling} and HOG~\cite{dalal05histograms} can be implemented as CNNs.
319
+ This formalises the relation between CNNs and these standard representations.
320
+ It also makes derivative computation for these representations simple, as required by the inversion algorithm of Sect.~\ref{s:method}.
321
+ The DSIFT and HOG implementations in the VLFeat library~\cite{vedaldi07open} are used as numerical references. These are equivalent to Lowe's~\cite{lowe99object} SIFT and the DPM V5~HOG~\cite{lsvm-pami,voc-release5}.
322
+
323
+ SIFT and HOG involve computing and binning image gradients, pooling the binned gradients into cell histograms, grouping cells into blocks, and normalising the blocks.
324
+ Denote by $\bg$ the gradient at a given pixel and consider binning this into one of $K$ orientations (where $K=8$ for SIFT and $K=18$ for HOG).
325
+ This can be obtained in two steps: directional filtering and gating.
326
+ The $k$-th directional filter is $G_k = u_{1k} G_x + u_{2k } G_y$ where
327
+ \[
328
+ \bu_k = \begin{bmatrix} \cos \frac{2\pi k}{K} \\ \sin \frac{2\pi k}{K} \end{bmatrix},
329
+ \quad
330
+ G_x = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},
331
+ \quad
332
+ G_y = G_x^\top.
333
+ \]
334
+ The output of a directional filter is the projection $\langle \bg, \bu_k \rangle$ of the gradient along direction $\bu_k$.
335
+ A suitable gating function implements binning into a histogram element $h_k$. DSIFT uses bilinear orientation binning, given by
336
+ \[
337
+ h_k= \|\bg\|
338
+ \max\left\{0, 1 - \frac{K}{2\pi} \cos^{-1} \frac{\langle \bg, \bu_k \rangle}{\|\bg\|} \right\},
339
+ \]
340
+ whereas HOG (in the DPM V5 variant) uses hard assignments $h_k = \|\bg\| \mathbf{1}\left[\langle \bg, \bu_k \rangle > \|\bg\| \cos\pi/K \right]$.
341
+ Filtering is a standard CNN operation but these binning functions are not.
342
+ While their implementation is simple, an interesting alternative is the approximated bilinear binning:
343
+ \begin{align*}
344
+ h_k
345
+ &\approx \|\bg\|
346
+ \max\left\{0, \frac{1}{1-a} \frac{\langle \bg, \bu_k \rangle }{\|\bg\|} - \frac{a}{1-a}\right\}
347
+ \\
348
+ &\propto \max\left\{0, \langle \bg, \bu_k \rangle - a\|\bg\| \right\},
349
+ \quad a = \cos 2\pi/K.
350
+ \end{align*}
351
+ The norm-dependent offset $a\|\bg\|$ is still non-standard, but the ReLU operator is, which shows to what extent approximate binning can be achieved in typical CNNs.
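+ To make the construction concrete, a small NumPy sketch of this approximate ReLU-based binning (illustrative only; the function and parameter names are ours) is:
+ \begin{verbatim}
+ import numpy as np
+
+ def approx_binning(gx, gy, K=18):
+     # h_k ~ max(0, <g, u_k> - a * |g|) for K orientations
+     a = np.cos(2 * np.pi / K)
+     gnorm = np.sqrt(gx**2 + gy**2)
+     bins = [np.maximum(0.0, gx * np.cos(2 * np.pi * k / K)
+                             + gy * np.sin(2 * np.pi * k / K)
+                             - a * gnorm)
+             for k in range(K)]
+     return np.stack(bins, axis=-1)   # H x W x K
+ \end{verbatim}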
352
+
353
+ The next step is to pool the binned gradients into cell histograms using bilinear spatial pooling, followed by extracting blocks of $2\times 2$ (HOG) or $4 \times 4$ (SIFT) cells.
354
+ Both such operations can be implemented by banks of linear filters.
355
+ Cell blocks are then $l^2$ normalised, which is a special case of the standard local response normalisation layer.
356
+ For HOG, blocks are further decomposed back into cells, which requires another filter bank.
357
+ Finally, the descriptor values are clamped from above by applying $y = \min\{x,0.2\}$ to each component, which can be reduced to a combination of linear and ReLU layers.
358
+
359
+ The conclusion is that approximations to DSIFT and HOG can be implemented with conventional CNN components plus the non-conventional gradient norm offset.
360
+ However, all the filters involved are much sparser and simpler than the generic 3D filters in learned CNNs.
361
+ Nonetheless, in the rest of the paper we will use exact CNN equivalents of DSIFT and HOG, using modified or additional CNN components as needed.
362
+ \footnote{This requires addressing a few more subtleties. In DSIFT gradient contributions are usually weighted by a Gaussian centered at each descriptor (a $4 \times 4$ cell block); here we use the VLFeat approximation (\texttt{fast} option) of weighting cells rather than gradients, which can be incorporated in the block-forming filters. In UoCTTI HOG, cells contain both oriented and unoriented gradients (27 components in total) as well as 4 texture components. The latter are ignored for simplicity, while the unoriented gradients are obtained as average of the oriented ones in the block-forming filters. Curiously, in UoCTTI HOG the $l^2$ normalisation factor is computed considering only the unoriented gradient components in a block, but applied to all, which requires modifying the normalization operator. Finally, when blocks are decomposed back to cells, they are averaged rather than stacked as in the original Dalal-Triggs HOG, which can be implemented in the block-decomposition filters.} These CNNs are numerically indistinguishable from the VLFeat reference implementations, but, true to their CNN nature, allow computing the feature derivatives as required by the algorithm of Sect.~\ref{s:method}.
363
+
364
+ \newcommand{\puti}[2]
365
+ {\begin{tikzpicture}
366
+ \node[anchor=south east,inner sep=0] at (0,0) {#2};
367
+ \node[image label]{#1};
368
+ \end{tikzpicture}}
369
+
370
+ \begin{figure*}[ht!]
371
+ \newcommand{\tri}{trim={0 0 {.5\width} {.5\height}},clip}
372
+ \puti{(a) Orig.}{\includegraphics[width=0.166\textwidth]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-orig}}\puti{(b) HOG}{\includegraphics[width=0.166\textwidth]{figures/v26/hog-hoggle/hoggle-orig-1/lInf-hog}}\puti{(c) HOGgle~\cite{vondrick13hoggles:}}{\includegraphics[width=0.166\textwidth]{figures/v26/hog-hoggle/hoggle-orig-1/lInf-recon}}\puti{(d) $\text{HOG}^{-1}$}{\includegraphics[width=0.166\textwidth]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-recon}}\puti{(e) $\text{HOGb}^{-1}$}{\includegraphics[width=0.166\textwidth]{figures/v26/bfhogb-tv2/hoggle-orig-1/lInf-recon}}\puti{(f) $\text{DSIFT}^{-1}$}{\includegraphics[width=0.166\textwidth]{figures/v26/bfdsift-tv2/hoggle-orig-1/lInf-recon}}
373
+ {\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-orig}}{\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/hog-hoggle/hoggle-orig-1/lInf-hog}}{\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/hog-hoggle/hoggle-orig-1/lInf-recon}}{\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-recon}}{\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/bfhogb-tv2/hoggle-orig-1/lInf-recon}}{\adjincludegraphics[width=0.166\textwidth,trim={150pt 300pt 80pt 100pt},clip]{figures/v26/bfdsift-tv2/hoggle-orig-1/lInf-recon}}
374
+ \caption{Reconstruction quality of different representation inversion methods, applied to HOG and DSIFT. HOGb denotes HOG with bilinear orientation assignments. This image is best viewed on screen.}\label{f:hoggles}
375
+ \end{figure*}
376
+
377
+ Next we apply the algorithm from Sect.~\ref{s:method} to \textbf{CNN-A}, \textbf{CNN-DSIFT} and \textbf{CNN-HOG} to analyze our method.
378
+
379
+ \section{Experiments with shallow representations}\label{s:results-shallow}
380
+ \begin{table}
381
+ \centering
382
+ \begin{tabular}{|c|cc|c|c|}
383
+ \hline
384
+ descriptors & HOG & HOG & HOGb & DSIFT \\
385
+ method & HOGgle & our & our & our \\
386
+ \hline
387
+ error (\%) &$ 66.20$&$ 28.10$&$ 10.67$&$ 10.89$\\[-0.5em]
388
+ ~&\tiny$\pm13.7$&\tiny$\pm 7.9$&\tiny$\pm 5.2$&\tiny$\pm 7.5$\\
389
+ \hline
390
+ \end{tabular}
391
+ \vspace{0.5em}
392
+ \caption{Average reconstruction error of different representation inversion methods, applied to HOG and DSIFT. HOGb denotes HOG with bilinear orientation assignments. The standard deviation shown is the standard deviation of the error and not the standard deviation of the mean error.}\label{t:hog-errors}
393
+ \end{table}
394
+
395
+ \begin{figure}
396
+ \hfill
397
+ {\adjincludegraphics[width=0.2\columnwidth,trim={180pt 300pt 80pt 140pt},clip]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-orig}}
398
+ \hfill
399
+ {\adjincludegraphics[width=0.2\columnwidth,trim={180pt 300pt 80pt 140pt},clip]{figures/v26/bfhog-tv1/hoggle-orig-1/lInf-recon}}
400
+ \hfill
401
+ {\adjincludegraphics[width=0.2\columnwidth,trim={180pt 300pt 80pt 140pt},clip]{figures/v26/bfhog-tv2/hoggle-orig-1/lInf-recon}}
402
+ \hfill
403
+ {\adjincludegraphics[width=0.2\columnwidth,trim={180pt 300pt 80pt 140pt},clip]{figures/v26/bfhog-tv3/hoggle-orig-1/lInf-recon}}
404
+ \hfill\mbox{}
405
+ \caption{Effect of $\TV$ regularization. The same inversion algorithm visualized in Fig.~\ref{f:hoggles}(d) is used with a smaller ($\lambda_{\TV} = 0.5$), comparable ($\lambda_{\TV} = 5.0$), and larger ($\lambda_{\TV} = 50$) regularisation coefficient.}\label{f:tv}
406
+ \end{figure}
407
+
408
+ This section evaluates the representation inversion method of Sect.~\ref{s:method} by applying it to HOG and DSIFT.
409
+ The analysis includes both a qualitative (Fig.~\ref{f:hoggles}) and quantitative (Table~\ref{t:hog-errors}) comparison with an existing technique.
410
+ The quantitative evaluation reports a normalized reconstruction error $\|\Phi(\bx^*) - \Phi(\bx_i)\|_2/\mathrm{N}_{\Phi}$ averaged over $100$ images $\bx_i$ from the ILSVRC 2012 challenge~\cite{ILSVRCarxiv14} validation data (images 1 to 100).
411
+ A normalization is essential to place the Euclidean distance in the context of the volume occupied by the features: if the features are close together, then even a Euclidean distance of $0.1$ is very large,
412
+ but if the features are spread out, then even a Euclidean distance of $10^5$ may be very small.
413
+ We take $\mathrm{N}_{\Phi}$ to be the average pairwise Euclidean distance between the $\Phi(\bx_i)$'s across the 100 test images.
414
+
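+ For reference, a NumPy sketch of this normalized error (variable names are ours; we assume the average is taken over distinct image pairs):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def normalized_error(phi_recon, phi_orig, phi_all):
+     # phi_all: (N, D) array of features of the N test images,
+     # used to compute the normalizer N_Phi.
+     diffs = phi_all[:, None, :] - phi_all[None, :, :]
+     pair_dist = np.linalg.norm(diffs, axis=-1)
+     n = phi_all.shape[0]
+     N_phi = pair_dist.sum() / (n * (n - 1))  # mean over distinct pairs
+     return 100.0 * np.linalg.norm(phi_recon - phi_orig) / N_phi  # in percent
+ \end{verbatim}
+ 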
415
+ We fix the parameters in equation \ref{eq:obj} to $\lambda_\alpha = 2.16\times 10^{8}$, $\lambda_{\TV} = 5$, and $\beta = 2$.
416
+
417
+ The closest alternative to our method is HOGgle, a technique introduced by Vondrick~\etal~\cite{vondrick13hoggles:} for the visualisation of HOG features.
418
+ The HOGgle code is publicly available from the authors' website and is used throughout these experiments.
419
+ Crucially, HOGgle is pre-trained to invert the UoCTTI implementation of HOG, which is numerically equivalent to CNN-HOG (Sect.~\ref{s:representations}), allowing for a direct comparison between algorithms.
420
+
421
+ Compared to our method, HOGgle is fast (2-3s vs 60s on the same CPU) but not very accurate, as is apparent both qualitatively (Fig.~\ref{f:hoggles}.c vs d) and quantitatively (66\% vs 28\% reconstruction error, see Table~\ref{t:hog-errors}).
422
+ Interestingly, \cite{vondrick13hoggles:} propose a direct optimisation method similar to~\eqref{e:objective}, but show that it does not perform better than HOGgle.
423
+ This demonstrates the importance of the choice of regulariser and of the ability to compute the derivative of the representation.
424
+ The effect of the regularizer $\lambda_{\TV}$ is further analysed in Fig.~\ref{f:tv} (and later in Table~\ref{t:cnn-errors}): without this prior information, the reconstructions exhibit a significant amount of discretization artifacts.
425
+
426
+ In terms of speed, an advantage of optimizing~\eqref{e:objective} is that it can be switched to use GPU code immediately given the underlying CNN framework; doing so results in a ten-fold speedup.
427
+ Furthermore, the CNN-based implementation of HOG and DSIFT wastes significant resources by using generic filtering code despite the particular nature of the filters in these two representations.
428
+ Hence we expect that an optimized implementation could be several times faster than this.
429
+
430
+ It is also apparent that different representations can be easier or harder to invert.
431
+ In particular, modifying HOG to use bilinear gradient orientation assignments as in SIFT (Sect.~\ref{s:representations}) significantly reduces the reconstruction error (from 28\% down to 11\%) and improves the reconstruction quality (Fig.~\ref{f:hoggles}.e).
432
+ More impressive is DSIFT: it is quantitatively similar to HOG with bilinear orientations, but produces significantly more detailed images (Fig.~\ref{f:hoggles}.f).
433
+ Since HOG uses a finer quantisation of the gradient compared to SIFT but otherwise the same cell size and sampling, this result can be attributed to the heavier block-normalisation of HOG, which evidently discards more image information than SIFT.
434
+
435
+ \section{Experiments with deep representations}\label{s:results-deep}
436
+ \begin{figure}
437
+ \newcommand{\putx}[2]{\puti{#1}{\noexpand{\adjincludegraphics[width=0.11\textwidth]{figures/v26/bfcnna1-group1/#2/l01-orig}}}}\hfill \putx{a}{ILSVRC2012_val_00000013}\hfill \putx{b}{stock_fish}\hfill \putx{c}{stock_abstract}\hfill \putx{d}{ILSVRC2012_val_00000043}\hfill\mbox{}
438
+ \caption{Test images for qualitative results.}\label{f:test-image}
439
+ \end{figure}
440
+
441
+ \begin{table*}
442
+ \setlength{\tabcolsep}{2pt}
443
+ \footnotesize
444
+ \centering
445
+ \begin{tabular}{|l|cccccccccccccccccccc|}
446
+ \hline
447
+ layer& 1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12& 13& 14& 15& 16& 17& 18& 19& 20 \\
448
+ \hline
449
+ name& conv1 & relu1& mpool1& norm1 & conv2 & relu2 & mpool2& norm2 & conv3 & relu3 & conv4 & relu4 & conv5 & relu5 & mpool5 & fc6 & relu6 & fc7 & relu7& fc8 \\
450
+ type& cnv& relu& mpool& nrm& cnv& relu& mpool& nrm& cnv& relu& cnv& relu& cnv& relu& mpool& cnv& relu& cnv& relu& cnv \\
451
+ channels& 96& 96& 96& 96& 256& 256& 256& 256& 384& 384& 384& 384& 256& 256& 256& 4096& 4096& 4096& 4096& 1000 \\
452
+ \hline
453
+ rec. field& 11& 11& 19& 19& 51& 51& 67& 67& 99& 99& 131& 131& 163& 163& 195& 355& 355& 355& 355& 355 \\
454
+ \hline
455
+ \end{tabular}
456
+ \vspace{0.5em}
457
+ \caption{{\bf CNN-A structure.} The table specifies the structure of CNN-A along with the receptive field size of each neuron. The filters in layers 16 to 20 operate as ``fully connected'': given the standard image input size of $227 \times 227$ pixels, their support covers the whole image. Note also that their receptive field is larger than 227 pixels, but can be contained in the image domain due to padding.}\label{f:cnna}
458
+ \end{table*}
459
+
460
+ \begin{figure*}
461
+ \centering
462
+ \newcommand{\putx}[3]{\puti{#1}{\noexpand{\adjincludegraphics[width=0.099\textwidth]{figures/v26/#2/ILSVRC2012_val_00000013/l#3-recon}}}}\putx{conv1}{bfcnna1}{01}\putx{relu1}{bfcnna1}{02}\putx{mpool1}{bfcnna1}{03}\putx{norm1}{bfcnna1}{04}\putx{conv2}{bfcnna1}{05}\putx{relu2}{bfcnna1}{06}\putx{mpool2}{bfcnna2}{07}\putx{norm2}{bfcnna2}{08}\putx{conv3}{bfcnna2}{09}\putx{relu3}{bfcnna2}{10}
463
+ \putx{conv4}{bfcnna2}{11}\putx{relu4}{bfcnna2}{12}\putx{conv5}{bfcnna3}{13}\putx{relu5}{bfcnna3}{14}\putx{mpool5}{bfcnna3}{15}\putx{fc6}{bfcnna3}{16}\putx{relu6}{bfcnna3}{17}\putx{fc7}{bfcnna3}{18}\putx{relu7}{bfcnna3}{19}\putx{fc8}{bfcnna3}{20}
464
+ \caption{{\bf CNN reconstruction.} Reconstruction of the image of Fig.~\ref{f:test-image}.a from each layer of CNN-A. To generate these results, the regularization coefficient for each layer is chosen to match the highlighted rows in table~\ref{t:cnn-errors}. This figure is best viewed in color/screen.}\label{f:cnn-layers}
465
+ \end{figure*}
466
+
467
+ \begin{table*}
468
+ \setlength{\tabcolsep}{1pt}
469
+ \begin{tabular}{|c||cccccccccccccccccccc|}
470
+ \hline
471
+ $\lambda_{\TV}$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\
472
+ & conv1 & relu1 & pool1 & norm1 & conv2 & relu2 & pool2 & norm2 & conv3 & relu3 & conv4 & relu4 & conv5 & relu5 & pool5 & fc6 & relu6 & fc7 & relu7 & fc8 \\
473
+ \hline
474
+ $\lambda_1$ & $\mathbf{10.0}$ & $\mathbf{11.3}$ & $\mathbf{21.9}$ & $\mathbf{20.3}$ & $\mathbf{12.4}$ & $\mathbf{12.9}$ & $15.5$ & $15.9$ & $14.5$ & $16.5$ & $14.9$ & $13.8$ & $12.6$ & $15.6$ & $16.6$ & $12.4$ & $15.8$ & $12.8$ & $10.5$ & $5.3$ \\[-0.5em]
475
+ &\tiny$\pm5.0$ &\tiny$\pm5.5$ &\tiny$\pm9.2$ &\tiny$\pm5.0$ &\tiny$\pm3.1$ &\tiny$\pm5.3$ &\tiny$\pm4.7$ &\tiny$\pm4.6$ &\tiny$\pm4.7$ &\tiny$\pm5.3$ &\tiny$\pm3.8$ &\tiny$\pm3.8$ &\tiny$\pm2.8$ &\tiny$\pm5.1$ &\tiny$\pm4.6$ &\tiny$\pm3.5$ &\tiny$\pm4.5$ &\tiny$\pm6.4$ &\tiny$\pm1.9$ &\tiny$\pm1.1$ \\
476
+ $\lambda_2$ & $20.2$ & $22.4$ & $30.3$ & $28.2$ & $20.0$ & $17.4$ & $\mathbf{18.2}$ & $\mathbf{18.4}$ & $\mathbf{14.4}$ & $\mathbf{15.1}$ & $\mathbf{13.3}$ & $\mathbf{14.0}$ & $15.4$ & $13.9$ & $15.5$ & $14.2$ & $13.7$ & $15.4$ & $10.8$ & $5.9$ \\[-0.5em]
477
+ &\tiny$\pm9.3$&\tiny$\pm10.3$&\tiny$\pm13.6$ &\tiny$\pm7.6$ &\tiny$\pm4.9$ &\tiny$\pm5.0$ &\tiny$\pm5.5$ &\tiny$\pm5.0$ &\tiny$\pm3.6$ &\tiny$\pm3.3$ &\tiny$\pm2.6$ &\tiny$\pm2.8$ &\tiny$\pm2.7$ &\tiny$\pm3.2$ &\tiny$\pm3.5$ &\tiny$\pm3.7$ &\tiny$\pm3.1$&\tiny$\pm10.3$ &\tiny$\pm1.6$ &\tiny$\pm0.9$ \\
478
+ $\lambda_3$ & $40.8$ & $45.2$ & $54.1$ & $48.1$ & $39.7$ & $32.8$ & $32.7$ & $32.4$ & $25.6$ & $26.9$ & $23.3$ & $23.9$ & $\mathbf{25.7}$ & $\mathbf{20.1}$ & $\mathbf{19.0}$ & $\mathbf{18.6}$ & $\mathbf{18.7}$ & $\mathbf{17.1}$ & $\mathbf{15.5}$ & $\mathbf{8.5}$ \\[-0.5em]
479
+ &\tiny$\pm17.0$&\tiny$\pm18.7$&\tiny$\pm22.7$&\tiny$\pm11.8$ &\tiny$\pm9.1$ &\tiny$\pm7.7$ &\tiny$\pm8.0$ &\tiny$\pm7.0$ &\tiny$\pm5.6$ &\tiny$\pm5.2$ &\tiny$\pm4.1$ &\tiny$\pm4.6$ &\tiny$\pm4.3$ &\tiny$\pm4.3$ &\tiny$\pm4.3$ &\tiny$\pm4.9$ &\tiny$\pm3.8$ &\tiny$\pm3.4$ &\tiny$\pm2.1$ &\tiny$\pm1.3$ \\
480
+ \hline
481
+ \end{tabular}
482
+ \vspace{0.1em}
483
+ \caption{{\bf Inversion error for CNN-A.} Average inversion percentage error (normalized) for all the layers of CNN-A and various amounts of $\TV$ regularisation: $\lambda_1=0.5$, $\lambda_2=10\lambda_1$ and $\lambda_3=100\lambda_1$. In bold face are the error values corresponding to the regularizer that works best both qualitatively and quantitatively. The deviations specified in this table are the standard deviations of the errors and not the standard deviations of the mean error value.}\label{t:cnn-errors}
484
+ \end{table*}
485
+
486
+ \begin{figure*}[ht!]
487
+ \newcommand{\putx}[3]{\puti{#1}{\noexpand{\adjincludegraphics[width=0.23\textwidth,trim={0 0 {0.3333\width} 0},clip]{figures/v26/bfcnna#2-repeats/\which/l#3-recon}}}}\begin{center}
488
+ \newcommand{\which}{stock_abstract}
489
+ \putx{pool5}{1}{15} \putx{relu6}{2}{17} \putx{relu7}{3}{19} \putx{fc8}{3}{20}
490
+ \renewcommand{\which}{ILSVRC2012_val_00000043}
491
+ \putx{pool5}{1}{15} \putx{relu6}{2}{17} \putx{relu7}{3}{19} \putx{fc8}{3}{20}
492
+ \end{center}
493
+ \vspace{-1em}
494
+ \caption{{\bf CNN invariances.} Multiple reconstructions of the images of Fig.~\ref{f:test-image}.c--d from different deep codes obtained from CNN-A. This figure is best seen in colour/screen.}\label{f:cnn-invariance}
495
+ \end{figure*}
496
+
497
+ \begin{figure}[ht!]
498
+ \hfill
499
+ {\adjincludegraphics[width=0.15\textwidth]{figures/v26/bfcnna1/ILSVRC2012_val_00000043/l20-recon}}
500
+ \hfill
501
+ {\adjincludegraphics[width=0.15\textwidth]{figures/v26/bfcnna2/ILSVRC2012_val_00000043/l20-recon}}
502
+ \hfill
503
+ {\adjincludegraphics[width=0.15\textwidth]{figures/v26/bfcnna3/ILSVRC2012_val_00000043/l20-recon}}
504
+ \hfill\mbox{}
505
+ \caption{Effect of $\TV$ regularization on CNNs. Inversions of the last layers of CNN-A for Fig.~\ref{f:test-image}.d with a progressively larger regulariser $\lambda_{\TV}$. This image is best viewed in color/screen.}\label{f:cnn-tv}
506
+ \end{figure}
507
+
508
+ \begin{figure*}[ht!]
509
+ \centering
510
+ \newcommand{\putx}[3]{\puti{#1}{\noexpand{\adjincludegraphics[width=0.135\textwidth]{figures/v26/bfcnna#2-neigh5/ILSVRC2012_val_00000013/l#3-recon-fovoverlaid}}}}\putx{conv1}{3}{01}\putx{relu1}{3}{02}\putx{mpool1}{3}{03}\putx{norm1}{3}{04}\putx{conv2}{3}{05}\putx{relu2}{3}{06}\putx{mpool2}{3}{07}
511
+ \putx{norm2}{3}{08}\putx{conv3}{3}{09}\putx{relu3}{3}{10}\putx{conv4}{3}{11}\putx{relu4}{3}{12}\putx{conv5}{3}{13}\putx{relu5}{3}{14}\caption{{\bf CNN receptive field.} Reconstructions of the image of Fig.~\ref{f:test-image}.a from the central $5\times 5$ neuron fields at different depths of CNN-A. The white box marks the field of view of the $5\times 5$ neuron field. The field of view is the entire image for conv5 and relu5.}\label{f:cnn-neigh}
512
+ \end{figure*}
513
+
514
+ \begin{figure*}[ht!]
515
+ \centering
516
+ \newcommand{\putx}[2]{\puti{#1}{\noexpand{\includegraphics[width=0.160\textwidth]{figures/v26/bfcnna1-group2/stock_abstract/l#2-recon}}}}\putx{conv1-grp1}{01}\putx{norm1-grp1}{04}\putx{norm2-grp1}{08}\renewcommand{\putx}[2]{\puti{#1}{\noexpand{\includegraphics[width=0.160\textwidth]{figures/v26/bfcnna1-group2/stock_fish/l#2-recon}}}}\putx{conv1-grp1}{01}\putx{norm1-grp1}{04}\putx{norm2-grp1}{08}
517
+ \renewcommand{\putx}[2]{\puti{#1}{\noexpand{\includegraphics[width=0.160\textwidth]{figures/v26/bfcnna1-group1/stock_abstract/l#2-recon}}}}\putx{conv1-grp2}{01}\putx{norm1-grp2}{04}\putx{norm2-grp2}{08}\renewcommand{\putx}[2]{\puti{#1}{\noexpand{\includegraphics[width=0.160\textwidth]{figures/v26/bfcnna1-group1/stock_fish/l#2-recon}}}}\putx{conv1-grp2}{01}\putx{norm1-grp2}{04}\putx{norm2-grp2}{08}\caption{{\bf CNN neural streams.} Reconstructions of the images of Fig.~\ref{f:test-image}.c-b from either of the two neural streams of CNN-A. This figure is best seen in colour/screen.}\label{f:cnn-streams}
518
+ \end{figure*}
519
+ \vspace{-0.5em}
520
+ This section evaluates the inversion method applied to CNN-A described in Sect.~\ref{s:representations}.
521
+ Compared to CNN-HOG and CNN-DSIFT, this network is significantly larger and deeper.
522
+ It seems therefore that the inversion problem should be considerably harder.
523
+ Also, CNN-A is not handcrafted but learned from 1.2M images of the ImageNet ILSVRC 2012 data~\cite{ILSVRCarxiv14}.
524
+
525
+
526
+ The algorithm of Sect.~\ref{s:optimisation} is used to invert the code obtained from each individual CNN layer for 100 ILSVRC validation images (these were not used to train the CNN-A model~\citep{krizhevsky12imagenet}).
527
+ As in Sect.~\ref{s:results-shallow}, the normalized inversion error is computed and reported in Table~\ref{t:cnn-errors}.
528
+ The experiment is repeated with $\lambda_\alpha$ fixed to $2.16\times10^{8}$ and $\lambda_{\TV}$ increased tenfold at each step, starting from a relatively small value $\lambda_1 = 0.5$.
529
+ The ImageNet ILSVRC mean image is added back to the reconstruction before visualisation, since it is subtracted when training the network.
530
+ Somewhat surprisingly, the quantitative results show that CNNs are, in fact, not much harder to invert than HOG.
531
+ The error rarely exceeds $20\%$, which is comparable to the accuracy of HOG (Sect.~\ref{s:results-shallow}).
532
+ The last layer in particular is easy to invert, with an average error of $8.5\%$.
533
+
534
+ We choose the regularizer coefficients for each representation/layer based on a quantitative and qualitative study of the reconstruction.
535
+ We pick $\lambda_1 = 0.5$ for layers 1-6, $\lambda_2 = 5.0$ for layers 7-12 and $\lambda_3 = 50$ for layers 13-20.
536
+ The error values corresponding to these parameters are marked in bold face in Table~\ref{t:cnn-errors}.
537
+ Increasing $\lambda_{\TV}$ causes a deterioration for the first layers, but for the later layers it helps recover a more visually interpretable reconstruction.
538
+ Though this parameter can be tuned by cross validation on the normalized reconstruction error, a selection based on qualitative analysis is preferred because the method should yield images that are visually meaningful.
539
+
540
+ Qualitatively, Fig.~\ref{f:cnn-layers} illustrates the reconstruction for a test image from each layer of CNN-A.
541
+ The progression is remarkable.
542
+ The first few layers are essentially an invertible code of the image.
543
+ All the convolutional layers maintain a photographically faithful representation of the image, although with increasing fuzziness.
544
+ The 4,096-dimensional fully connected layers are perhaps more interesting, as they invert back to a \emph{composition of parts similar but not identical to the ones found in the original image}.
545
+ Going from relu7 to fc8 reduces the dimensionality further to just 1,000; nevertheless some of these visual elements can still be identified.
546
+ Similar effects can be observed in the reconstructions in~Fig.~\ref{f:cnn-invariance}.
547
+ This figure also includes the reconstruction of an abstract pattern, which is not included in any of the ImageNet classes; still, all CNN codes capture distinctive visual features of the original pattern, clearly indicating that even very deep layers capture visual information.
548
+
549
+ Next, Fig.~\ref{f:cnn-invariance} examines the invariance captured by the CNN model by considering multiple reconstructions out of each deep layer.
550
+ A careful examination of these images reveals that the codes capture progressively larger deformations of the object.
551
+ In the ``flamingo'' reconstruction, in particular, relu7 and fc8 invert back to multiple copies of the object/parts at different positions and scales.
552
+
553
+ Note that all these and the original images are nearly indistinguishable from the viewpoint of the CNN model; it is therefore interesting to note the lack of detail in the deepest reconstructions, showing that the network captures just a sketch of the objects, which evidently suffices for classification.
554
+ Considerably lowering the regulariser parameter still yields very accurate inversions, but this time with barely any resemblance to a natural image.
555
+ This confirms that CNNs have strong non-natural confounders.
556
+
557
+ We now examine reconstructions obtained from subsets of neural responses in different CNN layers.
558
+ Fig.~\ref{f:cnn-neigh} explores the \emph{locality} of the codes by reconstructing a central $5\times 5$ patch of features in each layer.
559
+ The regulariser encourages portions of the image that do not contribute to the neural responses to be switched off.
560
+ The locality of the features is obvious in the figure; what is less obvious is that the effective receptive field of the neurons is in some cases significantly smaller than the theoretical one, shown as a white box in the image.
561
+
562
+ Finally, Fig.~\ref{f:cnn-streams} reconstructs images from a subset of feature channels.
563
+ CNN-A contains in fact two subsets of feature channels which are independent for the first several layers (up to norm2)~\cite{krizhevsky12imagenet}.
564
+ Reconstructing from each subset individually clearly shows that one group is tuned towards low-frequency colour information, whereas the second one is tuned towards high-frequency luminance components.
565
+ Remarkably, this behaviour emerges naturally in the learned network without any mechanism directly encouraging this pattern.
566
+
567
+ \begin{figure}[h!]
568
+ \centering
569
+ \newcommand{\putx}[4]{\noexpand{\adjincludegraphics[width=0.092\textwidth]{figures/v26/#1/ILSVRC2012_val_000000#2/l#3-#4}}}\putx{bfcnna3}{11}{01}{orig}\putx{bfcnna3}{14}{01}{orig}\putx{bfcnna3}{18}{01}{orig}\putx{bfcnna3}{23}{01}{orig}\putx{bfcnna3}{33}{01}{orig}
570
+ \putx{bfcnna3}{11}{15}{recon}\putx{bfcnna3}{14}{15}{recon}\putx{bfcnna3}{18}{15}{recon}\putx{bfcnna3}{23}{15}{recon}\putx{bfcnna3}{33}{15}{recon}
571
+ \caption{{\bf Diversity in the CNN model.} mpool5 reconstructions show that the network retains rich information even at such deep levels. This figure is best viewed in color/screen (zoom in).}\label{f:cnn-diversity}
572
+ \end{figure}
573
+
574
+ \section{Summary}\label{s:summary}
575
+
576
+
577
+ This paper proposed a method to invert shallow and deep representations by optimizing an objective function with gradient descent.
578
+ Compared to alternatives, a key difference is the use of image priors such as the $\TV$ norm that can recover the low-level image statistics removed by the representation.
579
+ This tool performs better than alternative reconstruction methods for HOG.
580
+ Applied to CNNs, the visualisations shed light on the information represented at each layer.
581
+ In particular, it is clear that a progressively more invariant and abstract notion of the image content is formed in the network.
582
+
583
+ In the future, we shall experiment with more expressive natural image priors and analyze the effect of network hyper-parameters on the reconstructions.
584
+ We shall extract subsets of neurons that encode object parts and try to establish sub-networks that capture different details of the image.
585
+
586
+ \footnotesize
587
+ \bibliographystyle{ieee}
588
+ \bibliography{local}
589
+ \end{document}
papers/1412/1412.3555.tex ADDED
@@ -0,0 +1,758 @@
1
+ \documentclass{article} \usepackage{nips14submit_e,times}
2
+ \usepackage{hyperref}
3
+ \usepackage{url}
4
+ \usepackage{natbib}
5
+ \usepackage{amsmath}
6
+ \usepackage{amsthm}
7
+ \usepackage{amssymb}
8
+ \usepackage{graphicx}
9
+ \usepackage{xspace}
10
+ \usepackage{tabularx}
11
+ \usepackage{multirow}
12
+ \usepackage{wrapfig}
13
+
14
+
15
+ \newcommand{\fix}{\marginpar{FIX}}
16
+ \newcommand{\new}{\marginpar{NEW}}
17
+
18
+ \newcommand{\obs}{\text{obs}}
19
+ \newcommand{\mis}{\text{mis}}
20
+
21
+ \newcommand{\qt}[1]{\left<#1\right>}
22
+ \newcommand{\ql}[1]{\left[#1\right]}
23
+ \newcommand{\hess}{\mathbf{H}}
24
+ \newcommand{\jacob}{\mathbf{J}}
25
+ \newcommand{\hl}{HL}
26
+ \newcommand{\cost}{\mathcal{L}}
27
+ \newcommand{\lout}{\mathbf{r}}
28
+ \newcommand{\louti}{r}
29
+ \newcommand{\outi}{y}
30
+ \newcommand{\out}{\mathbf{y}}
31
+ \newcommand{\gauss}{\mathbf{G_N}}
32
+ \newcommand{\eye}{\mathbf{I}}
33
+ \newcommand{\softmax}{\phi}
34
+ \newcommand{\targ}{\mathbf{t}}
35
+ \newcommand{\metric}{\mathbf{G}}
36
+ \newcommand{\sample}{\mathbf{z}}
37
+
38
+ \newcommand{\bmx}[0]{\begin{bmatrix}}
39
+ \newcommand{\emx}[0]{\end{bmatrix}}
40
+ \newcommand{\qexp}[1]{\left<#1\right>}
41
+ \newcommand{\vect}[1]{\mathbf{#1}}
42
+ \newcommand{\vects}[1]{\boldsymbol{#1}}
43
+ \newcommand{\matr}[1]{\mathbf{#1}}
44
+ \newcommand{\var}[0]{\operatorname{Var}}
45
+ \newcommand{\std}[0]{\operatorname{std}}
46
+ \newcommand{\cov}[0]{\operatorname{Cov}}
47
+ \newcommand{\diag}[0]{\operatorname{diag}}
48
+ \newcommand{\matrs}[1]{\boldsymbol{#1}}
49
+ \newcommand{\va}[0]{\vect{a}}
50
+ \newcommand{\vb}[0]{\vect{b}}
51
+ \newcommand{\vc}[0]{\vect{c}}
52
+ \newcommand{\vh}[0]{\vect{h}}
53
+ \newcommand{\vv}[0]{\vect{v}}
54
+ \newcommand{\vx}[0]{\vect{x}}
55
+ \newcommand{\vr}[0]{\vect{r}}
56
+ \newcommand{\vw}[0]{\vect{w}}
57
+ \newcommand{\vs}[0]{\vect{s}}
58
+ \newcommand{\vf}[0]{\vect{f}}
59
+ \newcommand{\vy}[0]{\vect{y}}
60
+ \newcommand{\vg}[0]{\vect{g}}
61
+ \newcommand{\vm}[0]{\vect{m}}
62
+ \newcommand{\vu}[0]{\vect{u}}
63
+ \newcommand{\vL}[0]{\vect{L}}
64
+ \newcommand{\mW}[0]{\matr{W}}
65
+ \newcommand{\mG}[0]{\matr{G}}
66
+ \newcommand{\mX}[0]{\matr{X}}
67
+ \newcommand{\mQ}[0]{\matr{Q}}
68
+ \newcommand{\mU}[0]{\matr{U}}
69
+ \newcommand{\mV}[0]{\matr{V}}
70
+ \newcommand{\mA}{\matr{A}}
71
+ \newcommand{\mD}{\matr{D}}
72
+ \newcommand{\mS}{\matr{S}}
73
+ \newcommand{\mI}{\matr{I}}
74
+ \newcommand{\td}[0]{\text{d}}
75
+ \newcommand{\TT}[0]{\vects{\theta}}
76
+ \newcommand{\vsig}[0]{\vects{\sigma}}
77
+ \newcommand{\valpha}[0]{\vects{\alpha}}
78
+ \newcommand{\vmu}[0]{\vects{\mu}}
79
+ \newcommand{\vzero}[0]{\vect{0}}
80
+ \newcommand{\tf}[0]{\text{m}}
81
+ \newcommand{\tdf}[0]{\text{dm}}
82
+ \newcommand{\grad}[0]{\nabla}
83
+ \newcommand{\alert}[1]{\textcolor{red}{#1}}
84
+ \newcommand{\N}[0]{\mathcal{N}}
85
+ \newcommand{\LL}[0]{\mathcal{L}}
86
+ \newcommand{\HH}[0]{\mathcal{H}}
87
+ \newcommand{\RR}[0]{\mathbb{R}}
88
+ \newcommand{\II}[0]{\mathbb{I}}
89
+ \newcommand{\Scal}[0]{\mathcal{S}}
90
+ \newcommand{\sigmoid}{\sigma}
91
+ \newcommand{\E}[0]{\mathbb{E}}
92
+ \newcommand{\enabla}[0]{\ensuremath{\overset{\raisebox{-0.3ex}[0.5ex][0ex]{\ensuremath{\scriptscriptstyle e}}}{\nabla}}}
93
+ \newcommand{\enhnabla}[0]{\nabla_{\hspace{-0.5mm}e}\,}
94
+ \newcommand{\tred}[1]{\textcolor{red}{#1}}
95
+ \newcommand{\todo}[1]{{\Large\textcolor{red}{#1}}}
96
+ \newcommand{\done}[1]{{\Large\textcolor{green}{#1}}}
97
+ \newcommand{\dd}[1]{\ensuremath{\mbox{d}#1}}
98
+
99
+ \DeclareMathOperator*{\argmax}{\arg \max}
100
+ \DeclareMathOperator*{\argmin}{\arg \min}
101
+ \newcommand{\newln}{\\&\quad\quad{}}
102
+
103
+ \newcommand{\tb}[1]{\textcolor{blue}{#1}}
104
+
105
+
106
+ \newcommand{\specialcell}[2][c]{\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
107
+
108
+
109
+ \nipsfinalcopy
110
+
111
+ \title{Empirical Evaluation of \\ Gated Recurrent Neural Networks \\ on Sequence Modeling}
112
+
113
+
114
+ \author{
115
+ Junyoung Chung ~ ~ ~ Caglar Gulcehre ~ ~ ~
116
+ KyungHyun Cho\\
117
+ Universit\'{e} de Montr\'{e}al
118
+ \And
119
+ Yoshua Bengio\\
120
+ Universit\'{e} de Montr\'{e}al \\
121
+ CIFAR Senior Fellow
122
+ }
123
+
124
+
125
+
126
+ \begin{document}
127
+
128
+ \maketitle
129
+ \begin{abstract}
130
+ In this paper we compare different types of recurrent units in recurrent
131
+ neural networks (RNNs). In particular, we focus on more sophisticated units that
132
+ implement a gating mechanism, such as a long short-term memory (LSTM) unit
133
+ and a recently proposed gated recurrent unit (GRU). We evaluate these
134
+ recurrent units on the tasks of polyphonic music modeling and speech signal
135
+ modeling. Our experiments revealed that these advanced recurrent units are
136
+ indeed better than more traditional recurrent units such as $\tanh$ units.
137
+ Also, we found GRU to be comparable to LSTM.
138
+ \end{abstract}
139
+
140
+ \section{Introduction}
141
+
142
+ Recurrent neural networks have recently shown promising results in many machine
143
+ learning tasks, especially when input and/or output are of variable
144
+ length~\citep[see, e.g.,][]{Graves-book2012}. More recently, \citet{Sutskever-et-al-arxiv2014}
145
+ and \citet{Bahdanau-et-al-arxiv2014} reported that recurrent neural networks are
146
+ able to perform as well as the existing, well-developed systems on a challenging
147
+ task of machine translation.
148
+
149
+ One interesting observation we make from these recent successes is that almost
150
+ none of these successes were achieved with a vanilla recurrent neural network. Rather, it was a
151
+ recurrent neural network with sophisticated recurrent hidden units, such as long
152
+ short-term memory units~\citep{Hochreiter+Schmidhuber-1997}, that was used in
153
+ those successful applications.
154
+
155
+ Among those sophisticated recurrent units, in this paper, we are interested in
156
+ evaluating two closely related variants. One is a long short-term memory (LSTM)
157
+ unit, and the other is a gated recurrent unit (GRU) proposed more recently by
158
+ \citet{cho2014properties}. It is well established in the field that the LSTM
159
+ unit works well on sequence-based tasks with long-term dependencies, but the
160
+ latter has only recently been introduced and used in the context of machine
161
+ translation.
162
+
163
+ In this paper, we evaluate these two units and a more traditional
164
+ $\tanh$ unit on the task of sequence modeling. We consider four
165
+ polyphonic music datasets~\citep[see, e.g.,][]{Boulanger-et-al-ICML2012} as well as two
166
+ internal datasets provided by Ubisoft in which each sample is a raw speech
167
+ representation.
168
+
169
+ Based on our experiments, we conclude that, with a fixed number of parameters for all models,
170
+ the GRU can outperform the LSTM on some datasets, both in terms of convergence in CPU time and
171
+ in terms of the number of parameter updates, as well as in generalization.
172
+
173
+ \section{Background: Recurrent Neural Network}
174
+
175
+ A recurrent neural network (RNN) is an extension of a conventional feedforward neural
176
+ network, which is able to handle a variable-length sequence input. The RNN
177
+ handles the variable-length sequence by having a recurrent hidden state whose
178
+ activation at each time is dependent on that of the previous time.
179
+
180
+ More formally, given a sequence $\vx=\left( \vx_1, \vx_2, \cdots, \vx_{\scriptscriptstyle{T}} \right)$, the
181
+ RNN updates its recurrent hidden state $h_t$ by
182
+ \begin{align}
183
+ \label{eq:rnn_hidden}
184
+ \vh_t =& \begin{cases}
185
+ 0, & t = 0\\
186
+ \phi\left(\vh_{t-1}, \vx_{t}\right), & \mbox{otherwise}
187
+ \end{cases}
188
+ \end{align}
189
+ where $\phi$ is a nonlinear function such as the composition of a logistic sigmoid with an affine transformation.
190
+ Optionally, the RNN may have an output $\vy=\left(y_1, y_2, \dots, y_{\scriptscriptstyle{T}}\right)$ which
191
+ may again be of variable length.
192
+
193
+ Traditionally, the update of the recurrent hidden state in Eq.~\eqref{eq:rnn_hidden} is implemented as
194
+
195
+ \begin{align}
196
+ \label{eq:rnn_trad}
197
+ \vh_t = g\left( W \vx_t + U \vh_{t-1} \right),
198
+ \end{align}
199
+ where $g$ is a smooth, bounded function such as a logistic sigmoid function or a
200
+ hyperbolic tangent function.
201
+
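+ A minimal NumPy sketch of this update (names are ours; no bias term is used, as in Eq.~\eqref{eq:rnn_trad}):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def rnn_step(x_t, h_prev, W, U):
+     # Traditional recurrent update with g = tanh.
+     return np.tanh(W @ x_t + U @ h_prev)
+ \end{verbatim}
+ 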
202
+ A generative RNN outputs a probability distribution over the next element of the sequence,
203
+ given its current state $\vh_t$, and this generative model can capture a distribution
204
+ over sequences of variable length by using a special output symbol to represent
205
+ the end of the sequence. The sequence probability can be decomposed into
206
+
207
+ \begin{align}
208
+ \label{eq:seq_model}
209
+ p(x_1, \dots, x_{\scriptscriptstyle{T}}) = p(x_1) p(x_2\mid x_1) p(x_3 \mid x_1, x_2) \cdots
210
+ p(x_{\scriptscriptstyle{T}} \mid x_1, \dots, x_{\scriptscriptstyle{T-1}}),
211
+ \end{align}
212
+
213
+ where the last element is a special end-of-sequence value. We model each conditional probability distribution with
214
+
215
+ \begin{align*}
216
+ p(x_t \mid x_1, \dots, x_{t-1}) =& g(h_t),
217
+ \end{align*}
218
+ where $h_t$ is from Eq.~\eqref{eq:rnn_hidden}. Such generative RNNs are the subject
219
+ of this paper.
220
+
221
+ Unfortunately, it has been observed by, e.g., \citet{Bengio-trnn94} that it is
222
+ difficult to train RNNs to capture long-term dependencies because the gradients
223
+ tend to either vanish (most of the time) or explode (rarely, but with severe effects).
224
+ This makes gradient-based optimization methods struggle, not just because of the
225
+ variations in gradient magnitudes but because the effect of long-term dependencies
226
+ is hidden (being exponentially smaller with respect to the sequence length) by the
227
+ effect of short-term dependencies.
228
+ There have been two dominant approaches by which many researchers have tried to reduce the negative
229
+ impacts of this issue. One such approach is to devise a better learning algorithm
230
+ than a simple stochastic gradient descent~\citep[see, e.g.,][]{Bengio-et-al-ICASSP-2013,Pascanu-et-al-ICML2013,Martens+Sutskever-ICML2011},
231
+ for example using the very simple {\em clipped gradient}, by which the norm of the
232
+ gradient vector is clipped, or using second-order methods which may be less sensitive
233
+ to the issue if the second derivatives follow the same growth pattern as the first derivatives
234
+ (which is not guaranteed to be the case).
235
+
236
+ The other approach, in which we are more interested in this paper, is to design
237
+ a more sophisticated activation function than the usual one,
238
+ which consists of an affine transformation followed by a simple element-wise
239
+ nonlinearity, by using gating units. The earliest attempt in this direction
240
+ resulted in an activation function, or a recurrent unit, called a long short-term memory (LSTM)
241
+ unit~\citep{Hochreiter+Schmidhuber-1997}. More recently, another type of
242
+ recurrent unit, to which we refer as a gated recurrent unit (GRU), was proposed
243
+ by \citet{cho2014properties}. RNNs employing either of these recurrent units
244
+ have been shown to perform well in tasks that require capturing long-term
245
+ dependencies. Those tasks include, but are not limited to, speech
246
+ recognition~\citep[see, e.g.,][]{Graves-et-al-ICASSP2013} and machine
247
+ translation~\citep[see, e.g.,][]{Sutskever-et-al-arxiv2014,Bahdanau-et-al-arxiv2014}.
248
+
249
+
250
+
251
+ \section{Gated Recurrent Neural Networks}
252
+
253
+ In this paper, we are interested in evaluating the performance of those recently
254
+ proposed recurrent units (LSTM unit and GRU) on sequence modeling.
255
+ Before the empirical evaluation, we first describe each of those recurrent units
256
+ in this section.
257
+
258
+ \begin{figure}[t]
259
+ \centering
260
+ \begin{minipage}[b]{0.4\textwidth}
261
+ \centering
262
+ \includegraphics[height=0.18\textheight]{lstm.pdf}
263
+ \end{minipage}
264
+ \hspace{5mm}
265
+ \begin{minipage}[b]{0.4\textwidth}
266
+ \centering
267
+ \includegraphics[height=0.16\textheight]{gated_rec.pdf}
268
+ \end{minipage}
269
+
270
+ \vspace{2mm}
271
+ \begin{minipage}{0.4\textwidth}
272
+ \centering
273
+ (a) Long Short-Term Memory
274
+ \end{minipage}
275
+ \hspace{5mm}
276
+ \begin{minipage}{0.4\textwidth}
277
+ \centering
278
+ (b) Gated Recurrent Unit
279
+ \end{minipage}
280
+
281
+ \caption{
282
+ Illustration of (a) LSTM and (b) gated recurrent units. (a) $i$, $f$ and
283
+ $o$ are the input, forget and output gates, respectively. $c$ and
284
+ $\tilde{c}$ denote the memory cell and the new memory cell content. (b)
285
+ $r$ and $z$ are the reset and update gates, and $h$ and $\tilde{h}$ are
286
+ the activation and the candidate activation.
287
+ }
288
+ \label{fig:gated_units}
289
+ \end{figure}
290
+
291
+ \subsection{Long Short-Term Memory Unit}
292
+ \label{sec:lstm}
293
+
294
+ The Long Short-Term Memory (LSTM) unit was initially proposed by
295
+ \citet{Hochreiter+Schmidhuber-1997}. Since then, a number of minor modifications
296
+ to the original LSTM unit have been made. We follow the implementation of LSTM as used in
297
+ \citet{graves2013generating}.
298
+
299
+ Unlike the recurrent unit, which simply computes a weighted sum of the input signal and
300
+ applies a nonlinear function, the $j$-th LSTM unit maintains a memory $c_t^j$ at
301
+ time $t$. The output $h_t^j$, or the activation, of the LSTM unit is then
302
+ \begin{align*}
303
+ h_t^j = o_t^j \tanh \left( c_t^j \right),
304
+ \end{align*}
305
+ where $o_t^j$ is an {\it output gate} that modulates the amount of memory content
306
+ exposure. The output gate is computed by
307
+ \begin{align*}
308
+ o_t^j = \sigma\left(
309
+ W_o \vx_t + U_o \vh_{t-1} + V_o \vc_t
310
+ \right)^j,
311
+ \end{align*}
312
+ where $\sigma$ is a logistic sigmoid function. $V_o$ is a diagonal matrix.
313
+
314
+ The memory cell $c_t^j$ is updated by partially forgetting the existing memory and adding
315
+ a new memory content $\tilde{c}_t^j$:
316
+ \begin{align}
317
+ \label{eq:lstm_memory_up}
318
+ c_t^j = f_t^j c_{t-1}^j + i_t^j \tilde{c}_t^j,
319
+ \end{align}
320
+ where the new memory content is
321
+ \begin{align*}
322
+ \tilde{c}_t^j = \tanh\left( W_c \vx_t + U_c \vh_{t-1}\right)^j.
323
+ \end{align*}
324
+
325
+ The extent to which the existing memory is forgotten is modulated by a {\it
326
+ forget gate} $f_t^j$, and the degree to which the new memory content is added to
327
+ the memory cell is modulated by an {\it input gate} $i_t^j$. Gates are computed by
328
+ \begin{align*}
329
+ f_t^j =& \sigma\left( W_f \vx_t + U_f \vh_{t-1} + V_f \vc_{t-1} \right)^j, \\
330
+ i_t^j =& \sigma\left( W_i \vx_t + U_i \vh_{t-1} + V_i \vc_{t-1} \right)^j.
331
+ \end{align*}
332
+ Note that $V_f$ and $V_i$ are diagonal matrices.
333
+
334
+ Unlike the traditional recurrent unit, which overwrites its content at each
335
+ time-step (see Eq.~\eqref{eq:rnn_trad}), an LSTM unit is able to decide whether
336
+ to keep the existing memory via the introduced gates. Intuitively, if the LSTM
337
+ unit detects an important feature from an input sequence at an early stage, it
338
+ easily carries this information (the existence of the feature) over a long
339
+ distance, hence capturing potential long-distance dependencies.
340
+
341
+ See Fig.~\ref{fig:gated_units}~(a) for the graphical illustration.
342
+
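+ A rough NumPy sketch of one such update (parameter names are ours; biases are omitted, as in the equations above, and the diagonal matrices $V_f$, $V_i$, $V_o$ are stored as vectors):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def sigmoid(a):
+     return 1.0 / (1.0 + np.exp(-a))
+ 
+ def lstm_step(x, h_prev, c_prev, p):
+     # p: dict of parameters W_*, U_* (matrices) and v_* (diagonal V_* as vectors)
+     c_tilde = np.tanh(p['W_c'] @ x + p['U_c'] @ h_prev)                # new memory content
+     f = sigmoid(p['W_f'] @ x + p['U_f'] @ h_prev + p['v_f'] * c_prev)  # forget gate
+     i = sigmoid(p['W_i'] @ x + p['U_i'] @ h_prev + p['v_i'] * c_prev)  # input gate
+     c = f * c_prev + i * c_tilde                                       # memory cell update
+     o = sigmoid(p['W_o'] @ x + p['U_o'] @ h_prev + p['v_o'] * c)       # output gate
+     h = o * np.tanh(c)                                                 # unit activation
+     return h, c
+ \end{verbatim}
+ 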
343
+ \subsection{Gated Recurrent Unit}
344
+ \label{sec:gru}
345
+
346
+ A gated recurrent unit (GRU) was proposed by \cite{cho2014properties} to make
347
+ each recurrent unit adaptively capture dependencies of different time scales.
348
+ Similarly to the LSTM unit, the GRU has gating units that modulate the flow of information
349
+ inside the unit, however, without having separate memory cells.
350
+
351
+ The activation $h_t^j$ of the GRU at time $t$ is a linear interpolation between
352
+ the previous activation $h_{t-1}^j$ and the candidate activation $\tilde{h}_t^j$:
353
+ \begin{align}
354
+ \label{eq:gru_memory_up}
355
+ h_t^j = (1 - z_t^j) h_{t-1}^j + z_t^j \tilde{h}_t^j,
356
+ \end{align}
357
+ where an {\it update gate} $z_t^j$ decides how much the unit updates its activation,
358
+ or content. The update gate is computed by
359
+ \begin{align*}
360
+ z_t^j = \sigma\left( W_z \vx_t + U_z \vh_{t-1} \right)^j.
361
+ \end{align*}
362
+
363
+ This procedure of taking a linear sum between the existing state and the newly
364
+ computed state is similar to the LSTM unit. The GRU, however, does not have any
365
+ mechanism to control the degree to which its state is exposed, but exposes the
366
+ whole state each time.
367
+
368
+ The candidate activation $\tilde{h}_t^j$ is computed similarly to that of the traditional
369
+ recurrent unit (see Eq.~\eqref{eq:rnn_trad}) and as in \citep{Bahdanau-et-al-arxiv2014},
370
+
371
+ \begin{align*}
372
+ \tilde{h}_t^j = \tanh\left( W \vx_t + U \left( \vr_t \odot \vh_{t-1}\right) \right)^j,
373
+ \end{align*}
374
+ where $\vr_t$ is a set of reset gates and $\odot$ is an element-wise
375
+ multiplication.
376
+ \footnote{
377
+ Note that we use the reset gate in a slightly different way from the
378
+ original GRU proposed in \cite{cho2014properties}. Originally, the
379
+ candidate activation was computed by
380
+ \begin{align*}
381
+ \tilde{h}_t^j = \tanh\left( W \vx_t + \vr_t\odot\left( U \vh_{t-1}\right) \right)^j,
382
+ \end{align*}
383
+ where $r_t^j$ is a {\it reset gate}.
384
+ We found in our preliminary experiments that both of these
385
+ formulations performed as well as each other.
386
+ }
387
+ When off ($r_t^j$ close to $0$), the reset gate effectively makes the
388
+ unit act as if it is reading the first symbol of an input sequence, allowing it
389
+ to {\it forget} the previously computed state.
390
+
391
+ The reset gate $r_t^j$ is computed similarly to the update gate:
392
+ \begin{align*}
393
+ r_t^j = \sigma\left( W_r \vx_t + U_r \vh_{t-1} \right)^j.
394
+ \end{align*}
395
+ See Fig.~\ref{fig:gated_units}~(b) for the graphical illustration of the GRU.
396
+
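+ A corresponding NumPy sketch of one GRU update (again with our parameter names and without biases):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def sigmoid(a):
+     return 1.0 / (1.0 + np.exp(-a))
+ 
+ def gru_step(x, h_prev, p):
+     z = sigmoid(p['W_z'] @ x + p['U_z'] @ h_prev)           # update gate
+     r = sigmoid(p['W_r'] @ x + p['U_r'] @ h_prev)           # reset gate
+     h_tilde = np.tanh(p['W'] @ x + p['U'] @ (r * h_prev))   # candidate activation
+     return (1.0 - z) * h_prev + z * h_tilde                 # linear interpolation
+ \end{verbatim}
+ 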
397
+ \subsection{Discussion}
398
+
399
+ It is easy to notice similarities between the LSTM unit and the GRU
400
+ from Fig.~\ref{fig:gated_units}.
401
+
402
+ The most prominent feature shared between these units is the additive component of their
403
+ update from $t$ to $t+1$, which is lacking in the traditional recurrent unit. The traditional recurrent
404
+ unit always replaces the activation, or the content of a unit, with a new value
405
+ computed from the current input and the previous hidden state. On the other
406
+ hand, both the LSTM unit and the GRU keep the existing content and add the new content
407
+ on top of it (see Eqs.~\eqref{eq:lstm_memory_up}~and~\eqref{eq:gru_memory_up}).
408
+
409
+ This additive nature has two advantages. First, it is easy for each unit to
410
+ remember the existence of a specific feature in the input stream for a long series of steps.
411
+ Any important feature, decided by either the forget gate of the LSTM unit or the
412
+ update gate of the GRU, will not be overwritten but be maintained as it is.
413
+
414
+ Second, and perhaps more importantly, this addition effectively creates shortcut
415
+ paths that bypass multiple temporal steps. These shortcuts allow the error to be
416
+ back-propagated easily without too quickly vanishing (if the gating unit is nearly
417
+ saturated at $1$) as a result of passing through multiple, bounded nonlinearities,
418
+ thus reducing the difficulty due to vanishing gradients~\citep{Hochreiter91,Bengio-trnn94}.
419
+
420
+ These two units, however, have a number of differences as well. One feature of the
421
+ LSTM unit that is missing from the GRU is the controlled exposure of the memory
422
+ content. In the LSTM unit, the amount of the memory content that is seen, or
423
+ used by other units in the network, is controlled by the output gate. On
424
+ the other hand, the GRU exposes its full content without any control.
425
+
426
+ Another difference is in the location of the input gate, or the corresponding
427
+ reset gate. The LSTM unit computes the new memory content without any separate
428
+ control of the amount of information flowing from the previous time step.
429
+ Rather, the LSTM unit controls the amount of the new memory content being added
430
+ to the memory cell {\it independently} from the forget gate. On the other hand,
431
+ the GRU controls the information flow from the previous activation when
432
+ computing the new, candidate activation, but does not independently control the
433
+ amount of the candidate activation being added (the control is tied via the
434
+ update gate).
435
+
436
+ From these similarities and differences alone, it is difficult to conclude which
437
+ types of gating units would perform better in general. Although \citet{Bahdanau-et-al-arxiv2014}
438
+ reported that these two units performed comparably to each other according to
439
+ their preliminary experiments on machine translation, it is unclear whether this
440
+ applies as well to tasks other than machine translation. This motivates us to
441
+ conduct a more thorough empirical comparison between the LSTM unit and the GRU in
442
+ this paper.
443
+
444
+
445
+ \section{Experiment Settings}
446
+
447
+ \subsection{Tasks and Datasets}
448
+
449
+ We compare the LSTM unit, GRU and $\tanh$ unit in the task of sequence modeling.
450
+ Sequence modeling aims at learning a probability distribution over sequences, as
451
+ in Eq.~\eqref{eq:seq_model}, by maximizing the log-likelihood of a model given a
452
+ set of training sequences:
453
+ \begin{align*}
454
+ \max_{\TT} \frac{1}{N} \sum_{n=1}^N \sum_{t=1}^{T_n} \log p\left(x_t^n
455
+ \mid x_1^n, \dots, x_{t-1}^n; \TT \right),
456
+ \end{align*}
457
+ where $\TT$ is a set of model parameters. More specifically, we evaluate these
458
+ units in the tasks of polyphonic music modeling and speech signal modeling.
459
+
460
+ For the polyphonic music modeling, we use four polyphonic music
461
+ datasets from \citep{Boulanger-et-al-ICML2012}: Nottingham, JSB
462
+ Chorales, MuseData and Piano-midi. These datasets contain
463
+ sequences in which each symbol is, respectively, a $93$-, $96$-, $105$-, and
464
+ $108$-dimensional binary vector. We use the logistic sigmoid function
465
+ for the output units.
466
+
467
+ We use two internal datasets provided by Ubisoft\footnote{
468
+ \url{http://www.ubi.com/}
469
+ } for speech signal modeling. Each sequence is a one-dimensional
470
+ raw audio signal, and at each time step, we design a recurrent
471
+ neural network to look at $20$ consecutive samples to predict the
472
+ following $10$ consecutive samples. We have used two different
473
+ versions of the dataset: One with sequences of length $500$
474
+ (Ubisoft A) and the other with sequences of length $8,000$ (Ubisoft B).
475
+ Ubisoft A and Ubisoft B have $7,230$ and $800$ sequences, respectively.
476
+ We use a mixture of Gaussians with 20 components as the output layer.
477
+ \footnote{Our implementation is available at \url{https://github.com/jych/librnn.git}}
478
+
479
+ \subsection{Models}
480
+
481
+ For each task, we train three different recurrent neural
482
+ networks, each having either LSTM units (LSTM-RNN, see
483
+ Sec.~\ref{sec:lstm}), GRUs (GRU-RNN, see Sec.~\ref{sec:gru}) or $\tanh$
484
+ units ($\tanh$-RNN, see Eq.~\eqref{eq:rnn_trad}). As the primary objective of
485
+ these experiments is to compare all three units fairly, we choose
486
+ the size of each model so that each model has approximately the
487
+ same number of parameters. We intentionally made the models
488
+ small enough in order to avoid overfitting, which could easily
489
+ distort the comparison. This approach of comparing different
490
+ types of hidden units in neural networks has been done before,
491
+ for instance, by \citet{gulcehre2014learned}. See
492
+ Table~\ref{tab:models} for the details of the model
493
+ sizes.
494
+
495
+ \begin{table}[ht]
496
+ \centering
497
+ \begin{tabular}{c || c | c}
498
+ Unit & \# of Units & \# of Parameters \\
499
+ \hline
500
+ \hline
501
+ \multicolumn{3}{c}{Polyphonic music modeling} \\
502
+ \hline
503
+ LSTM & 36 & $\approx 19.8 \times 10^3$ \\
504
+ GRU & 46 & $\approx 20.2 \times 10^3$ \\
505
+ $\tanh$ & 100 & $\approx 20.1 \times 10^3$ \\
506
+ \hline
507
+ \multicolumn{3}{c}{Speech signal modeling} \\
508
+ \hline
509
+ LSTM & 195 & $\approx 169.1 \times 10^3$ \\
510
+ GRU & 227 & $\approx 168.9 \times 10^3$ \\
511
+ $\tanh$ & 400 & $\approx 168.4 \times 10^3$ \\
512
+ \end{tabular}
513
+ \caption{The sizes of the models tested in the experiments.}
514
+ \label{tab:models}
515
+ \end{table}
516
+
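+ As a sanity check on these sizes, the following sketch (ours) counts the parameters of the recurrent layer alone, including biases and, for the LSTM, the diagonal peephole weights; with $d = 20$ inputs it reproduces the speech-model figures above almost exactly, which suggests that the table counts only the recurrent layer:
+ \begin{verbatim}
+ def recurrent_params(unit, n, d):
+     # n: number of hidden units, d: input dimensionality.
+     # Output layer not counted; peephole conventions may differ.
+     if unit == 'tanh':
+         return n * (d + n + 1)
+     if unit == 'gru':
+         return 3 * n * (d + n + 1)
+     if unit == 'lstm':
+         return 4 * n * (d + n + 1) + 3 * n  # + diagonal V_f, V_i, V_o
+     raise ValueError(unit)
+ \end{verbatim}
+ 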
517
+
518
+ \begin{table}[ht]
519
+ \centering
520
+ \begin{tabular}{ c | c | c || c | c | c }
521
+ \hline
522
+ \multicolumn{3}{c||}{} & $\tanh$ & GRU & LSTM \\
523
+ \hline
524
+ \hline
525
+ \multirow{4}{*}{Music Datasets}
526
+ & Nottingham &\specialcell{train \\ test}
527
+ &\specialcell{3.22 \\ \bf 3.13} &
528
+ \specialcell{2.79 \\ 3.23 } &
529
+ \specialcell{3.08 \\ 3.20 } \\
530
+ \cline{2-6}
531
+ & JSB Chorales &\specialcell{ train \\ test}
532
+ & \specialcell{8.82 \\ 9.10 } &
533
+ \specialcell{ 6.94 \\ \bf 8.54 } &
534
+ \specialcell{ 8.15 \\ 8.67 } \\
535
+ \cline{2-6}
536
+ & MuseData &\specialcell{train \\ test}
537
+ & \specialcell{5.64 \\ 6.23 } &
538
+ \specialcell{ 5.06 \\ \bf 5.99 } &
539
+ \specialcell{ 5.18 \\ 6.23 } \\
540
+ \cline{2-6}
541
+ & Piano-midi &\specialcell{train \\ test}
542
+ & \specialcell{5.64 \\ 9.03 } &
543
+ \specialcell{ 4.93 \\ \bf 8.82 } &
544
+ \specialcell{ 6.49 \\ 9.03 } \\
545
+ \cline{1-6}
546
+ \multirow{4}{*}{Ubisoft Datasets}
547
+ & Ubisoft dataset A &\specialcell{train \\ test}
548
+ &\specialcell{ 6.29 \\ 6.44 } &
549
+ \specialcell{2.31 \\ 3.59 } &
550
+ \specialcell{1.44 \\ \bf 2.70 } \\
551
+ \cline{2-6}
552
+ & Ubisoft dataset B &\specialcell{train \\ test}
553
+ & \specialcell{7.61 \\ 7.62} &
554
+ \specialcell{0.38 \\ \bf 0.88} &
555
+ \specialcell{0.80 \\ 1.26} \\
556
+ \hline
557
+ \end{tabular}
558
+ \caption{The average negative log-probabilities of the
559
+ training and test sets.
560
+ }
561
+ \label{tab:model_perfs}
562
+ \end{table}
563
+
564
+ We train each model with RMSProp~\citep[see,
565
+ e.g.,][]{Hinton-Coursera2012} and use weight noise with standard
566
+ deviation fixed to $0.075$~\citep{graves2011practical}. At every
567
+ update, we rescale the norm of the gradient to $1$ if it is
568
+ larger than $1$~\citep{Pascanu-et-al-ICML2013}, to prevent
569
+ exploding gradients. We select a learning rate (scalar
570
+ multiplier in RMSProp) to maximize the validation performance,
571
+ out of $10$ randomly chosen log-uniform candidates sampled from
572
+ $\mathcal{U}(-12, -6)~$\citep{bergstra2012random}.
573
+ The validation set is used for early stopping as well.
574
+
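+ A small sketch of the gradient rescaling and the learning-rate candidates described above (ours; the base of the log-uniform exponent is not stated, so $e$ is assumed):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def rescale_gradient(grads, threshold=1.0):
+     # Rescale the global gradient norm to `threshold` if it exceeds it.
+     norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
+     if norm > threshold:
+         grads = [g * (threshold / norm) for g in grads]
+     return grads
+ 
+ # 10 log-uniform learning-rate candidates (exponent base assumed to be e).
+ rng = np.random.default_rng(0)
+ learning_rates = np.exp(rng.uniform(-12.0, -6.0, size=10))
+ \end{verbatim}
+ 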
575
+ \section{Results and Analysis}
576
+
577
+ Table~\ref{tab:model_perfs} lists all the results from our
578
+ experiments.
579
+ In the case of the polyphonic music datasets, the GRU-RNN
580
+ outperformed all the others (LSTM-RNN and $\tanh$-RNN) on all the
581
+ datasets except for the Nottingham. However, we can see that on
582
+ these music datasets, all the three models performed closely to
583
+ each other.
584
+
585
+ On the other hand, the RNNs with the gating units (GRU-RNN and
586
+ LSTM-RNN) clearly outperformed the more traditional $\tanh$-RNN
587
+ on both of the Ubisoft datasets. The LSTM-RNN was best with the
588
+ Ubisoft A, and with the Ubisoft B, the GRU-RNN performed best.
589
+
590
+ In Figs.~\ref{fig:music_results}--\ref{fig:ubi_results}, we show
591
+ the learning curves of the best validation runs. In the case of
592
+ the music datasets (Fig.~\ref{fig:music_results}), we see that
593
+ the GRU-RNN makes faster progress in terms of both the number of
594
+ updates and actual CPU time. If we consider the Ubisoft datasets
595
+ (Fig.~\ref{fig:ubi_results}), it is clear that although the
596
+ computational requirement for each update in the $\tanh$-RNN is
597
+ much smaller than that of the other models, it did not make much
597
+ progress per update and eventually stopped making any progress
598
+ at a much worse level.
600
+
601
+ These results clearly indicate the advantages of the gating units
602
+ over the more traditional recurrent units. Convergence is often
603
+ faster, and the final solutions tend to be better. However, our
604
+ results are not conclusive in comparing the LSTM and the GRU,
605
+ which suggests that the choice of the type of gated recurrent
606
+ unit may depend heavily on the dataset and corresponding task.
607
+
608
+
609
+ \begin{figure}[ht]
610
+ \centering
611
+ \begin{minipage}{1\textwidth}
612
+ \centering
613
+ Per epoch
614
+
615
+ \begin{minipage}[b]{0.48\textwidth}
616
+ \centering
617
+ \includegraphics[width=1.\textwidth,clip=true,trim=0
618
+ 28 0 0]{nottingham_results.pdf}
619
+ \end{minipage}
620
+ \hfill
621
+ \begin{minipage}[b]{0.48\textwidth}
622
+ \centering
623
+ \includegraphics[width=1.\textwidth,clip=true,trim=0
624
+ 28 0 0]{muse_results.pdf}
625
+ \end{minipage}
626
+
627
+
628
+ \end{minipage}
629
+
630
+ \vspace{4mm}
631
+ \begin{minipage}{1\textwidth}
632
+ \centering
633
+ Wall Clock Time (seconds)
634
+
635
+ \begin{minipage}[b]{0.48\textwidth}
636
+ \centering
637
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 29 0 0]{nottingham_results_in_total_time.pdf}
638
+ \end{minipage}
639
+ \hfill
640
+ \begin{minipage}[b]{0.48\textwidth}
641
+ \centering
642
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 29 0 0]{muse_results_in_total_time.pdf}
643
+ \end{minipage}
644
+
645
+
646
+ \begin{minipage}{0.48\textwidth}
647
+ \centering
648
+ (a) Nottingham Dataset
649
+ \end{minipage}
650
+ \hfill
651
+ \begin{minipage}{0.48\textwidth}
652
+ \centering
653
+ (b) MuseData Dataset
654
+ \end{minipage}
655
+
656
+
657
+ \end{minipage}
658
+
659
+ \caption{Learning curves for training and validation sets of different
660
+ types of units with respect to (top) the number of iterations and
661
+ (bottom) the wall clock time. The y-axis corresponds to the
662
+ negative log-likelihood of the model, shown in log-scale.}
663
+ \label{fig:music_results}
664
+ \end{figure}
665
+
666
+ \begin{figure}[ht]
667
+ \centering
668
+ \begin{minipage}{1\textwidth}
669
+ \centering
670
+ Per epoch
671
+
672
+ \begin{minipage}[b]{0.48\textwidth}
673
+ \centering
674
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 28 0 0]{ubidata_a_results.pdf}
675
+ \end{minipage}
676
+ \hfill
677
+ \begin{minipage}[b]{0.48\textwidth}
678
+ \centering
679
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 28 0 0]{ubidata_b_results.pdf}
680
+ \end{minipage}
681
+
682
+ \end{minipage}
683
+
684
+ \vspace{4mm}
685
+ \begin{minipage}{1\textwidth}
686
+ \centering
687
+ Wall Clock Time (seconds)
688
+
689
+ \begin{minipage}[b]{0.48\textwidth}
690
+ \centering
691
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 28 0 0]{ubidata_a_results_in_time.pdf}
692
+ \end{minipage}
693
+ \hfill
694
+ \begin{minipage}[b]{0.48\textwidth}
695
+ \centering
696
+ \includegraphics[width=1.\textwidth,clip=true,trim=0 28 0 0]{ubidata_b_results_in_time.pdf}
697
+ \end{minipage}
698
+
699
+ \begin{minipage}{0.48\textwidth}
700
+ \centering
701
+ (a) Ubisoft Dataset A
702
+ \end{minipage}
703
+ \hfill
704
+ \begin{minipage}{0.48\textwidth}
705
+ \centering
706
+ (b) Ubisoft Dataset B
707
+ \end{minipage}
708
+
709
+ \end{minipage}
710
+
711
+ \caption{Learning curves for training and validation sets of different
712
+ types of units with respect to (top) the number of iterations and
713
+ (bottom) the wall clock time. The x-axis is the number of epochs and
714
+ the y-axis corresponds to the negative log-likelihood of the model,
715
+ shown in log scale.
716
+ }
717
+ \label{fig:ubi_results}
718
+ \end{figure}
719
+
720
+
721
+ \section{Conclusion}
722
+
723
+ In this paper we empirically evaluated recurrent neural networks
724
+ (RNNs) with three widely used recurrent units: (1) a traditional
725
+ $\tanh$ unit, (2) a long short-term memory (LSTM) unit and (3) a
726
+ recently proposed gated recurrent unit (GRU). Our evaluation
727
+ focused on the task of sequence modeling on a number of datasets
728
+ including polyphonic music data and raw speech signal data.
729
+
730
+ The evaluation clearly demonstrated the superiority of the gated
731
+ units, both the LSTM unit and the GRU, over the traditional $\tanh$
732
+ unit. This was more evident with the more challenging task of raw
733
+ speech signal modeling. However, we could not draw a concrete
734
+ conclusion on which of the two gating units was better.
735
+
736
+ We consider the experiments in this paper as preliminary. In
737
+ order to better understand how a gated unit helps learning and to
738
+ separate out the contribution of each component of the gating
739
+ units, for instance the individual gates in the LSTM unit or the GRU,
740
+ more thorough experiments will be required in the future.
741
+
742
+
743
+ \section*{Acknowledgments}
744
+
745
+ The authors would like to thank Ubisoft for providing the
746
+ datasets and for the support. The authors would like to thank
747
+ the developers of
748
+ Theano~\citep{bergstra+al:2010-scipy,Bastien-Theano-2012} and
749
+ Pylearn2~\citep{pylearn2_arxiv_2013}. We acknowledge the support
750
+ of the following agencies for research funding and computing
751
+ support: NSERC, Calcul Qu\'{e}bec, Compute Canada, the Canada
752
+ Research Chairs and CIFAR.
753
+
754
+ \newpage
755
+ \bibliography{strings,strings-shorter,ml,aigaion,myref}
756
+ \bibliographystyle{abbrvnat}
757
+
758
+ \end{document}
papers/1412/1412.6856.tex ADDED
@@ -0,0 +1,386 @@
1
+ \documentclass[]{article} \usepackage{iclr2015,times}
2
+ \usepackage{hyperref}
3
+ \usepackage{url}
4
+ \usepackage{epsfig}
5
+ \usepackage{graphicx}
6
+
7
+ \title{Object detectors emerge in Deep Scene CNNs}
8
+
9
+
10
+ \author{
11
+ Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba \\
12
+ Computer Science and Artificial Intelligence Laboratory, MIT\\
13
+ \texttt{\{bolei,khosla,agata,oliva,torralba\}@mit.edu}
14
+ }
15
+
16
+
17
+
18
+ \newcommand{\fix}{\marginpar{FIX}}
19
+ \newcommand{\new}{\marginpar{NEW}}
20
+
21
+ \iclrfinalcopy
22
+
23
+ \iclrconference
24
+
25
+ \begin{document}
26
+
27
+
28
+ \maketitle
29
+ \begin{abstract}
30
+
31
+ With the success of new computational architectures for visual processing, such as convolutional neural networks (CNNs), and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful object detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.
32
+
33
+ \end{abstract}
34
+
35
+
36
+ \section{Introduction}
37
+
38
+ Current deep neural networks achieve remarkable performance at a number of vision tasks surpassing techniques based on hand-crafted features. However, while the structure of the representation in hand-crafted features is often clear and interpretable, in the case of deep networks it remains unclear what the nature of the learned representation is and why it works so well. A convolutional neural network (CNN) trained on ImageNet~\citep{deng2009imagenet} significantly outperforms the best hand crafted features on the ImageNet challenge ~\citep{ILSVRCarxiv14}. But more surprisingly, the same network, when used as a generic feature extractor, is also very successful at other tasks like object detection on the PASCAL VOC dataset~\citep{Everingham10}.
39
+
40
+ A number of works have focused on understanding the representation learned by CNNs. The work by \cite{Zeiler14} introduces a procedure to visualize what activates each unit. Recently, \citet{yosinski2014transferable} used transfer learning to measure how generic/specific the learned features are. \citet{agrawal2014analyzing} and \citet{szegedy2013intriguing} suggest that the CNN for ImageNet learns a distributed code for objects. All of these works use ImageNet, an object-centric dataset, as a training set.
41
+
42
+ When training a CNN to distinguish different object classes, it is unclear what the underlying representation should be. Objects have often been described using part-based representations where parts can be shared across objects, forming a distributed code. However, what those parts should be is unclear. For instance, one would think that the meaningful parts of a face are the mouth, the two eyes, and the nose. However, those are simply functional parts, with words associated with them; the object parts that are important for visual recognition might be different from these semantic parts, making it difficult to evaluate how efficient a representation is. In fact, the strong internal configuration of objects makes the definition of what is a useful part poorly constrained: an algorithm can find different and arbitrary part configurations, all giving similar recognition performance.
43
+
44
+ Learning to classify scenes (i.e., classifying an image as being an office, a restaurant, a street, etc) using the Places dataset \citep{zhou2014learning} gives the opportunity to study the internal representation learned by a CNN on a task other than object recognition.
45
+
46
+
47
+
48
+
49
+ In the case of scenes, the representation is clearer. Scene categories are defined by the objects they contain and, to some extent, by the spatial configuration of those objects. For instance, the important parts of a bedroom are the bed, a side table, a lamp, a cabinet, as well as the walls, floor and ceiling. Objects represent therefore a distributed code for scenes (i.e., object classes are shared across different scene categories). Importantly, in scenes, the spatial configuration of objects, although compact, has a much larger degree of freedom. It is this loose spatial dependency that, we believe, makes scene representation different from most object classes (most object classes do not have a loose interaction between parts). In addition to objects, other feature regularities of scene categories allow for other representations to emerge, such as textures~\citep{walkermalik}, GIST~\citep{oliva2006building}, bag-of-words~\citep{lazebnik2006beyond}, part-based models ~\citep{pandey2011scene}, and
50
+ ObjectBank~\citep{li2010object}. While a CNN has enough flexibility to learn any of those representations, if meaningful objects emerge without supervision inside the inner layers of the CNN, there will be little ambiguity as to which type of representation these networks are learning.
51
+
52
+ The main contribution of this paper is to show that object detection emerges inside a CNN trained to recognize scenes, even more than when trained with ImageNet. This is surprising because our results demonstrate that reliable object detectors are found even though, unlike ImageNet, no supervision is provided for objects. Although object discovery with deep neural networks has been shown before in an unsupervised setting~\citep{le2013building}, here we find that many more objects can be naturally discovered, in a supervised setting tuned to scene classification rather than object classification.
53
+
54
+ Importantly, the emergence of object detectors inside the CNN suggests that a single network can support recognition at several levels of abstraction (e.g., edges, texture, objects, and scenes) without needing multiple outputs or a collection of networks. Whereas other works have shown that one can detect objects by applying the network multiple times in different locations~\citep{girshick2014rich}, or focusing attention~\citep{ruslanNips2014}, or by doing segmentation~\citep{Grangier09,Farabet13}, here we show that the same network can do both object localization and scene recognition in a single forward-pass. Another set of recent works~\citep{oquab2014weakly, bergamo2014self} demonstrate the ability of deep networks trained on object classification to do localization without bounding box supervision. However, unlike our work, these require object-level supervision while we only use scenes.
55
+
56
+
57
+
58
+
59
+
60
+
61
+
62
+
63
+
64
+
65
+
66
+
67
+
68
+
69
+
70
+
71
+ \section{ImageNet-CNN and Places-CNN}
72
+ \label{sec:cnns}
73
+
74
+ Convolutional neural networks have recently obtained astonishing performance on object classification~\citep{krizhevsky2012imagenet} and scene classification~\citep{zhou2014learning}. The ImageNet-CNN from~\citet{Jia13caffe} is trained on 1.3 million images from 1000 object categories of ImageNet (ILSVRC 2012) and achieves a top-1 accuracy of $57.4\%$. With the same network architecture, Places-CNN is trained on 2.4 million images from 205 scene categories of Places Database~\citep{zhou2014learning}, and achieves a top-1 accuracy of $50.0\%$. The network architecture used for both CNNs, as proposed in~\citep{krizhevsky2012imagenet}, is summarized in Table~\ref{network}\footnote{We use \textit{unit} to refer to neurons in the various layers and \textit{features} to refer to their activations.}. Both networks are trained from scratch using only the specified dataset.
75
+
76
+ The deep features from Places-CNN tend to perform better on scene-related recognition tasks compared to the features from ImageNet-CNN. For example, as compared to the Places-CNN that achieves 50.0\% on scene classification, the ImageNet-CNN combined with a linear SVM only achieves $40.8\%$ on the same test set\footnote{Scene recognition demo of Places-CNN is available at \url{http://places.csail.mit.edu/demo.html}. The demo has 77.3\% top-5 recognition rate in the wild estimated from 968 anonymous user responses.} illustrating the importance of having scene-centric data.
77
+
78
+
79
+
80
+
81
+
82
+
83
+ \begin{table}\caption{The parameters of the network architecture used for ImageNet-CNN and Places-CNN.}\label{network}
84
+ \footnotesize
85
+ \centering
86
+ \begin{tabular}{ccccccccccc}
87
+ \hline
88
+ \hline
89
+ Layer &conv1 & pool1 & conv2 & pool2& conv3& conv4& conv5& pool5& fc6 & fc7\\
90
+ \hline
91
+ Units & 96 & 96 & 256 & 256 & 384 & 384 & 256 & 256 & 4096 & 4096\\
92
+ Feature & 55$\times$55& 27$\times$27 & 27$\times$27 & 13$\times$13 & 13$\times$13 & 13$\times$13& 13$\times$13 & 6$\times$6 & 1 & 1 \\
93
+ \hline
94
+ \end{tabular}
95
+ \vspace*{-4mm}
96
+ \end{table}
97
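+
+ For concreteness, the following PyTorch-style sketch reproduces the layer sizes in Table~\ref{network} for a 227$\times$227 input. The kernel sizes, strides and paddings are not listed in the table; they follow the standard architecture of \citep{krizhevsky2012imagenet} and are assumptions here, and the sketch omits details such as local response normalization and grouped convolutions.
+
+ \begin{verbatim}
+ import torch.nn as nn
+
+ # Sketch of the shared architecture (assumed AlexNet-style hyper-parameters).
+ # num_classes = 1000 for ImageNet-CNN, 205 for Places-CNN.
+ def make_cnn(num_classes=205):
+     return nn.Sequential(
+         nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(),       # conv1: 96 x 55x55
+         nn.MaxPool2d(3, stride=2),                       # pool1: 96 x 27x27
+         nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(),     # conv2: 256 x 27x27
+         nn.MaxPool2d(3, stride=2),                       # pool2: 256 x 13x13
+         nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),    # conv3: 384 x 13x13
+         nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),    # conv4: 384 x 13x13
+         nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),    # conv5: 256 x 13x13
+         nn.MaxPool2d(3, stride=2),                       # pool5: 256 x 6x6
+         nn.Flatten(),
+         nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),         # fc6
+         nn.Linear(4096, 4096), nn.ReLU(),                # fc7
+         nn.Linear(4096, num_classes),                    # classifier
+     )
+ \end{verbatim}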
+
98
+
99
+
100
+
101
+
102
+
103
+ To further highlight the difference in representations, we conduct a simple experiment to identify the differences in the type of images preferred at the different layers of each network: we create a set of 200k images with an approximately equal distribution of scene-centric and object-centric images\footnote{100k object-centric images from the test set of ImageNet LSVRC2012 and 108k scene-centric images from the SUN dataset~\citep{SUNDBijcv}.}, and run them through both networks, recording the activations at each layer. For each layer, we obtain the top 100 images that have the largest average activation (sum over all spatial locations for a given layer). Fig.~\ref{fig:preferred} shows the top 3 images for each layer. We observe that the earlier layers such as pool1 and pool2 prefer similar images for both networks while the later layers tend to be more specialized to the specific task of scene or object categorization. For layer pool2, $55\%$ and $47\%$ of the top-100 images belong to the ImageNet dataset
104
+ for ImageNet-CNN and Places-CNN, respectively. Starting from layer conv4, we observe a significant difference in the number of top-100 images
105
+ belonging to each dataset for each network. For fc7, we observe that $78\%$ and $24\%$ of the top-100 images belong to the ImageNet dataset for the ImageNet-CNN and Places-CNN respectively, illustrating a clear bias in each network.
106
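+
+ The ranking step of this experiment amounts to a simple bookkeeping loop; a minimal Python sketch is given below, where \texttt{feature\_maps} is a hypothetical helper that runs one image through a network and returns the activation array of every layer.
+
+ \begin{verbatim}
+ import heapq
+
+ # Rank images by their total activation in each layer and keep the top k.
+ # `feature_maps(img)` is a hypothetical helper returning a dict
+ # {layer_name: activation array of shape (channels, h, w)}.
+ def top_images_per_layer(images, feature_maps, k=100):
+     scores = {}                                 # layer -> [(score, image_id)]
+     for img_id, img in images:
+         for layer, fmap in feature_maps(img).items():
+             score = float(fmap.sum())           # sum over units and locations
+             scores.setdefault(layer, []).append((score, img_id))
+     return {layer: heapq.nlargest(k, pairs)     # top-k images per layer
+             for layer, pairs in scores.items()}
+ \end{verbatim}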
+
107
+
108
+ \begin{figure}
109
+ \begin{center}
110
+ \includegraphics[width=1\textwidth]{figs/preferredImages2-eps-converted-to}
111
+ \end{center}
112
+ \caption{Top 3 images producing the largest activation of units in each layer of ImageNet-CNN (top) and Places-CNN (bottom).}
113
+ \label{fig:preferred}
114
+ \end{figure}
115
+
116
+
117
+ In the following sections, we further investigate the differences between these networks, and focus on better understanding the nature of the representation learned by Places-CNN when doing scene classification, in order to shed light on the source of its strong performance.
118
+
119
+
120
+
121
+
122
+
123
+
124
+
125
+
126
+
127
+
128
+
129
+
130
+
131
+
132
+
133
+
134
+
135
+
136
+
137
+
138
+
139
+
140
+ \section{Uncovering the CNN representation}
141
+
142
+ The performance of scene recognition using Places-CNN is quite impressive given the difficulty of the task. In this section, our goal is to understand the nature of the representation that the network is learning.
143
+
144
+ \subsection{Simplifying the input images}
145
+
146
+
147
+
148
+ Simplifying images is a well known strategy to test human recognition. For example, one can remove information from the image to test if it is diagnostic or not of a particular object or scene (for a review see~\citet{biederman95}). A similar procedure was also used by ~\citet{tanaka93} to understand the receptive fields of complex cells in the inferior temporal cortex (IT).
149
+
150
+ Inspired by these approaches, our idea is the following: given an image that is correctly classified by the network, we want to simplify this image such that it keeps as little visual information as possible while still having a high classification score for the same category. This simplified image (named minimal image representation) will allow us to highlight the elements that lead to the high classification score. In order to do this, we manipulate images in the gradient space as typically done in computer graphics~\citep{Perez03}. We investigate two different approaches described below.
151
+
152
+ In the first approach, given an image, we create a segmentation of edges and regions and remove segments from the image iteratively. At each iteration we remove the segment that produces the smallest decrease of the correct classification score and we do this until the image is incorrectly classified. At the end, we get a representation of the original image that contains, approximately, the minimal amount of information needed by the network to correctly recognize the scene category. In Fig.~\ref{fig:simplified} we show some examples of these minimal image representations. Notice that objects seem to contribute important information for the network to recognize the scene. For instance, in the case of bedrooms these minimal image representations usually contain the region of the bed, or in the art gallery category, the regions of the paintings on the walls.
153
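+
+ A greedy sketch of this first procedure is shown below; \texttt{remove}, \texttt{score} and \texttt{predict} are hypothetical helpers standing in for gradient-domain segment removal, the network's score for the target category, and the network's top-1 prediction.
+
+ \begin{verbatim}
+ # Greedy construction of a minimal image representation (sketch).
+ def minimal_image(image, segments, target, score, remove, predict):
+     current, remaining = image, list(segments)
+     while remaining:
+         # Which remaining segment hurts the target score least if removed?
+         best = max(remaining, key=lambda s: score(remove(current, s), target))
+         trial = remove(current, best)
+         if predict(trial) != target:   # stop just before the label flips
+             return current
+         current = trial
+         remaining.remove(best)
+     return current
+ \end{verbatim}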
+
154
+ \begin{figure}
155
+ \begin{center}
156
+ \includegraphics[width=1\textwidth]{figs/visualizations-eps-converted-to}
157
+ \end{center}
158
+ \vspace{-6mm}
159
+ \caption{Each pair of images shows the original image (left) and a simplified image (right) that gets classified by the Places-CNN as the same scene category as the original image. From top to bottom, the four rows show different scene categories: bedroom, auditorium, art gallery, and dining room.}
160
+ \label{fig:simplified}
161
+ \end{figure}
162
+
163
+ Based on the previous results, we hypothesized that for the Places-CNN, some objects were crucial for recognizing scenes. This inspired our second approach: we generate the minimal image representations using the fully annotated image set of SUN Database~\citep{SUNDBijcv} (see section~\ref {sec_what_object_classes} for details on this dataset) instead of performing automatic segmentation. We follow the same procedure as the first approach using the ground-truth object segments provided in the database.
164
+
165
+ This led to some interesting observations: for bedrooms, the minimal representations retained the bed in $87\%$ of the cases. Other objects kept in bedrooms were wall ($28\%$) and window ($21\%$). For art gallery the minimal image representations contained paintings ($81\%$) and pictures ($58\%$); in amusement parks, carousel ($75\%$), ride ($64\%$), and roller coaster ($50\%$); in bookstore, bookcase ($96\%$), books ($68\%$), and shelves ($67\%$). These results suggest that object detection is an important part of the representation built by the network to obtain discriminative information for scene classification.
166
+
167
+
168
+
169
+
170
+
171
+
172
+ \subsection{Visualizing the receptive fields of units and their activation patterns}
173
+ \label{sec:viz}
174
+
175
+ \begin{figure}
176
+ \begin{center}
177
+ \includegraphics[width=1\linewidth]{figs/pipeline_receptivefield.pdf}
178
+ \end{center}
179
+ \vspace{-6mm}
180
+ \caption{The pipeline for estimating the RF of each unit. Each sliding-window stimulus contains a small randomized patch (example indicated by the red arrow) at a different spatial location. By comparing the activation response of the sliding-window stimuli with the activation response of the original image, we obtain a discrepancy map for each image (middle top). By summing up the calibrated discrepancy maps (middle bottom) for the top ranked images, we obtain the actual RF of that unit (right).}
181
+ \label{pipeline_rf}
182
+ \end{figure}
183
+
184
+ In this section, we investigate the shape and size of the receptive fields (RFs) of the various units in the CNNs. While theoretical RF sizes can be computed given the network architecture~\citep{LongNIPS14}, we are interested in the actual, or \textit{empirical} size of the RFs. We expect the empirical RFs to be better localized and more representative of the information they capture than the theoretical ones, allowing us to better understand what is learned by each unit of the CNN.
185
+
186
+
187
+
188
+ Thus, we propose a data-driven approach to estimate the learned RF of each unit in each layer. It is simpler than the deconvolutional network visualization method~\citep{Zeiler14} and can be easily extended to visualize any learned CNNs\footnote{More visualizations are available at \url{http://places.csail.mit.edu/visualization}}.
189
+
190
+
191
+
192
+ The procedure for estimating a given unit's RF, as illustrated in Fig.~\ref{pipeline_rf}, is as follows. As input, we use an image set of 200k images with a roughly equal distribution of scenes and objects (similar to Sec.~\ref{sec:cnns}). Then, we select the top $K$ images with the highest activations for the given unit.
193
+
194
+ For each of the $K$ images, we now want to identify exactly which regions of the image lead to the high unit activations. To do this, we replicate each image many times with small random occluders (image patches of size 11$\times$11) at different locations in the image. Specifically, we generate occluders in a dense grid with a stride of 3. This results in about 5000 occluded images per original image. We now feed all the occluded images into the same network and record the change in activation as compared to using the original image. If there is a large discrepancy, we know that the given patch is important and vice versa. This allows us to build a discrepancy map for each image.
195
+
196
+ Finally, to consolidate the information from the $K$ images, we center the discrepancy map around the spatial location of the unit that caused the maximum activation for the given image. Then we average the re-centered discrepancy maps to generate the final RF.
197
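+
+ The discrepancy-map computation can be summarized by the Python sketch below; \texttt{unit\_activation} is a hypothetical helper returning the studied unit's activation for one image, and in practice the occluded copies would be batched through the network rather than evaluated one at a time.
+
+ \begin{verbatim}
+ import numpy as np
+
+ # Occlusion-based discrepancy map for one unit and one image (sketch).
+ def discrepancy_map(image, unit_activation, patch=11, stride=3, seed=0):
+     rng = np.random.default_rng(seed)
+     h, w, c = image.shape
+     ref = unit_activation(image)             # activation on the clean image
+     dmap = np.zeros((h, w))
+     for y in range(0, h - patch + 1, stride):
+         for x in range(0, w - patch + 1, stride):
+             occluded = image.copy()
+             occluded[y:y+patch, x:x+patch] = rng.uniform(0, 255, (patch, patch, c))
+             dmap[y:y+patch, x:x+patch] += abs(ref - unit_activation(occluded))
+     # re-center at the max-activation location, then average over the top-K images
+     return dmap
+ \end{verbatim}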
+
198
+
199
+
200
+
201
+ In Fig.~\ref{plot_rf} we visualize the RFs for units from 4 different layers of the Places-CNN and ImageNet-CNN, along with their highest scoring activation regions inside the RF. We observe that, as the layers go deeper, the RF size gradually increases and the activation regions become more semantically meaningful. Further, as shown in Fig.~\ref{rf_segmentation}, we use the RFs to segment images using the feature maps of different units. Lastly, in Table~\ref{receptivefield}, we compare the theoretical and empirical size of the RFs at different layers. As expected, the actual size of the RF is much smaller than the theoretical size, especially in the later layers. Overall, this analysis allows us to better understand each unit by focusing precisely on the important regions of each image.
202
+
203
+ \begin{figure}
204
+ \begin{center}
205
+ \includegraphics[width=1\linewidth]{figs/plot_receptivefield-eps-converted-to}
206
+ \end{center}
207
+ \caption{The RFs of 3 units of pool1, pool2, conv4, and pool5 layers respectively for ImageNet- and Places-CNNs, along with the image patches corresponding to the top activation regions inside the RFs.}\label{plot_rf}
208
+ \end{figure}
209
+
210
+
211
+
212
+
213
+ \begin{table}\caption{Comparison of the theoretical and empirical sizes of the RFs for Places-CNN and ImageNet-CNN at different layers. Note that the RFs are assumed to be square shaped, and the sizes reported below are the length of each side of this square, in pixels.}\label{receptivefield}
214
+ \centering
215
+ \begin{tabular}{lccccc}
216
+ \hline
217
+ \hline
218
+ &pool1 & pool2 & conv3 & conv4 & pool5 \\
219
+ \hline
220
+ Theoretical size & 19 & 67 & 99 & 131 & 195\\
221
+ Places-CNN actual size & 17.8$\pm$ 1.6 & 37.4$\pm$ 5.9 & 52.1$\pm$10.6 & 60.0$\pm$ 13.7 & 72.0$\pm$ 20.0 \\
222
+ ImageNet-CNN actual size & 17.9$\pm$ 1.6 & 36.7$\pm$ 5.4 & 51.1$\pm$9.9 & 60.4$\pm$ 16.0 & 70.3$\pm$ 21.6\\
223
+ \hline
224
+ \end{tabular}
225
+ \end{table}
226
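+
+ The theoretical sizes in the table follow from composing the kernel size and stride of each layer; the short sketch below reproduces them under the same assumed AlexNet-style hyper-parameters as before.
+
+ \begin{verbatim}
+ # Theoretical RF size: r <- r + (k - 1) * j and j <- j * s per layer,
+ # starting from r = j = 1 at the input (assumed kernel/stride values).
+ LAYERS = [("conv1", 11, 4), ("pool1", 3, 2), ("conv2", 5, 1), ("pool2", 3, 2),
+           ("conv3", 3, 1), ("conv4", 3, 1), ("conv5", 3, 1), ("pool5", 3, 2)]
+
+ def theoretical_rf(layers=LAYERS):
+     r, j, sizes = 1, 1, {}
+     for name, k, s in layers:
+         r += (k - 1) * j          # RF grows by the kernel extent at this depth
+         j *= s                    # cumulative stride in input pixels
+         sizes[name] = r
+     return sizes                  # pool1: 19, pool2: 67, conv4: 131, pool5: 195
+ \end{verbatim}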
+
227
+ \begin{figure}
228
+ \begin{center}
229
+ \includegraphics[width=1\textwidth]{figs/layers_segments2-eps-converted-to}
230
+ \end{center}
231
+ \vspace{-5mm}
232
+ \caption{Segmentation based on RFs. Each row shows the 4 most confident images for some unit.}\label{rf_segmentation}
233
+ \end{figure}
234
+
235
+ \begin{figure}
236
+ \begin{center}
237
+ \includegraphics[width=1\textwidth]{figs/AMTexperiment-eps-converted-to}
238
+ \end{center}
239
+ \vspace{-5mm}
240
+ \caption{AMT interface for unit concept annotation. There are three tasks in each annotation.}
241
+ \label{fig:AMT}
242
+ \end{figure}
243
+
244
+ \subsection{Identifying the semantics of internal units}
245
+
246
+ In Section~\ref{sec:viz}, we found the exact RFs of units and observed that activation regions tended to become more semantically meaningful with increasing depth of layers. In this section, our goal is to understand and quantify the precise semantics learned by each unit.
247
+
248
+ In order to do this, we ask workers on Amazon Mechanical Turk (AMT) to identify the common theme or \textit{concept} that exists between the top scoring segmentations for each unit. We expect the tags provided by naive annotators to reduce biases. Workers provide tags without being constrained to a dictionary of terms that could bias or limit the identification of interesting properties.
249
+
250
+ Specifically, we divide the task into three main steps as shown in Fig.~\ref{fig:AMT}. We show workers the top 60 segmented images that most strongly activate one unit and we ask them to (1) identify the concept, or semantic theme, given by the set of 60 images (e.g., car, blue, vertical lines, etc.), (2) mark the set of images that do not fall into this theme, and (3) categorize the concept provided in (1) into one of 6 semantic groups ranging from low-level to high-level: simple elements and colors (e.g., horizontal lines, blue), materials and textures (e.g., wood, square grid), regions and surfaces (e.g., road, grass), object parts (e.g., head, leg), objects (e.g., car, person), and scenes (e.g., kitchen, corridor). This allows us to obtain both the semantic information for each unit, as well as the level of abstraction provided by the labeled concept.
251
+
252
+ To ensure high quality of annotation, we included 3 images with high negative scores that the workers were required to identify as negatives in order to submit the task. Fig.~\ref{fig:semantics} shows some example annotations by workers. For each unit, we measure its precision as the percentage of images that were selected as fitting the labeled concept. In Fig.~\ref{fig:distributionsemantics}.(a) we plot the average precision for ImageNet-CNN and Places-CNN for each layer.
253
+
254
+ In Fig.~\ref{fig:distributionsemantics}.(b-c) we plot the distribution of concept categories for ImageNet-CNN and Places-CNN at each layer. For this plot we consider only units that had a precision above $75\%$ as provided by the AMT workers. Around $60\%$ of the units in each layer were above that threshold. For both networks, the early layers (pool1, pool2) have more units responsive to simple elements and colors, while the later layers (conv4, pool5) have more units with high-level semantics (responsive more to objects and scenes). Furthermore, we observe that conv4 and pool5 units in Places-CNN have higher ratios of high-level semantics as compared to the units in ImageNet-CNN.
255
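+
+ The precision numbers above come from a simple count over the worker annotations; a sketch of this bookkeeping, with a hypothetical mapping from unit id to the set of rejected images, is:
+
+ \begin{verbatim}
+ # Per-unit concept precision: fraction of the 60 images that workers
+ # accepted as matching the labeled concept. `rejected` is a hypothetical
+ # dict {unit_id: set of image indices marked as not fitting the concept}.
+ def unit_precision(rejected, n_images=60):
+     return {u: 1.0 - len(bad) / n_images for u, bad in rejected.items()}
+
+ def reliable_units(rejected, threshold=0.75):
+     return [u for u, p in unit_precision(rejected).items() if p > threshold]
+ \end{verbatim}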
+
256
+ Fig.~\ref{fig:distributionsemanticslayers} provides a different visualization of the same data as in Fig.~\ref{fig:distributionsemantics}.(b-c). This plot better reveals how different levels of abstraction emerge in different layers of both networks. The vertical axis indicates the percentage of units in each layer assigned to each concept category. ImageNet-CNN has more units tuned to simple elements and colors than Places-CNN, while Places-CNN has more units tuned to objects and scenes. ImageNet-CNN has more units tuned to object parts (with the maximum around conv4). It is interesting to note that Places-CNN discovers more objects than ImageNet-CNN despite having no object-level supervision.
257
+
258
+
259
+
260
+
261
+
262
+
263
+
264
+
265
+
266
+
267
+ \begin{figure}
268
+ \begin{center}
269
+ \includegraphics[width=1\textwidth]{figs/examplesAMT-eps-converted-to}
270
+ \includegraphics[width=1\textwidth]{figs/examplesAMTwrong-eps-converted-to}
271
+ \end{center}
272
+ \caption{Examples of unit annotations provided by AMT workers for 6 units from pool5 in Places-CNN. For each unit the figure shows the label provided by the worker, the type of label, the images selected as corresponding to the concept (green box) and the images marked as incorrect (red box). The precision is the percentage of correct images. The top three units have high performance while the bottom three have low performance ($<75\%$).}
273
+ \label{fig:semantics}
274
+ \end{figure}
275
+
276
+
277
+
278
+
279
+
280
+
281
+
282
+ \begin{figure}[t]
283
+ \begin{center}
284
+ \includegraphics[width=1\textwidth]{figs/summary_bars_revised-eps-converted-to}
285
+ \end{center}
286
+ \vspace*{-4mm}
287
+ \caption{(a) Average precision of all the units in each layer for both networks as reported by AMT workers. (b) and (c) show the number of units providing different levels of semantics for ImageNet-CNN and Places-CNN respectively.}
288
+ \label{fig:distributionsemantics}
289
+ \end{figure}
290
+
291
+ \begin{figure}[t]
292
+ \begin{center}
293
+ \includegraphics[width=1\textwidth]{figs/semanticsLayers_revised-eps-converted-to}
294
+ \end{center}
295
+ \vspace*{-4mm}
296
+ \caption{Distribution of semantic types found for all the units in both networks. From left to right, each plot corresponds to the distribution of units in each layer assigned to simple elements or colors, textures or materials, regions or surfaces, object parts, objects, and scenes. The vertical axis is the percentage of units with each layer assigned to each type of concept.}
297
+ \label{fig:distributionsemanticslayers}
298
+ \end{figure}
299
+
300
+ \section{Emergence of objects as the internal representation}
301
+
302
+ As shown before, a large number of units in pool5 are devoted to detecting objects and scene-regions (Fig.~\ref{fig:distributionsemanticslayers}). But what categories are found? Is each category mapped to a single unit or are there multiple units for each object class? Can we actually use this information to segment a scene?
303
+
304
+ \subsection{What object classes emerge?}
305
+ \label{sec_what_object_classes}
306
+
307
+ To answer the question of why certain objects emerge from pool5, we tested ImageNet-CNN and Places-CNN on fully annotated images from the SUN database \citep{SUNDBijcv}. The SUN database contains 8220 fully annotated images from the same 205 place categories used to train Places-CNN. There are no duplicate images between SUN and Places. We use SUN instead of COCO~\citep{mscoco} as we need dense object annotations to study what the most informative object classes for scene categorization are, and what the natural object frequencies in scene images are. For this study, we manually mapped the tags given by AMT workers to the SUN categories.
308
+
309
+ Fig.~\ref{fig:comparison}(a) shows the distribution of objects found in pool5 of Places-CNN. Some objects are detected by several units. For instance, there are 15 units that detect buildings. Fig.~\ref{fig:segmentationsSUN} shows some units from the Places-CNN grouped by the type of object class they seem to be detecting. Each row shows the top five images for a particular unit that produce the strongest activations. The segmentation shows the regions of the image for which the unit is above a certain threshold. Each unit seems to be selective to a particular appearance of the object. For instance, there are 6 units that detect lamps, each unit detecting a particular type of lamp providing finer-grained discrimination; there are 9 units selective to people, each one tuned to different scales or people doing different tasks.
310
+
311
+ Fig.~\ref{fig:comparison}(b) shows the distribution of objects found in pool5 of ImageNet-CNN. ImageNet has an abundance of animals among the categories present: in the ImageNet-CNN, out of the 256 units in pool5, there are 15 units devoted to detecting dogs and several more detecting parts of dogs (body, legs, ...). The categories found in pool5 tend to follow the target categories in ImageNet.
312
+
313
+ Why do those objects emerge? One possibility is that the objects that emerge in pool5 correspond to the most frequent ones in the database. Fig.~\ref{fig:taxonomy}(a) shows the sorted distribution of object counts in the SUN database which follows Zipf's law. Fig.~\ref{fig:taxonomy}(b) shows the counts of units found in pool5 for each object class (same sorting as in Fig.~\ref{fig:taxonomy}(a)). The correlation between object frequency in the database and object frequency discovered by the units in pool5 is 0.54. Another possibility is that the objects that emerge are the objects that allow discriminating among scene categories. To measure the set of discriminant objects we used the ground truth in the SUN database to measure the classification performance achieved by each object class for scene classification. Then we count how many times each object class appears as the most informative one. This measures the number of scene categories a particular object class is the most useful for. The counts are shown
314
+ in Fig.~\ref{fig:taxonomy}(c). Note the similarity between Fig.~\ref{fig:taxonomy}(b) and Fig.~\ref{fig:taxonomy}(c). The correlation is 0.84 indicating that the network is automatically identifying the most discriminative object categories to a large extent.
315
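+
+ The two correlations can be reproduced from three count vectors over the same ordering of object classes; the sketch below uses Pearson correlation as an assumption, since the text does not name the correlation measure, and the vector names are hypothetical.
+
+ \begin{verbatim}
+ import numpy as np
+
+ # freq[i]:   how often object class i appears in the SUN annotations
+ # units[i]:  how many pool5 units were annotated as detecting class i
+ # inform[i]: for how many scene categories class i is most informative
+ def count_correlations(freq, units, inform):
+     r_freq = np.corrcoef(freq, units)[0, 1]      # reported as 0.54
+     r_inform = np.corrcoef(inform, units)[0, 1]  # reported as 0.84
+     return r_freq, r_inform
+ \end{verbatim}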
+
316
+ Note that there are 115 units in pool5 of Places-CNN not detecting objects. This could be due to incomplete learning or a complementary texture-based or part-based representation of the scenes. Therefore, although objects seem to be a key part of the representation learned by the network, we cannot rule out other representations being used in combination with objects.
317
+
318
+ \begin{figure}[t]
319
+ \begin{center}
320
+ \includegraphics[width=1\textwidth]{figs/comparison_discovered_objects-eps-converted-to}
321
+ \end{center}
322
+ \vspace*{-8mm}
323
+ \caption{Object counts of CNN units discovering each object class for (a) Places-CNN and (b) ImageNet-CNN.}
324
+ \label{fig:comparison}
325
+ \end{figure}
326
+
327
+ \begin{figure}[t]
328
+ \begin{center}
329
+ \includegraphics[width=1\textwidth]{figs/segmentations_small-eps-converted-to}
330
+ \end{center}
331
+ \vspace*{-6mm}
332
+ \caption{Segmentations using pool5 units from Places-CNN. Many classes are encoded by several units covering different object appearances. Each row shows the 5 most confident images for each unit. The number represents the unit number in pool5.}
333
+ \label{fig:segmentationsSUN}
334
+ \end{figure}
335
+
336
+ \subsection{Object Localization within the inner Layers}
337
+
338
+ Places-CNN is trained to do scene classification using the output of its final logistic regression layer and achieves state-of-the-art performance. From our analysis above, many of the units in the inner layers can perform interpretable object localization. Thus we can use this single Places-CNN, together with the unit annotations, to do both scene recognition and object localization in a single forward-pass. Fig.~\ref{fig:segLayers} shows an example of the output of different layers of the Places-CNN using the tags provided by AMT workers. Bounding boxes are shown around the areas where each unit is activated within its RF above a certain threshold.
339
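+
+ A sketch of this localization step is given below: the unit's feature map is thresholded and each active location is mapped back to image coordinates through the unit's empirical RF. The feature-map stride, RF size and threshold are inputs here, and box clipping to the image bounds is omitted for brevity.
+
+ \begin{verbatim}
+ import numpy as np
+
+ # Turn one annotated unit's activations into tagged boxes (sketch).
+ # fmap: the unit's 2-D activation map; rf: empirical RF size in pixels;
+ # stride: feature-map stride in image pixels; tag: the AMT concept label.
+ def unit_boxes(fmap, tag, rf, stride, thresh):
+     boxes = []
+     for (y, x), a in np.ndenumerate(fmap):
+         if a > thresh:
+             cy, cx = y * stride, x * stride      # approximate image location
+             boxes.append((tag, float(a),
+                           (cx - rf // 2, cy - rf // 2, cx + rf // 2, cy + rf // 2)))
+     return boxes
+ \end{verbatim}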
+
340
+
341
+
342
+ \begin{figure}[t]
343
+ \begin{center}
344
+ \includegraphics[width=1\textwidth]{figs/taxonomy_small3-eps-converted-to}
345
+ \end{center}
346
+ \vspace{-6mm}
347
+ \caption{(a) Object frequency in SUN (only top 50 objects shown), (b) Counts of objects discovered by pool5 in Places-CNN. (c) Frequency of most informative objects for scene classification.}
348
+ \label{fig:taxonomy}
349
+ \end{figure}
350
+
351
+ \begin{figure}[t]
352
+ \begin{center}
353
+ \includegraphics[width=1\textwidth]{figs/detectionResult-eps-converted-to}
354
+ \end{center}
355
+ \vspace{-6mm}
356
+ \caption{Interpretation of a picture by different layers of the Places-CNN using the tags provided by AMT workers. The first shows the final layer output of Places-CNN. The other three show detection results along with the confidence based on the units' activation and the semantic tags.}
357
+ \label{fig:segLayers}
358
+ \end{figure}
359
+
360
+ In Fig.~\ref{fig:segmentationSUN} we provide the segmentation performance of the objects discovered in pool5 using the SUN database. The performance of many units is very high which provides strong evidence that they are indeed detecting those object classes despite being trained for scene classification.
361
+
362
+ \section{Conclusion}
363
+
364
+
365
+
366
+ We find that object detectors emerge as a result of learning to classify scene categories, showing that a single network can support recognition at several levels of abstraction (e.g., edges, textures, objects, and scenes) without needing multiple outputs or networks. While it is common to train a network to do several tasks and to use the final layer as the output, here we show that reliable outputs can be extracted at each layer. As objects are the parts that compose a scene, detectors tuned to the objects that are discriminative between scenes are learned in the inner layers of the network. Note that only objects that are informative for the specific scene recognition task will emerge. Future work should explore which other tasks would allow for other object classes to be learned without the explicit supervision of object labels.
367
+
368
+
369
+
370
+ \subsubsection*{Acknowledgments}
371
+
372
+ This work is supported by the National Science Foundation under Grant No. 1016862 to A.O, ONR MURI N000141010933 to A.T, as well as MIT Big Data Initiative at CSAIL, Google and Xerox Awards, a hardware donation from NVIDIA Corporation, to A.O and A.T.
373
+
374
+ \begin{figure}
375
+ \begin{center}
376
+ \includegraphics[width=1\textwidth]{figs/segmentationSUN-eps-converted-to}
377
+ \end{center}
378
+ \vspace{-6mm}
379
+ \caption{(a) Segmentation of images from the SUN database using pool5 of Places-CNN (J = Jaccard segmentation index, AP = average precision-recall.) (b) Precision-recall curves for some discovered objects. (c) Histogram of AP for all discovered object classes.}
380
+ \label{fig:segmentationSUN}
381
+ \end{figure}
382
+
383
+ \bibliography{egbib}
384
+ \bibliographystyle{iclr2015}
385
+
386
+ \end{document}
papers/1412/1412.6980.tex ADDED
@@ -0,0 +1,14 @@
1
+ \documentclass[a4paper]{article}
2
+ \pdfoutput=1
3
+ \usepackage{hyperref}
4
+ \hypersetup{
5
+ pdfinfo={
6
+ Title={Adam: A Method for Stochastic Optimization},
7
+ Author={Diederik P. Kingma, Jimmy Lei Ba}
8
+ }
9
+ }
10
+
11
+ \usepackage{pdfpages}
12
+ \begin{document}
13
+ \includepdf[pages=1-last]{0_adam_main.pdf}
14
+ \end{document}
papers/1501/1501.02530.tex ADDED
@@ -0,0 +1,767 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{cvpr}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+
10
+ \usepackage{caption}
11
+ \usepackage{subcaption}
12
+
13
+ \usepackage{booktabs} \usepackage{tabularx} \usepackage{url}
14
+
15
+
16
+
17
+ \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
18
+
19
+ \makeatletter
20
+ \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
21
+ \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
22
+ \def\eg{e.g\onedot} \def\Eg{E.g\onedot}
23
+ \def\ie{i.e\onedot} \def\Ie{I.e\onedot}
24
+ \def\cf{cf\onedot} \def\Cf{Cf\onedot}
25
+ \def\etc{etc\onedot} \def\vs{vs\onedot}
26
+ \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot}
27
+ \def\etal{\textit{et~al\onedot}} \def\iid{i.i.d\onedot}
28
+ \def\Fig{Fig\onedot} \def\Eqn{Eqn\onedot} \def\Sec{Sec\onedot}
29
+ \def\vs{vs\onedot}
30
+ \makeatother
31
+
32
+ \DeclareRobustCommand{\figref}[1]{Figure~\ref{#1}}
33
+ \DeclareRobustCommand{\figsref}[1]{Figures~\ref{#1}}
34
+
35
+ \DeclareRobustCommand{\Figref}[1]{Figure~\ref{#1}}
36
+ \DeclareRobustCommand{\Figsref}[1]{Figures~\ref{#1}}
37
+
38
+ \DeclareRobustCommand{\Secref}[1]{Section~\ref{#1}}
39
+ \DeclareRobustCommand{\secref}[1]{Section~\ref{#1}}
40
+
41
+ \DeclareRobustCommand{\Secsref}[1]{Sections~\ref{#1}}
42
+ \DeclareRobustCommand{\secsref}[1]{Sections~\ref{#1}}
43
+
44
+ \DeclareRobustCommand{\Tableref}[1]{Table~\ref{#1}}
45
+ \DeclareRobustCommand{\tableref}[1]{Table~\ref{#1}}
46
+
47
+ \DeclareRobustCommand{\Tablesref}[1]{Tables~\ref{#1}}
48
+ \DeclareRobustCommand{\tablesref}[1]{Tables~\ref{#1}}
49
+
50
+ \DeclareRobustCommand{\eqnref}[1]{Equation~(\ref{#1})}
51
+ \DeclareRobustCommand{\Eqnref}[1]{Equation~(\ref{#1})}
52
+
53
+ \DeclareRobustCommand{\eqnsref}[1]{Equations~(\ref{#1})}
54
+ \DeclareRobustCommand{\Eqnsref}[1]{Equations~(\ref{#1})}
55
+
56
+ \DeclareRobustCommand{\chapref}[1]{Chapter~\ref{#1}}
57
+ \DeclareRobustCommand{\Chapref}[1]{Chapter~\ref{#1}}
58
+
59
+ \DeclareRobustCommand{\chapsref}[1]{Chapters~\ref{#1}}
60
+ \DeclareRobustCommand{\Chapsref}[1]{Chapters~\ref{#1}}
61
+
62
+
63
+
64
+
65
+
66
+ \setlength{\abovecaptionskip}{3mm}
67
+ \setlength{\belowcaptionskip}{3mm}
68
+ \setlength{\textfloatsep}{5mm}
69
+
70
+ \hyphenation{po-si-tive}
71
+ \hyphenation{Loe-wen-platz}
72
+ \newcommand{\nMovies}{72\xspace}
73
+ \newcommand{\nSentences}{50,000\xspace}
74
+ \newcommand{\nMoviesAD}{46\xspace}
75
+ \newcommand{\nMoviesScript}{26\xspace}
76
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
77
+
78
+
79
+
80
+
81
+ \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
82
+
83
+ \makeatletter
84
+ \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
85
+ \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
86
+ \def\eg{e.g\onedot} \def\Eg{E.g\onedot}
87
+ \def\ie{i.e\onedot} \def\Ie{I.e\onedot}
88
+ \def\cf{cf\onedot} \def\Cf{Cf\onedot}
89
+ \def\etc{etc\onedot} \def\vs{vs\onedot}
90
+ \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot}
91
+ \def\etal{\textit{et~al\onedot}} \def\iid{i.i.d\onedot}
92
+ \def\Fig{Fig\onedot} \def\Eqn{Eqn\onedot} \def\Sec{Sec\onedot}
93
+ \def\vs{vs\onedot}
94
+ \makeatother
95
+
96
+ \DeclareRobustCommand{\figref}[1]{Figure~\ref{#1}}
97
+ \DeclareRobustCommand{\figsref}[1]{Figures~\ref{#1}}
98
+
99
+ \DeclareRobustCommand{\Figref}[1]{Figure~\ref{#1}}
100
+ \DeclareRobustCommand{\Figsref}[1]{Figures~\ref{#1}}
101
+
102
+ \DeclareRobustCommand{\Secref}[1]{Section~\ref{#1}}
103
+ \DeclareRobustCommand{\secref}[1]{Section~\ref{#1}}
104
+
105
+ \DeclareRobustCommand{\Secsref}[1]{Sections~\ref{#1}}
106
+ \DeclareRobustCommand{\secsref}[1]{Sections~\ref{#1}}
107
+
108
+ \DeclareRobustCommand{\Tableref}[1]{Table~\ref{#1}}
109
+ \DeclareRobustCommand{\tableref}[1]{Table~\ref{#1}}
110
+
111
+ \DeclareRobustCommand{\Tablesref}[1]{Tables~\ref{#1}}
112
+ \DeclareRobustCommand{\tablesref}[1]{Tables~\ref{#1}}
113
+
114
+ \DeclareRobustCommand{\eqnref}[1]{Equation~(\ref{#1})}
115
+ \DeclareRobustCommand{\Eqnref}[1]{Equation~(\ref{#1})}
116
+
117
+ \DeclareRobustCommand{\eqnsref}[1]{Equations~(\ref{#1})}
118
+ \DeclareRobustCommand{\Eqnsref}[1]{Equations~(\ref{#1})}
119
+
120
+ \DeclareRobustCommand{\chapref}[1]{Chapter~\ref{#1}}
121
+ \DeclareRobustCommand{\Chapref}[1]{Chapter~\ref{#1}}
122
+
123
+ \DeclareRobustCommand{\chapsref}[1]{Chapters~\ref{#1}}
124
+ \DeclareRobustCommand{\Chapsref}[1]{Chapters~\ref{#1}}
125
+
126
+
127
+
128
+
129
+
130
+ \setlength{\abovecaptionskip}{3mm}
131
+ \setlength{\belowcaptionskip}{3mm}
132
+ \setlength{\textfloatsep}{5mm}
133
+
134
+ \hyphenation{po-si-tive}
135
+ \hyphenation{Loe-wen-platz}
136
+ \newcommand{\scream}[1]{\textbf{*** #1! ***}}
137
+ \newcommand{\fixme}[1]{\textcolor{red}{\textbf{FiXme}#1}\xspace}
138
+ \newcommand{\hobs}{\textrm{h}_\textrm{obs}}
139
+ \newcommand{\cpad}[1]{@{\hspace{#1mm}}}
140
+ \newcommand{\alg}[1]{\textsc{#1}}
141
+
142
+ \newcommand{\fnrot}[2]{\scriptsize\rotatebox{90}{\begin{minipage}{#1}\flushleft #2\end{minipage}}}
143
+ \newcommand{\chmrk}{{\centering\ding{51}}}
144
+ \newcommand{\eqn}[1]{\begin{eqnarray}\vspace{-1mm}#1\vspace{-1mm}\end{eqnarray}}
145
+ \newcommand{\eqns}[1]{\begin{eqnarray*}\vspace{-1mm}#1\vspace{-1mm}\end{eqnarray*}}
146
+
147
+
148
+ \newcommand{\todo}[1]{\textcolor{red}{ToDo: #1}}
149
+ \newcommand{\myparagraph}[1]{\vspace{-0.25cm} \paragraph{\textbf{\emph{#1}}}}
150
+
151
+ \newcommand{\marcus}[1]{\textcolor{green}{Marcus: #1}}
152
+ \newcommand{\anja}[1]{\textcolor{red}{Anja: #1}}
153
+ \newcommand{\sr}[1]{\textsc{#1}}
154
+ \newcommand{\invisible}[1]{}
155
+
156
+ \newcommand{\figvspace}{\vspace{-.5cm}}
157
+ \newcommand{\secvspace}{\vspace{-.2cm}}
158
+ \newcommand{\subsecvspace}{\vspace{-.2cm}}
159
+ \newcommand{\MpiNew}{MPII Cooking~2\xspace}
160
+ \newcommand{\MpiMultiLevel}{TACoS Multi-Level\xspace}
161
+
162
+ \graphicspath{{./fig/}{./fig/plots/}} \newcommand{\redtext}[1]{\emph{\textcolor{red}{#1}}}
163
+
164
+ \cvprfinalcopy
165
+
166
+ \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
167
+
168
+
169
+
170
+ \ifcvprfinal\pagestyle{empty}\fi
171
+ \begin{document}
172
+
173
+ \title{A Dataset for Movie Description}
174
+ \newcommand{\authSpace}{&}
175
+ \author{\begin{tabular}{cccc}
176
+ Anna Rohrbach$^{1}$ \authSpace Marcus Rohrbach$^{2}$ \authSpace Niket Tandon$^{1}$ \authSpace Bernt Schiele$^{1}$\\
177
+ \end{tabular}\\
178
+ \begin{tabular}{cccc}
179
+ \multicolumn{4}{c}{$^{1}$Max Planck Institute for Informatics, Saarbr{\"u}cken, Germany}\\
180
+ \multicolumn{4}{c}{$^{2}$UC Berkeley EECS and ICSI, Berkeley, CA, United States}\\
181
+ \end{tabular}}
182
+
183
+ \maketitle
184
+ \thispagestyle{plain}
185
+ \pagestyle{plain}
186
+
187
+ \begin{abstract}
188
+ Descriptive video service (DVS) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed DVS that is temporally aligned to full-length HD movies. In addition, we collected the aligned movie scripts which have been used in prior work and compare the two different sources of descriptions. In total, the \emph{Movie Description} dataset contains a parallel corpus of over 54,000 sentences and video snippets from 72 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing DVS to scripts, we find that DVS is far more visual and describes precisely what \emph{is shown} rather than what \emph{should happen} according to the scripts created prior to movie production.
189
+ \end{abstract}
190
+
191
+
192
+
193
+ \section{Introduction\invisible{ - 1.5 pages}}
194
+
195
+ Audio descriptions (DVS - descriptive video service) make movies accessible to millions of blind or visually impaired people\footnote{\label{fn:blind} In this work we refer for simplicity to ``the blind'' to account for all blind and visually impaired people who benefit from DVS, while acknowledging the variety of visual impairments and that DVS is not accessible to all.}. DVS provides an audio narrative of the ``most important aspects of the visual information'' \cite{salway07corpus}, namely actions, gestures, scenes, and character appearance, as can be seen in Figures \ref{fig:teaser1} and \ref{fig:teaser}. DVS is prepared by trained describers and read by professional narrators. More and more movies are audio transcribed, but it may take up to 60 person-hours to describe a 2-hour movie \cite{lakritz06tr}, with the result that only a small subset of movies and TV programs is available to the blind. Consequently, automating this would be a noble task.
196
+
197
+ In addition to the benefits for the blind, generating descriptions for video is an interesting task in itself, requiring one to understand and combine core techniques of computer vision and computational linguistics. To understand the visual input one has to reliably recognize scenes, human activities, and participating objects. To generate a good description one has to decide what part of the visual information to verbalize, \ie recognize what is salient.
198
+
199
+ Large datasets of objects \cite{deng09cvpr} and scenes \cite{xiao10cvpr,zhou14nips} have had an important impact on the field and significantly improved our ability to recognize objects and scenes in combination with CNNs \cite{krizhevsky12nips}.
200
+ To be able to learn how to generate descriptions of visual content, parallel datasets of visual content paired with descriptions are indispensable~\cite{rohrbach13iccv}. While recently several large datasets have been released which provide images with descriptions \cite{ordonez11nips,flickr30k,coco2014}, video description datasets focus on short video snippets only and are limited in size \cite{chen11acl} or not publicly available \cite{over12tv}.
201
+ TACoS Multi-Level \cite{rohrbach14gcpr} and YouCook \cite{das13cvpr} are exceptions, providing multiple sentence descriptions and longer videos; however, they are restricted to the cooking scenario.
202
+ In contrast, the data available with DVS provides realistic, open-domain video paired with multiple sentence descriptions. It even goes beyond this by telling a story, which makes it possible to study how to extract plots and understand long-term semantic dependencies and human interactions from the visual and textual data.
203
+
204
+ \begin{figure}[t]
205
+ \scriptsize
206
+ \begin{center}
207
+ \begin{tabular}{@{}p{2.5cm}p{2.5cm}p{2.5cm}}
208
+ \includegraphics[width=\linewidth]{figures/teaser1040/1_.jpg} & \includegraphics[width=\linewidth]{figures/teaser1040/2_.jpg} & \includegraphics[width=\linewidth]{figures/teaser1040/3_.jpg} \\
209
+ \textbf{DVS}: Abby gets in the basket. & Mike leans over and sees how high they are. & Abby clasps her hands around his face and kisses him passionately. \\
210
+ \textbf{Script}: After a moment a frazzled Abby pops up in his place. & Mike looks down to see -- they are now fifteen feet above the ground. & For the first time in her life, she stops thinking and grabs Mike and kisses the hell out of him. \\
211
+ \end{tabular}
212
+ \caption{Audio descriptions (DVS - descriptive video service), movie scripts (scripts) from the movie ``Ugly Truth''.}
213
+ \label{fig:teaser1}
214
+ \end{center}
215
+ \end{figure}
216
+
217
+ \renewcommand{\bottomfraction}{0.8}
218
+ \setcounter{dbltopnumber}{2}
219
+ \renewcommand{\textfraction}{0.07}
220
+ \newcommand{\colwidth}{3.1cm}
221
+ \begin{figure*}[t]
222
+ \scriptsize
223
+ \begin{center}
224
+ \begin{tabular}{@{}p{\colwidth}p{\colwidth}p{\colwidth}p{\colwidth}p{\colwidth}}
225
+ \includegraphics[width=\linewidth]{figures/movie1054/1.jpg} & \includegraphics[width=\linewidth]{figures/movie1054/2.jpg} & \includegraphics[width=\linewidth]{figures/movie1054/4.jpg} & \includegraphics[width=\linewidth]{figures/movie1054/5.jpg} & \includegraphics[width=\linewidth]{figures/movie1054/6.jpg}\\
226
+ \textbf{DVS}: Buckbeak rears and attacks Malfoy. & & & Hagrid lifts Malfoy up. & As Hagrid carries Malfoy away, the hippogriff gently nudges Harry. \\
227
+ \textbf{Script}: In a flash, Buckbeak's steely talons slash down. & Malfoy freezes. & \redtext{Looks down at the blood blossoming on his robes.} & & \redtext{Buckbeak whips around, raises its talons and - seeing Harry - lowers them.}\\
228
+
229
+
230
+ \includegraphics[width=\linewidth]{figures/movie1041/0.jpg} & \includegraphics[width=\linewidth]{figures/movie1041/1.jpg} & \includegraphics[width=\linewidth]{figures/movie1041/2.jpg} & \includegraphics[width=\linewidth]{figures/movie1041/3.jpg} & \includegraphics[width=\linewidth]{figures/movie1041/4.jpg}\\
231
+ \textbf{DVS}: Another room, the wife and mother sits at a window with a towel over her hair. & She smokes a cigarette with a latex-gloved hand. & Putting the cigarette out, she uncovers her hair, removes the glove and pops gum in her mouth. & She pats her face and hands with a wipe, then sprays herself with perfume. & She pats her face and hands with a wipe, then sprays herself with perfume. \\
232
+ \textbf{Script}: Debbie opens a window and sneaks a cigarette. & She holds her cigarette with a yellow dish washing glove. & She puts out the cigarette and goes through an elaborate routine of hiding the smell of smoke. & She \redtext{puts some weird oil in her hair} and uses a wet nap on her neck \redtext{and clothes and brushes her teeth}. & She sprays cologne \redtext{and walks through it.}\\
233
+
234
+ \includegraphics[width=\linewidth]{figures/movie1027/0.jpg} & \includegraphics[width=\linewidth]{figures/movie1027/1.jpg} & \includegraphics[width=\linewidth]{figures/movie1027/2.jpg} & \includegraphics[width=\linewidth]{figures/movie1027/4.jpg} & \includegraphics[width=\linewidth]{figures/movie1027/6.jpg}\\
235
+ \textbf{DVS}: They rush out onto the street. & A man is trapped under a cart. & Valjean is crouched down beside him. & Javert watches as Valjean places his shoulder under the shaft. & Javert's eyes narrow. \\
236
+ \textbf{Script}: Valjean and Javert hurry out across the factory yard and down the muddy track beyond to discover - & A heavily laden cart has toppled onto the cart driver. & Valjean, \redtext{Javert and Javert's assistant} all hurry to help, but they can't get a proper purchase in the spongy ground. & He throws himself under the cart at this higher end, and braces himself to lift it from beneath. & Javert stands back and looks on.\\
237
+
238
+ \end{tabular}
239
+ \caption{Audio descriptions (DVS - descriptive video service), movie scripts (scripts) from the movies ``Harry Potter and the Prisoner of Azkaban'', ``This is 40'', ``Les Miserables''. Typical mistakes contained in scripts are marked in \redtext{red italic}.}
240
+ \label{fig:teaser}
241
+ \end{center}
242
+ \end{figure*}
243
+
244
+ Figures \ref{fig:teaser1} and \ref{fig:teaser} show examples of DVS and compare them to movie scripts. Scripts have been used for various tasks \cite{laptev08cvpr,cour08eccv,marszalek09cvpr,duchenne09iccv,liang11cvpr}, but so far not for video description. The main reason for this is that automatic alignment frequently fails due to the discrepancy between the movie and the script.
245
+ Even when perfectly aligned to the movie, a script is frequently not as precise as the DVS because it is typically produced prior to the shooting of the movie; see, \eg, the mistakes marked in red in \Figref{fig:teaser}. A typical case is that part of the sentence is correct, while another part contains irrelevant information.
246
+
247
+
248
+
249
+
250
+
251
+
252
+
253
+
254
+
255
+
256
+
257
+
258
+
259
+
260
+
261
+
262
+
263
+
264
+
265
+ In this work we present a novel dataset which provides transcribed DVS, which is aligned to full-length HD movies. For this we retrieve audio streams from Blu-ray HD disks, segment out the sections of the DVS audio and transcribe them via a crowd-sourced transcription service \cite{castingwords14}. As the audio descriptions are not fully aligned to the activities in the video, we manually align each sentence to the movie.
266
+ Therefore, in contrast to the (non-public) corpus used in \cite{salway07civr, salway07corpus}, our dataset provides alignment to the actions in the video, rather than just to the audio track of the description.
267
+ In addition we also mine existing movie scripts, pre-align them automatically, similarly to \cite{laptev08cvpr,cour08eccv}, and then manually align the sentences to the movie.
268
+
269
+
270
+
271
+
272
+
273
+
274
+
275
+
276
+
277
+
278
+
279
+
280
+
281
+ We benchmark different approaches to generate descriptions. The first is nearest-neighbour retrieval using state-of-the-art visual features \cite{wang13iccv,zhou14nips,hoffman14nips}, which does not require any additional labels, but retrieves sentences from the training data. Second, we propose to use semantic parsing of the sentences to extract training labels for the recently proposed translation approach \cite{rohrbach13iccv} for video description.
282
+
283
+ The main contribution of this work is a novel movie description dataset which provides transcribed and aligned DVS and script data sentences. We will release sentences, alignments, video snippets, and intermediate computed features to foster research in different areas including video description, activity recognition, visual grounding, and understanding of plots.
284
+
285
+ As a first study on this dataset we benchmark several approaches for movie description. Besides sentence retrieval, we adapt the approach of \cite{rohrbach13iccv} by automatically extracting the semantic representation from the sentences using semantic parsing. This approach achieves competitive performance on the TACoS Multi-Level corpus \cite{rohrbach14gcpr} without using the annotations and outperforms the retrieval approaches on our novel movie description dataset.
286
+ Additionally we present an approach to semi-automatically collect and align DVS data and analyse the differences between DVS and movie scripts.
287
+
288
+
289
+
290
+
291
+
292
+ % \section{Related Work\invisible{ - 0.5 pages}}
293
+
294
+
295
+
296
+
297
+ We first discuss recent approaches to video description and then the existing works using movie scripts and DVS.
298
+
299
+ In recent years there has been an increased interest in automatically describing images \cite{farhadi10eccv,kulkarni11cvpr,kuznetsova12acl,mitchell12eacl,li11acl,kuznetsova14tacl,kiros14icml,socher14tacl,fang14arxiv} and videos \cite{kojima02ijcv,gupta09cvpr,barbu12uai,hanckmann12eccvW,khan11iccvw,tan11mm,das13cvpr,guadarrama13iccv,venugopalan14arxiv,rohrbach14gcpr} with natural language. While recent works on image description show impressive results by \emph{learning} the relations between images and sentences and generating novel sentences \cite{kuznetsova14tacl,donahue14arxiv,mao14arXiv,rohrbach13iccv,kiros14arxiv,karpathy14arxiv,vinyals14arxiv,chen14arxiv}, video description works typically rely on retrieval or templates \cite{das13cvpr,thomason14coling,guadarrama13iccv,gupta09cvpr,kojima02ijcv,kulkarni11cvpr,tan11mm} and frequently use a separate language corpus to model the linguistic statistics. A few exceptions exist: \cite{venugopalan14arxiv} uses a pre-trained model for image description and adapts it to video description. \cite{rohrbach13iccv,donahue14arxiv} learn a translation model; however, these approaches rely on a strongly annotated corpus with aligned videos, annotations, and sentences.
300
+ The main reason for video description lagging behind image description seems to be the lack of a corpus on which to learn and understand the problem of video description. We try to address this limitation by collecting a large, aligned corpus of video snippets and descriptions. To handle the setting of having only videos and sentences without annotations for each video snippet, we propose an approach which adapts \cite{rohrbach13iccv} by extracting annotations from the sentences. Our extraction of annotations has similarities to \cite{thomason14coling}, but we try to extract the senses of the words automatically by using semantic parsing, as discussed in \secref{sec:semantic-parsing}.
301
+
302
+ Movie
303
+ scripts have been used for automatic discovery and annotation of scenes and human actions in videos \cite{laptev08cvpr,marszalek09cvpr, duchenne09iccv}. We rely on the approach presented in \cite{laptev08cvpr} to align movie scripts using the subtitles.
304
+ \cite{bojanowski13iccv} attacks the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. They also rely on a semantic parser (SEMAFOR \cite{das2012acl}) trained on the FrameNet database \cite{Baker98acl}; however, they limit recognition to only two frames.
305
+ \cite{bojanowski14eccv} aims to localize individual short actions in longer clips by exploiting ordering constraints as weak supervision.
306
+
307
+
308
+
309
+
310
+
311
+
312
+
313
+
314
+
315
+
316
+
317
+
318
+
319
+
320
+
321
+
322
+
323
+
324
+
325
+
326
+
327
+
328
+ DVS has so far mainly been studied from a linguistic perspective. \cite{salway07corpus} analyses the language properties of a non-public corpus of DVS from 91 films. Their corpus is based on the original sources used to create the DVS and contains different kinds of artifacts not present in the actual description, such as dialogs and production notes. In contrast, our text corpus is much cleaner as it consists only of the actual DVS.
329
+ With respect to word frequency, they find that mainly actions, objects, and scenes, as well as the characters, are mentioned. The analysis of our corpus reveals similar statistics to theirs.
330
+
331
+
332
+
333
+ The only work we are aware of that uses DVS in connection with computer vision is
334
+ \cite{salway07civr}. The authors try to understand which characters interact with each other.
335
+ For this they first segment the video into events by detecting dialogue, exciting, and musical events using audio and visual features. Then they rely on the dialogue transcription and DVS to identify when characters occur together in the same event, which allows them to infer interaction patterns.
336
+ In contrast to our dataset, their DVS is not aligned; they try to resolve this with a heuristic that shifts the events, which is not quantitatively evaluated. Our dataset will allow studying the quality of automatic alignment approaches, given the annotated ground-truth alignment.
337
+
338
+ There is some initial work on supporting DVS production using scripts as a source \cite{lakritz06tr} and on automatically finding scene boundaries \cite{gagnon10cvprw}. However, we believe that our dataset will allow learning much more advanced multi-modal models, using recent techniques in visual recognition and natural language processing.
339
+
340
+
341
+
342
+
343
+ Semantic parsing has received much attention in computational linguistics recently; see, for example, the tutorial \cite{Artzi:ACL2013} and the references given there. Although aiming at general-purpose applicability, it has so far been successful mainly for specific use cases such as natural-language question answering \cite{Berant:EMNLP2013,Fader:KDD2014} or understanding temporal expressions \cite{Lee:ACL2014}.
344
+
345
+
346
+
347
+ \begin{table*}[t]
348
+ \newcommand{\midruleDVSScripts}{\cmidrule(lr){1-2} \cmidrule(lr){3-3} \cmidrule(lr){4-7}}
349
+ \center
350
+ \begin{tabular}{lrrrrrr}
351
+ \toprule
352
+ & & Before alignment & \multicolumn{4}{c}{After alignment} \\
353
+ &Movies & Words & Words & Sentences & Avg. length & Total length\\
354
+ \midruleDVSScripts
355
+ DVS & 46 & 284,401 & 276,676 & 30,680 & 4.1 sec. & 34.7 h. \\
356
+ Movie script & 31 & 262,155 & 238,889 & 23,396 & 3.4 sec. & 21.7 h. \\
357
+ Total & 72 & 546,556 & 515,565 & 54,076 & 3.8 sec. & 56.5 h. \\
358
+ \bottomrule
359
+ \end{tabular}
360
+ \vspace{-0.2cm}
361
+ \caption{Movie Description dataset statistics. See Section \ref{sec:datasetStats} for discussion.}
362
+ \vspace{-0.4cm}
363
+ \label{tab:DVS-scripts-numbers}
364
+ \end{table*}
365
+
366
+ \section{The Movie Description dataset \invisible{ - 1.5 pages}}
367
+ \label{sec:dataset}
368
+ Despite the potential benefit of DVS for computer vision, it has so far not been used, apart from \cite{gagnon10cvprw, lakritz06tr} who study how to automate DVS production. We believe the main reason for this is that it is not available in text format, \ie transcribed. We tried to get access to DVS transcripts from description services as well as movie and TV production companies, but they were not ready to provide or sell them.
369
+ While script data is easier to obtain, large parts of it do not match the movie, and they have to be ``cleaned up''.
370
+ In the following we describe our semi-automatic approach to obtain DVS and scripts and align them to the video.
371
+
372
+
373
+ \subsection{Collection of DVS}
374
+ We search for the blu-ray movies with DVS in the ``Audio Description'' section of the British Amazon \cite{amazon14} and select a set of \nMoviesAD movies of diverse genres\footnote{\emph{2012, Bad Santa, Body Of Lies, Confessions Of A Shopaholic, Crazy Stupid Love, 27 Dresses, Flight, Gran Torino, Harry Potter and the deathly hallows Disk One, Harry Potter and the Half-Blood Prince, Harry Potter and the order of phoenix, Harry Potter and the philosophers stone, Harry Potter and the prisoner of azkaban, Horrible Bosses, How to Lose Friends and Alienate People, Identity Thief, Juno, Legion, Les Miserables, Marley and me, No Reservations, Pride And Prejudice Disk One, Pride And Prejudice Disk Two, Public Enemies, Quantum of Solace, Rambo, Seven pounds, Sherlock Holmes A Game of Shadows, Signs, Slumdog Millionaire, Spider-Man1, Spider-Man3, Super 8, The Adjustment Bureau, The Curious Case Of Benjamin Button, The Damned united, The devil wears prada, The Great Gatsby, The Help, The Queen, The Ugly Truth, This is 40, TITANIC, Unbreakable, Up In The Air, Yes man}.}.
375
+ As DVS is only available in audio format, we first retrieve the audio stream from the blu-ray HD disk\footnote{We use \cite{MakeMKV14} to extract a blu-ray into an .mkv file, then \cite{XMediaRecode14} to select and extract the audio streams from it.}. Then we semi-automatically segment out the sections of the DVS audio (which is mixed with the original audio stream) with the approach described below.
376
+ The audio segments are then transcribed by a crowd-sourced transcription service \cite{castingwords14} that also provides us with the time-stamps for each spoken sentence.
377
+ As the DVS is added to the original audio stream between the dialogs, there might be a small misalignment between the time of speech and the corresponding visual content. Therefore, we manually align each sentence to the movie in-house.
378
+
379
+ \begin{table*}[t]
380
+ \center
381
+ \begin{tabular}{lrrrrrr}
382
+ \toprule
383
+ Dataset& multi-sentence & domain & sentence source & clips&videos & sentences \\
384
+ \midrule
385
+ YouCook \cite{guadarrama13iccv} & x & cooking & crowd & &88 & 2,668 \\
386
+ TACoS \cite{regneri13tacl,rohrbach13iccv} & x & cooking & crowd &7,206&127&18,227 \\
387
+ TACoS Multi-Level \cite{rohrbach14gcpr}& x & cooking & crowd & 14,105&273 & 52,593\\
388
+ MSVD \cite{chen11acl} & & open & crowd & 1,970& & 70,028\\
389
+ Movie Description (ours) & x & open & professional & 54,076&72 & 54,076 \\
390
+ \bottomrule
391
+ \end{tabular}
392
+ \vspace{-0.2cm}
393
+ \caption{Comparison of video description datasets. See Section \ref{sec:datasetStats} for discussion.}
394
+ \label{tbl:datasets}
395
+ \end{table*}
396
+
397
+ \paragraph{Semi-Automatic segmentation of DVS.}
398
+ We first estimate the temporal alignment difference between the DVS and the original audio (which is part of the DVS), as they might be off by a few time frames. The precise alignment is important to compute the similarity of both streams.
399
+ Both steps (alignment and similarity) are computed using the spectrograms of the audio streams, which are computed using the Fast Fourier Transform (FFT).
400
+ If the difference between both audio streams is larger than a given threshold, we assume the DVS contains an audio description at that point in time. We smooth this decision over time using a minimum segment length of 1 second.
401
+ The threshold was picked on a few sample movies, but has to be adjusted for each movie due to the different mixing of the audio description stream, the narrator's voice level, and the movie sound.
402
+
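For illustration, a minimal Python sketch of this thresholding step is given below. It assumes the two audio tracks have already been decoded to mono arrays at the same sample rate and coarsely aligned; the function names and parameter values are illustrative assumptions, not the exact implementation used for the dataset.

\begin{verbatim}
import numpy as np

def spectrogram(signal, n_fft=1024, hop=512):
    """Magnitude spectrogram via a short-time FFT (frames x frequency bins)."""
    frames = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames) * np.hanning(n_fft), axis=1))

def dvs_segments(original, dvs_mix, rate, threshold,
                 min_len_sec=1.0, n_fft=1024, hop=512):
    """Return (start, end) times in seconds where the DVS mix differs from
    the original audio, i.e. where an audio description is assumed present."""
    s_orig = spectrogram(original, n_fft, hop)
    s_dvs = spectrogram(dvs_mix, n_fft, hop)
    n = min(len(s_orig), len(s_dvs))
    diff = np.abs(s_dvs[:n] - s_orig[:n]).mean(axis=1)  # per-frame difference
    active = diff > threshold                           # per-frame decision
    fps = rate / hop                                    # spectrogram frames per second
    segments, start = [], None
    for i, is_active in enumerate(np.append(active, False)):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if i - start >= min_len_sec * fps:          # drop segments shorter than 1 s
                segments.append((start / fps, i / fps))
            start = None
    return segments
\end{verbatim}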
403
+ \subsection{Collection of script data}
404
+ \label{subsec:scripts}
405
+ In addition we mine the script web resources\footnote{http://www.weeklyscript.com, http://www.simplyscripts.com, http://www.dailyscript.com, http://www.imsdb.com} and select \nMoviesScript movie scripts\footnote{\emph{Amadeus, American Beauty, As Good As It Gets, Casablanca, Charade, Chinatown, Clerks, Double Indemnity, Fargo, Forrest Gump, Gandhi, Get Shorty, Halloween, It is a Wonderful Life, O Brother Where Art Thou, Pianist, Raising Arizona, Rear Window, The Crying Game, The Graduate, The Hustler, The Lord Of The Rings The Fellowship Of The Ring, The Lord Of The Rings The Return Of The King, The Lost Weekend, The Night of the Hunter, The Princess Bride}.}
406
+ As a starting point we use the movies featured in \cite{marszalek09cvpr} that have the highest alignment scores. We are also interested in comparing the two sources (movie scripts and DVS), so we look for the scripts labeled as ``Final'', ``Shooting'', or ``Production Draft'' where DVS is also available. We found that this ``overlap'' is quite narrow, so we analyze 5 such movies\footnote{\emph{Harry Potter and the prisoner of azkaban, Les Miserables, Signs, The Ugly Truth, This is 40}.} in our dataset. This way we end up with {31} movie scripts in total.
407
+ We follow existing approaches \cite{laptev08cvpr,cour08eccv} to automatically align scripts to movies. First we parse the scripts, extending the method of \cite{laptev08cvpr} to handle scripts which deviate from the default format. Second, we extract the subtitles from the blu-ray disks\footnote{We extract .srt from .mkv with \cite{SubtitleEdit14}. It also allows for subtitle alignment and spellchecking.}. Then we use the dynamic programming method of \cite{laptev08cvpr} to align scripts to subtitles and infer the time-stamps for the description sentences.
408
+ We select the sentences with a reliable alignment score (the ratio of matched words in the near-by monologues) of at least {0.5}. The obtained sentences are then manually aligned to video in-house.
409
+
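To make the alignment step concrete, the following simplified Python sketch aligns script dialogue lines to timed subtitles with a monotonic dynamic program over a word-overlap similarity, in the spirit of \cite{laptev08cvpr}; the scoring and data structures are illustrative simplifications, not the original implementation. The time-stamps of the matched subtitles can then be propagated to the neighbouring description sentences, and the per-sentence overlap scores thresholded as described above.

\begin{verbatim}
def word_overlap(a, b):
    """Fraction of shared words between two text snippets (0..1)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, min(len(wa), len(wb)))

def align(script_lines, subtitles):
    """script_lines: list of str; subtitles: list of (start, end, text).
    Returns a monotonic list of (script_index, subtitle_index) matches."""
    n, m = len(script_lines), len(subtitles)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    move = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sim = word_overlap(script_lines[i - 1], subtitles[j - 1][2])
            options = [(score[i - 1][j - 1] + sim, 'match'),
                       (score[i - 1][j], 'skip_script'),
                       (score[i][j - 1], 'skip_subtitle')]
            score[i][j], move[i][j] = max(options)
    # trace back the best monotonic alignment
    matches, i, j = [], n, m
    while i > 0 and j > 0:
        if move[i][j] == 'match':
            matches.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif move[i][j] == 'skip_script':
            i -= 1
        else:
            j -= 1
    return list(reversed(matches))
\end{verbatim}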
410
+ \subsection{Statistics and comparison to other datasets}
411
+ \label{sec:datasetStats}
412
+ During the manual alignment we filter out: a) sentences describing the movie introduction/ending (production logo, cast, etc.); b) texts read from the screen; c) irrelevant sentences describing something not present in the video; d) sentences related to audio/sounds/music.
413
+ Table \ref{tab:DVS-scripts-numbers} presents statistics on the number of words before and after the alignment to video. One can see that for the movie scripts the reduction in the number of words is about {8.9\%}, while for DVS it is {2.7\%}. In the case of DVS the filtering mainly happens due to the initial/ending movie intervals and transcribed dialogs (when shown as text). For the scripts it is mainly attributed to irrelevant sentences. Note that in cases where the sentences are ``alignable'' but have minor mistakes we still keep them.
414
+
415
+ We end up with a parallel corpus of over 50K video-sentence pairs and a total length of over 56 hours. We compare our corpus to other existing parallel corpora in Table \ref{tbl:datasets}.
416
+ The main limitations of existing datasets are their restriction to a single domain \cite{das13cvpr,regneri13tacl,rohrbach14gcpr} or their limited number of video clips \cite{guadarrama13iccv}. We fill this gap with a large dataset featuring realistic open-domain videos, which also provides high-quality (professional) sentences and allows for multi-sentence description.
417
+
418
+
419
+
420
+
421
+
422
+ \subsection{Visual features}
423
+ \label{subsec:visual_features}
424
+ We extract video snippets from the full movie based on the aligned sentence intervals. We also uniformly extract 10 frames from each video snippet.
425
+ As discussed above, DVS and scripts describe activities, objects, and scenes (as well as emotions, which we do not explicitly handle with these features, but they might still be captured, \eg by the context or activities).
426
+ In the following we briefly introduce the visual features computed on our data which we will also make publicly available.
427
+
428
+
429
+ \textbf{DT}
430
+ We extract the improved dense trajectories compensated for camera motion \cite{wang13iccv}. For each feature (Trajectory, HOG, HOF, MBH) we create a codebook with 4000 clusters and compute the corresponding histograms. We apply L1 normalization to the obtained histograms and use them as features.
431
+
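As an illustration of this encoding, the sketch below builds a visual-word codebook and an L1-normalized histogram with NumPy and scikit-learn; it assumes the per-snippet trajectory descriptors are already extracted and is a simplified stand-in for the actual feature pipeline.

\begin{verbatim}
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(descriptors, k=4000, seed=0):
    """Cluster a sample of DT descriptors (rows) into k visual words."""
    return MiniBatchKMeans(n_clusters=k, random_state=seed).fit(descriptors)

def encode(descriptors, codebook):
    """L1-normalized histogram of visual-word assignments for one snippet."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
\end{verbatim}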
432
+ \textbf{LSDA}
433
+ We use the recent large scale object detection CNN \cite{hoffman14nips} which distinguishes 7604 ImageNet \cite{deng09cvpr} classes. We run the detector on every second extracted frame (due to computational constraints). Within each frame we max-pool the network responses for all classes, then do mean-pooling over the frames within a video snippet and use the result as a feature.
434
+
435
+ \textbf{PLACES and HYBRID}
436
+ Finally, we use the recent scene classification CNNs \cite{zhou14nips} featuring 205 scene classes. We use both available networks: \emph{Places-CNN} and \emph{Hybrid-CNN}, where the first is trained on the Places dataset \cite{zhou14nips} only, while the second is additionally trained on the 1.2 million images of ImageNet (ILSVRC 2012) \cite{ILSVRCarxiv14}. We run the classifiers on all the extracted frames of our dataset.
437
+ We mean-pool over the frames of each video snippet, using the result as a feature.
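The pooling used for the CNN-based features (max-pooling detector responses within a frame, then mean-pooling over the frames of a snippet) can be summarized by the following small sketch; the input shapes are assumptions for illustration only.

\begin{verbatim}
import numpy as np

def pool_detector_scores(per_frame_detections):
    """per_frame_detections: list over frames, each an array of shape
    (num_detections, num_classes). Max-pool within each frame, then
    mean-pool over frames to obtain one feature vector per snippet."""
    frame_features = [det.max(axis=0) for det in per_frame_detections]
    return np.mean(frame_features, axis=0)

def pool_classifier_scores(per_frame_scores):
    """per_frame_scores: array of shape (num_frames, num_classes), e.g.
    scene classifier outputs. Returns the mean-pooled snippet feature."""
    return np.asarray(per_frame_scores).mean(axis=0)
\end{verbatim}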
438
+ \section{Approaches to video description \invisible{ - 0.5 page}}
439
+ \label{sec:approaches}
440
+ In this section we describe the approaches to video description that we benchmark on our proposed dataset.
441
+
442
+ \textbf{Nearest neighbor}
443
+ We retrieve the closest sentence from the training corpus using the L1-normalized visual features introduced in Section \ref{subsec:visual_features} and the intersection distance.
444
+
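A minimal sketch of this baseline, assuming L1-normalized feature histograms, is given below; the intersection distance of two such histograms is one minus their histogram intersection.

\begin{verbatim}
import numpy as np

def intersection_distance(a, b):
    """1 - histogram intersection of two L1-normalized vectors."""
    return 1.0 - np.minimum(a, b).sum()

def retrieve_sentence(query_feat, train_feats, train_sentences):
    """train_feats: (N, D) array of training features,
    train_sentences: list of the N corresponding sentences."""
    dists = [intersection_distance(query_feat, f) for f in train_feats]
    return train_sentences[int(np.argmin(dists))]
\end{verbatim}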
445
+ \textbf{SMT}
446
+ We adapt the two-step translation approach of \cite{rohrbach13iccv}, which uses an intermediate semantic representation (SR), modeled as a tuple, \eg $\langle cut, knife, tomato \rangle$. In the first step it learns a mapping from the visual input to the SR, modeling pairwise dependencies in a CRF using visual classifiers as unaries. The unaries are trained using an SVM on dense trajectories \cite{wang13ijcv}.
447
+ In the second step \cite{rohrbach13iccv} translates the SR to a sentence using Statistical Machine Translation (SMT)~\cite{koehn07acl}. For this, the approach uses the concatenated SR as the input language, \eg \emph{cut knife tomato}, and the natural sentences as the output language, \eg \emph{The person slices the tomato.} While we cannot rely on an annotated SR as in \cite{rohrbach13iccv}, we automatically mine the SR from sentences using semantic parsing, which we introduce in the next section.
448
+ In addition to dense trajectories we use the features described in \secref{subsec:visual_features}.
449
+
450
+
451
+ \textbf{SMT Visual words}
452
+ As an alternative to the potentially noisy labels extracted from the sentences, we try to directly translate visual classifier outputs and visual words into a sentence.
453
+ We model the essential components by relying on activity, object, and scene recognition. For objects and scenes we rely on the pre-trained models LSDA and PLACES. For activities we rely on the state-of-the-art activity recognition feature DT. We cluster the DT histograms into 300 visual words using k-means. The index of the closest cluster center is chosen as the activity label.
454
+ To build our tuple we obtain the highest scoring class labels
455
+ of the object detector and scene classifier. More specifically, for the object detector we consider the two highest-scoring classes: one for the subject and one for the object.
456
+ Thus we obtain the tuple $\langle SUBJECT, ACTIVITY, OBJECT, SCENE \rangle = \langle argmax(LSDA), DT_{i}, argmax2(LSDA),$ $argmax(PLACES)\rangle$, for which we learn a translation to a natural sentence using the SMT approach discussed above.
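For illustration, the tuple construction can be sketched as follows; the variable names and label lists are assumptions, and the DT visual word is simply the index of the nearest of the 300 cluster centers.

\begin{verbatim}
import numpy as np

def build_tuple(lsda_scores, dt_hist, dt_centers, places_scores,
                object_names, scene_names):
    """lsda_scores: (num_object_classes,), places_scores: (num_scenes,),
    dt_hist: DT histogram of one snippet, dt_centers: (300, D) k-means centers."""
    top_two = np.argsort(lsda_scores)[::-1][:2]          # subject and object classes
    activity = int(np.argmin(np.linalg.norm(dt_centers - dt_hist, axis=1)))
    return (object_names[top_two[0]],                    # SUBJECT
            "activity_%d" % activity,                    # ACTIVITY visual word
            object_names[top_two[1]],                    # OBJECT
            scene_names[int(np.argmax(places_scores))])  # SCENE
\end{verbatim}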
457
+ \newcommand{\q}[1] {``\textit{#1}''}
458
+ \newcommand{\qp}[1] {\textit{#1}}
459
+ \newcommand{\lbl}[1] {\texttt{\small #1}}
460
+ \newcommand{\ignore}[1] {}
461
+
462
+ \section{Semantic parsing}
463
+ \label{sec:semantic-parsing}
464
+
465
+ \begin{table}
466
+ \center
467
+ \small
468
+ \begin{tabular}{p{1.8cm} p{1cm} p{1.7cm} p{2.1cm}}
469
+ \toprule
470
+ Phrase & WordNet & VerbNet & Expected \\
471
+ & Mapping & Mapping & Frame \\
472
+ \midrule
473
+ the man & man\#1 & Agent.animate & Agent: man\#1\\
474
+ \cmidrule(lr){1-4}
475
+ begin to shoot& shoot\#2 & shoot\#vn\#2 & Action: shoot\#2\\
476
+ \cmidrule(lr){1-4}
477
+ a video & video\#1 & Patient.solid & Patient: video\#1\\
478
+ \cmidrule(lr){1-4}
479
+ in & in & PP.in & \\
480
+ \cmidrule(lr){1-4}
481
+ the moving bus& bus\#1 & NP.Location. solid & Location: moving bus\#1\\
482
+ \bottomrule
483
+ \end{tabular}
484
+ \caption{Semantic parse for \q{He began to shoot a video in the moving bus}. See Section \ref{sec:semparsingAppraoch} for discussion.}
485
+ \label{tab:semantic-parse-expected-output}
486
+ \end{table}
487
+
488
+
489
+ Learning from a parallel corpus of videos and sentences without having annotations is challenging.
490
+ In this section we introduce our approach to exploit the sentences using semantic parsing. The proposed method aims to extract annotations from the natural sentences and thus makes it possible to avoid the tedious annotation task.
491
+ Later in this section we evaluate our method, in the context of a video description task, on a corpus where annotations are available.
492
+
493
+ \subsection{Semantic parsing approach}
494
+ \label{sec:semparsingAppraoch}
495
+
496
+
497
+ We lift the words in a sentence to a semantic space of roles and WordNet \cite{pedersen2004wordnet,Fellbaum1998} senses by performing SRL (Semantic Role Labeling) and WSD (Word Sense Disambiguation). For an example, refer to Table \ref{tab:semantic-parse-expected-output}: the expected outcome of semantic parsing for the input sentence \q{He shot a video in the moving bus} is ``\lbl{Agent: man, Action: shoot, Patient: video, Location: bus}''. Additionally, the role fillers are disambiguated.
498
+
499
+
500
+
501
+
502
+ We use the ClausIE tool \cite{clauseIE} to decompose sentences into their respective clauses. For example, \q{he shot and modified the video} is split into two phrases, \q{he shot the video} and \q{he modified the video}. We then use the OpenNLP tool suite\footnote{http://opennlp.sourceforge.net/} for chunking
503
+ the text of each clause. In order to provide the linking of words in the sentence to their WordNet sense mappings, we rely on a state-of-the-art WSD system, IMS \cite{ims-wsd}. The WSD system, however, works at a word level. We enable it to work at a phrase level. For every noun phrase, we identify and disambiguate its head word (\eg \lbl{the moving bus} to \q{bus\#1}, where \q{bus\#1} refers to the first sense of the word \lbl{bus}). We link verb phrases to the proper sense of its head word in WordNet (\eg \lbl{begin to shoot} to \q{shoot\#2}).
504
+
505
+
506
+
507
+ In order to obtain word role labels, we link verbs to VerbNet \cite{verbnet-2009,verbnet-2006}, a manually curated high-quality linguistic resource for English verbs. VerbNet is already mapped to WordNet, thus we map to VerbNet via WordNet. We perform two levels of matching in order to obtain role labels. The first is the syntactic match: every VerbNet verb sense comes with a syntactic frame, \eg for \lbl{shoot} the syntactic frame is \lbl{NP V NP}. We first match the sentence's verb against the VerbNet frames. These become candidates for the next step.
508
+ Second, we perform the semantic match: VerbNet also provides a role restriction on the arguments of the roles, \eg for \lbl{shoot} (sense killing) the role restriction is \lbl{Agent.animate V Patient.\textbf{animate} PP Instrument.solid}. For the other sense of \lbl{shoot} (sense snap), the semantic restriction is \lbl{Agent.animate V Patient.\textbf{solid}}. We only accept candidates from the syntactic match that satisfy the semantic restriction.
509
+
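A toy sketch of this two-level matching is shown below; the data structures are simplified assumptions, whereas the actual system queries the VerbNet resource itself.

\begin{verbatim}
def match_verb_sense(clause, verb_senses):
    """clause: dict with 'frame' (e.g. 'NP V NP') and 'arguments', a mapping
    from role name to the set of semantic types of its filler (e.g. {'animate'}).
    verb_senses: list of dicts with 'frame' and 'restrictions' (role -> type)."""
    # Level 1: syntactic match against the VerbNet frame.
    candidates = [s for s in verb_senses if s['frame'] == clause['frame']]
    # Level 2: keep only candidates whose role restrictions are satisfied.
    accepted = []
    for sense in candidates:
        ok = all(required in clause['arguments'].get(role, set())
                 for role, required in sense['restrictions'].items())
        if ok:
            accepted.append(sense)
    return accepted
\end{verbatim}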
510
+ \begin{figure}[t]
511
+ \begin{center}
512
+ \begin{subfigure}[b]{\linewidth}
513
+ \includegraphics[width=0.9\linewidth]{figures/semanticparsing1}
514
+ \caption[labelInTOC]{Semantic representation extracted from a sentence.}
515
+ \label{fig:semp1}
516
+ \end{subfigure}
517
+ \begin{subfigure}[b]{\linewidth}
518
+ \includegraphics[width=0.8\linewidth]{figures/semanticparsing2}
519
+ \caption[labelInTOC]{Same verb, different senses.}
520
+ \label{fig:semp2}
521
+ \end{subfigure}
522
+ \begin{subfigure}[b]{\linewidth}
523
+ \includegraphics[width=0.8\linewidth]{figures/semanticparsing3}
524
+ \caption[labelInTOC]{Different verbs, same sense.}
525
+ \label{fig:semp3}
526
+ \end{subfigure}
527
+ \caption[labelInTOC]{Semantic parsing example; see Section \ref{sec:semparsingAppraoch}.}
528
+ \end{center}
529
+ \end{figure}
530
+
531
+
532
+
533
+ VerbNet contains over 20 roles and not all of them are general or can be recognized reliably. Therefore, we further group them to get the SUBJECT, VERB, OBJECT and LOCATION roles.
534
+ We explore two approaches to obtaining the labels based on the output of the semantic parser. The first is to use the extracted text chunks directly as labels. The second is to use the corresponding senses as labels (and therefore group multiple text labels). In the following we refer to these as \emph{text-} and \emph{sense-labels}.
535
+ Thus from each sentence we extract a semantic representation in the form (SUBJECT, VERB, OBJECT, LOCATION); see Figure~\ref{fig:semp1} for an example.
536
+ Using WSD allows us to identify different senses (WordNet synsets) for the same verb
537
+ (Figure~\ref{fig:semp2}) and the same sense for different verbs (Figure~\ref{fig:semp3}).
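A small sketch of this grouping step is given below; the particular role-to-slot mapping shown is an illustrative assumption, and switching between text- and sense-labels only changes which field of the parse is kept.

\begin{verbatim}
# Illustrative grouping of fine-grained VerbNet roles into the
# (SUBJECT, VERB, OBJECT, LOCATION) representation.
ROLE_GROUPS = {
    'Agent': 'SUBJECT', 'Experiencer': 'SUBJECT',
    'Patient': 'OBJECT', 'Theme': 'OBJECT',
    'Location': 'LOCATION', 'Destination': 'LOCATION',
}

def to_sr(parsed_roles, use_senses=True):
    """parsed_roles: list of (verbnet_role, text_chunk, wordnet_sense) tuples."""
    sr = {'SUBJECT': None, 'VERB': None, 'OBJECT': None, 'LOCATION': None}
    for role, text, sense in parsed_roles:
        slot = 'VERB' if role == 'Action' else ROLE_GROUPS.get(role)
        if slot and sr[slot] is None:
            sr[slot] = sense if use_senses else text   # sense- vs text-labels
    return (sr['SUBJECT'], sr['VERB'], sr['OBJECT'], sr['LOCATION'])
\end{verbatim}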
538
+
539
+
540
+ \subsection{Applying parsing to TACoS Multi-Level corpus}
541
+ \label{sec:applyTACOS}
542
+
543
+ \newcommand{\midruleResTrans}{\cmidrule(lr){1-1}\cmidrule(lr){2-2}} \begin{table}[t]
544
+ \center
545
+ \begin{tabular}{lrr}
546
+ \toprule
547
+ Approach & BLEU \\\midruleResTrans
548
+ SMT \cite{rohrbach13iccv} & 24.9 \\SMT \cite{rohrbach14gcpr} & 26.9 \\SMT with our text-labels & 22.3 \\SMT with our sense-labels & 24.0 \\\bottomrule
549
+ \end{tabular}
550
+ \caption{BLEU@4 in \% on sentences of Detailed~Descriptions of the TACoS Multi-Level \cite{rohrbach14gcpr} corpus, see Section \ref{sec:applyTACOS}.}
551
+ \label{tbl:res:detailed}
552
+ \figvspace
553
+ \end{table}
554
+
555
+
556
+
557
+ We apply the proposed semantic parsing to the TACoS Multi-Level \cite{rohrbach14gcpr} parallel corpus. We extract the SR from the sentences as described above and use those as annotations. Note that this corpus is annotated with the tuples (ACTIVITY, OBJECT, TOOL, SOURCE, TARGET) and the subject is always the person. Therefore we drop the SUBJECT role and only use (VERB, OBJECT, LOCATION) as our SR.
558
+ Then, similarly to \cite{rohrbach14gcpr}, we train visual classifiers for our labels (proposed by the parser), using only the ones that appear at least 30 times. Next we train a CRF with 3 nodes for verbs, objects and locations, using the visual classifier responses as unaries. We follow the translation approach of \cite{rohrbach13iccv} and train the SMT on the Detailed Descriptions part of the corpus using our labels. Finally, we translate the SR predicted by our CRF to generate the sentences.
559
+ Table \ref{tbl:res:detailed} shows the results, comparing our method to \cite{rohrbach13iccv} and \cite{rohrbach14gcpr}, who use manual annotations to train their models. As we can see, the sense-labels perform better than the text-labels as they provide a better grouping of the labels. Our method produces a competitive result which is only {0.9\%} below that of \cite{rohrbach13iccv}. At the same time, \cite{rohrbach14gcpr} uses more training data, additional color SIFT features, and recognizes the dish prepared in the video. All these points, if added to our approach, would also improve the performance.
560
+
561
+
562
+ \newcommand{\midrulestat}{\cmidrule(lr){1-1}\cmidrule(lr){2-6}}
563
+
564
+ \begin{table}[t]
565
+ \center
566
+ \begin{tabular}{p{2.5cm} p{0.8cm} p{0.3cm} p{0.7cm} p{0.7cm} p{0.7cm} }
567
+ \toprule
568
+
569
+ Annotations & {activity} & {tool} & {object} & {source} & {target} \\
570
+ \midrulestat
571
+ Manual \cite{rohrbach14gcpr} & 78 & 53 & 138 & 69 & 49 \\
572
+ \midrule
573
+ & {verb} & \multicolumn{2}{c}{object} & \multicolumn{2}{c}{location} \\
574
+ \cmidrule(lr){2-6}
575
+ Our text-labels & 145 & \multicolumn{2}{c}{260} & \multicolumn{2}{c}{85} \\
576
+ Our sense-labels & 158 & \multicolumn{2}{c}{215} & \multicolumn{2}{c}{85} \\
577
+ \bottomrule
578
+ \end{tabular}
579
+ \caption{Label statistics from our semantic parser on TACoS Multi-Level \cite{rohrbach14gcpr} corpus, see Section \ref{sec:applyTACOS}.}
580
+ \label{tbl:crfnodes}
581
+ \figvspace
582
+ \end{table}
583
+
584
+ We analyze the labels selected by our method in Table \ref{tbl:crfnodes}. It is clear that our labels are still imperfect, \ie different labels might be assigned to similar concepts. However, the number of extracted labels is quite close to the number of manual labels. Note that the annotations were created prior to the sentence collection, so some verbs used by humans in sentences might not be present in the annotations.
585
+
586
+
587
+
588
+
589
+
590
+
591
+ From this experiment we conclude that the output of our automatic parsing approach can serve as a replacement for manual annotations and allows us to achieve competitive results. In the following we apply this approach to our movie description dataset.
592
+
593
+
594
+
595
+
596
+ \section{Evaluation \invisible{ - 1.2 pages}}
597
+ In this section we provide more insights about our movie description dataset. First we compare DVS to movie scripts and then we benchmark the approaches to video description introduced in Section \ref{sec:approaches}.
598
+
599
+ \subsection{Comparison DVS vs script data}
600
+ \label{sec:comparisionDVS}
601
+ We compare the DVS and script data using {5} movies from our dataset where both are available (see Section \ref{subsec:scripts}).
602
+ For these movies we select the overlapping time intervals with an intersection-over-union overlap of at least {75\%}, which results in {126} sentence pairs. We ask humans via Amazon Mechanical Turk (AMT) to compare the sentences with respect to their correctness and relevance to the video, using both video intervals as a reference (one at a time, resulting in 252 tasks). Each task was completed by 3 different human subjects.
603
+ Table \ref{tab:DVS-scripts} presents the results of this evaluation. DVS is ranked as more correct and relevant in over {60\%} of the cases, which supports our intuition that scripts contain mistakes and irrelevant content even after being cleaned up and manually aligned.
604
+
605
+
606
+
607
+ \begin{table}[t]
608
+ \center
609
+ \begin{tabular}{lll}
610
+ \toprule
611
+ & Correctness & Relevance \\
612
+ \midrule
613
+ DVS & 63.0 & 60.7 \\
614
+ Movie scripts & 37.0 & 39.3 \\
615
+ \bottomrule
616
+ \end{tabular}
617
+ \caption{Human evaluation of DVS and movie scripts: which sentence is more correct/relevant with respect to the video, in \%. Discussion in Section \ref{sec:comparisionDVS}.}
618
+ \label{tab:DVS-scripts}
619
+ \end{table}
620
+
621
+ \begin{table}
622
+ \centering
623
+ \begin{tabular}{@{\ }ll@{\ \ }l@{\ \ \ }l@{\ \ }l@{\ }}
624
+ \toprule
625
+ Corpus &Clause&NLP&Labels&WSD \\
626
+ \midrule
627
+ TACoS Multi-Level \cite{rohrbach14gcpr} & 0.96 & 0.86 & 0.91 & 0.75 \\
628
+ Movie Description (ours) & 0.89 & 0.62 & 0.86 & 0.7 \\
629
+ \bottomrule
630
+ \end{tabular}
631
+ \caption{Semantic parser accuracy for TACoS Multi-Level and our new corpus. Discussion in Section \ref{sec:semanticParserEval}.}
632
+ \label{tab:semantic-parse-accuracy-per-source-detailed}
633
+ \end{table}
634
+
635
+
636
+ \subsection{Semantic parser evaluation}
637
+ \label{sec:semanticParserEval}
638
+ Table \ref{tab:semantic-parse-accuracy-per-source-detailed} reports the accuracy of the different components of the semantic parsing pipeline. The components are clause splitting (Clause), POS tagging and chunking (NLP), semantic role labeling (Labels) and word sense disambiguation (WSD). We manually evaluate the correctness on a randomly sampled set of sentences using human judges. It is evident that the poorest-performing parts are the NLP and the WSD components. Some of the NLP mistakes arise due to incorrect POS tagging. WSD is considered a hard problem, and when the dataset contains less frequent words, the performance is severely affected. Overall we see that the movie description corpus is more challenging than TACoS Multi-Level, but the drop in performance is reasonable given the significantly larger variability.
639
+
640
+
641
+
642
+
643
+
644
+ \begin{table}[t]
645
+ \center
646
+ \begin{tabular}{p{2.8cm} p{1.5cm} p{1.2cm} p{1.3cm}}
647
+ \toprule
648
+ & Correctness & Grammar & Relevance \\
649
+ \midrule
650
+ Nearest neighbor\\
651
+ DT &7.6 &5.1 &7.5 \\
652
+ LSDA &7.2 &4.9 &7.0 \\
653
+ PLACES &7.0 &5.0 &7.1 \\
654
+ HYBRID &6.8 &4.6 &7.1 \\
655
+ \midrule
656
+ SMT Visual words &7.6 &8.1 &7.5 \\
657
+ \midrule
658
+ \multicolumn{4}{l}{SMT with our text-labels}\\
659
+ DT 30 &6.9 &8.1 &6.7 \\
660
+ DT 100 &5.8 &6.8 &5.5 \\
661
+ All 100 &4.6 &5.0 &4.9 \\
662
+ \midrule
663
+ \multicolumn{4}{l}{SMT with our sense-labels}\\
664
+ DT 30 &6.3 &6.3 &5.8 \\
665
+ DT 100 &4.9 &5.7 &5.1 \\
666
+ All 100 &5.5 &5.7 &5.5 \\
667
+ \midrule
668
+ Movie script/DVS &2.9 &4.2 &3.2 \\
669
+ \bottomrule
670
+ \end{tabular}
671
+ \caption{Comparison of approaches. Mean Ranking (1-12). Lower is better. Discussion in Section \ref{sec:VideoDescription}.}
672
+ \label{tab:humaneval}
673
+ \end{table}
674
+
675
+
676
+
677
+
678
+
679
+
680
+
681
+
682
+ \subsection{Video description}
683
+ \label{sec:VideoDescription}
684
+
685
+
686
+ As the collected text data comes from the movie context, it contains a lot of information specific to the plot, such as the names of the characters. We pre-process each sentence in the corpus, transforming names and other person-related information (such as ``a young woman'') to ``someone'' or ``people''. The transformed version of the corpus is used in all the experiments below. We will release both the transformed and the original corpus.
687
+
688
+ We use the {5} movies mentioned before (see Section \ref{subsec:scripts}) as a test set for the video description task, while all the others (67) are used for training.
689
+ Human judges were asked to rank multiple sentence outputs with respect to their correctness, grammar and relevance to the video.
690
+
691
+ Table \ref{tab:humaneval} summarizes results of the human evaluation from 250 randomly selected test video snippets, showing the mean rank, where lower is better.
692
+ In the top part of the table we show the nearest neighbor results based on multiple visual features. When comparing the different features, we notice that the pre-trained features (LSDA, PLACES, HYBRID) perform better than DT, with HYBRID performing best. Next is the translation approach with the visual words as labels, performing worst of all approaches overall. The next two blocks correspond to the translation approach when using the labels from our semantic parser. After extracting the labels we select the ones which appear at least 30 or 100 times as our visual attributes. As 30 results in a much higher number of attributes (see Table~\ref{tbl:crfnodes_movies}), predicting the SR turns into a more difficult recognition task, resulting in worse mean rankings. ``All 100'' refers to combining all the visual features as unaries in the CRF. Finally, the last ``Movie script/DVS'' block refers to the actual test sentences from the corpus and not surprisingly ranks best.
693
+
694
+ Overall we can observe three main tendencies: (1) Using our parsing with SMT outperforms the nearest neighbor baselines and SMT Visual words. (2) In contrast to the kitchen dataset, the sense-labels perform slightly worse than the text-labels, which we attribute to the errors made in the WSD. (3) The actual movie script/DVS sentences are ranked on average significantly better than any of the automatic approaches. These tendencies are also reflected in Figure \ref{fig:qual}, showing example outputs of all the evaluated approaches for a single movie snippet. Examining more qualitative examples, which we provide on our web page, indicates that it is possible to learn relevant information from this corpus.
695
+
696
+
697
+ \begin{table}[t]
698
+ \center
699
+ \begin{tabular}{p{2.3cm} p{1cm} p{1cm} p{1cm} p{1cm} }
700
+ \toprule
701
+ Annotations & {subject} & {verb} & {object} & {location} \\
702
+ \midrule
703
+ text-labels 30 & 24 & 380 & 137 & 71 \\
704
+ sense-labels 30 & 47 & 440 & 244 & 110 \\
705
+ text-labels 100 & 8 & 121 & 26 & 8 \\
706
+ sense-labels 100 & 8 & 143 & 51 & 37 \\
707
+ \bottomrule
708
+ \end{tabular}
709
+ \caption{Label statistics from our semantic parser on the movie description corpus. 30 and 100 indicate the minimum number of label occurrences in the corpus, see Section \ref{sec:VideoDescription}.}
710
+ \label{tbl:crfnodes_movies}
711
+ \figvspace
712
+ \end{table}
713
+
714
+
715
+
716
+ \begin{figure*}[t]
717
+ \begin{center}
718
+ \begin{tabular}{cccc}
719
+ \includegraphics[width=4cm]{figures/result_1054/0005_.jpg} &
720
+ \includegraphics[width=4cm]{figures/result_1054/0008_.jpg} &
721
+ \includegraphics[width=4cm]{figures/result_1054/0010_.jpg} &
722
+ \includegraphics[width=4cm]{figures/result_1054/0011_.jpg}\\
723
+ \end{tabular}
724
+ \begin{tabular}{ll}
725
+ Nearest neighbor\\
726
+ DT & People stand with a happy group, including someone.\\
727
+ LSDA & The hovering Dementors chase the group into the lift.\\
728
+ HYBRID & Close by, a burly fair-haired someone in an orange jumpsuit runs down a dark street.\\
729
+ PLACES & Someone is on his way to look down the passage way between the houses.\\
730
+ \midrule
731
+ SMT Visual words & Someone in the middle of the car pulls up ahead\\
732
+ \midrule
733
+ \multicolumn{2}{l}{SMT with our text-labels}\\
734
+ DT 30 & Someone opens the door to someone\\
735
+ DT 100 & Someone, the someone, and someone enters the room \\
736
+ All 100 & Someone opens the door and shuts the door, someone and his someone \\
737
+ \midrule
738
+ \multicolumn{2}{l}{SMT with our sense-labels}\\
739
+ DT 30 & Someone, the someone, and someone enters the room \\
740
+ DT 100 & Someone goes over to the door\\
741
+ All 100 & Someone enters the room\\
742
+ \midrule
743
+ Movie script/DVS & Someone follows someone into the leaky cauldron \\
744
+ \end{tabular}
745
+ \end{center}
746
+ \caption{Qualitative comparison of different video description methods. Discussion in Section \ref{sec:VideoDescription}. More examples on our web page.}
747
+ \label{fig:qual}
748
+ \end{figure*}
749
+ \section{Conclusions \invisible{ - 0.3 pages}}
750
+
751
+
752
+ In this work we presented a novel dataset of movies with aligned descriptions sourced from movie scripts and DVS (audio descriptions for the blind).
753
+ We present first experiments on this dataset using state-of-the-art visual features, combined with the recent video description approach of \cite{rohrbach13iccv}. We adapt the approach to work on this dataset without annotations, relying instead on labels obtained via semantic parsing. We show competitive performance on the TACoS Multi-Level dataset and promising results on our movie description data.
754
+ We compare DVS with previously used script data and find that DVS tends to be more correct and relevant to the movie than script sentences.
755
+ Beyond our first study on single sentences, the dataset opens new possibilities to understand stories and plots across multiple sentences in an open-domain scenario at a large scale, something that no other video or image description dataset can offer as of now.
756
+
757
+
758
+ \section{Acknowledgements}
759
+ Marcus Rohrbach was supported by a fellowship within the FITweltweit-Program of the German Academic Exchange Service (DAAD).
760
+
761
+
762
+ \small
763
+ \bibliographystyle{ieee}
764
+ \bibliography{biblioLong,rohrbach,rohrbach15cvpr}
765
+
766
+
767
+ \end{document}
papers/1501/1501.04560.tex ADDED
@@ -0,0 +1,1545 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass[10pt,english,journal,compsoc]{IEEEtran}
2
+ \usepackage[T1]{fontenc}
3
+ \usepackage[latin9]{inputenc}
4
+ \usepackage{color}
5
+ \usepackage{array}
6
+ \usepackage{bm}
7
+ \usepackage{multirow}
8
+ \usepackage{amsmath}
9
+ \usepackage{amssymb}
10
+ \usepackage{graphicx}
11
+
12
+ \makeatletter
13
+
14
+ \providecommand{\tabularnewline}{\\}
15
+
16
+ \usepackage{graphicx}
17
+ \usepackage{amsmath,amssymb}
18
+
19
+ \usepackage{color}
20
+ \usepackage{array}
21
+ \usepackage{multirow}
22
+ \usepackage{amsmath}
23
+ \usepackage{graphicx}
24
+ \usepackage{color}
25
+ \usepackage{url}
26
+
27
+ \makeatother
28
+
29
+ \usepackage{babel}
30
+ \begin{document}
31
+
32
+
33
+ \title{Transductive Multi-view Zero-Shot Learning}
34
+ \author{Yanwei~Fu,~\IEEEmembership{}
35
+ Timothy~M.~Hospedales,~ Tao~Xiang~ and Shaogang~Gong
36
+ \IEEEcompsocitemizethanks{ \IEEEcompsocthanksitem Yanwei Fu is with Disney Research, Pittsburgh, PA, 15213, USA.
37
+ \IEEEcompsocthanksitem Timothy~M.~Hospedales, Tao~Xiang, and Shaogang~Gong are with the School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, UK.\protect\\
38
+ Email: \{y.fu,t.hospedales,t.xiang,s.gong\}@qmul.ac.uk
39
+ }\thanks{} }
40
+
41
+ \IEEEcompsoctitleabstractindextext{
42
+ \begin{abstract}
43
+ Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the \textit{projection domain shift} problem and propose a novel framework, {\em transductive multi-view embedding}, to solve it. The second limitation is the \textit{prototype sparsity} problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.
44
+
45
+ \end{abstract}
46
+ \begin{keywords}
47
+ Transductive learning, multi-view learning, transfer learning, zero-shot learning, heterogeneous hypergraph.
48
+ \end{keywords} } \maketitle
49
+
50
+
51
+ \section{Introduction}
52
+
53
+ Humans can distinguish 30,000 basic object classes \cite{object_cat_1987}
54
+ and many more subordinate ones (e.g.~breeds of dogs). They can also
55
+ create new categories dynamically from few examples or solely based
56
+ on high-level description. In contrast, most existing computer vision
57
+ techniques require hundreds of labelled samples for each object class
58
+ in order to learn a recognition model. Inspired by humans' ability
59
+ to recognise without seeing samples, and motivated by the prohibitive
60
+ cost of training sample collection and annotation, the research area
61
+ of \emph{learning to learn} or \emph{lifelong learning} \cite{PACbound2014ICML,chen_iccv13}
62
+ has received increasing interest. These studies aim to intelligently
63
+ apply previously learned knowledge to help future recognition tasks.
64
+ In particular, a major and topical challenge in this area is to build
65
+ recognition models capable of recognising novel visual categories
66
+ without labelled training samples, i.e.~zero-shot learning (ZSL).
67
+
68
+ The key idea underpinning ZSL approaches is to exploit knowledge
69
+ transfer via an intermediate-level semantic representation. Common
70
+ semantic representations include binary vectors of visual attributes
71
+ \cite{lampert2009zeroshot_dat,liu2011action_attrib,yanweiPAMIlatentattrib}
72
+ (e.g. 'hasTail' in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution})
73
+ and continuous word vectors \cite{wordvectorICLR,DeviseNIPS13,RichardNIPS13}
74
+ encoding linguistic context. In ZSL, two datasets with disjoint classes
75
+ are considered: a labelled auxiliary set where a semantic representation
76
+ is given for each data point, and a target dataset to be classified
77
+ without any labelled samples. The semantic representation is assumed
78
+ to be shared between the auxiliary/source and target/test
79
+ dataset. It can thus be re-used for knowledge transfer between the source and
80
+ target sets: a projection function mapping
81
+ low-level features to the semantic representation is learned from
82
+ the auxiliary data by classifier or regressor.
83
+ This projection is then applied to map each unlabelled
84
+ target class instance into the same semantic space.
85
+ In this space, a `prototype' of each target class is specified, and each projected target instance is classified
86
+ by measuring similarity to the class prototypes.
87
+ Depending on the semantic space, the class prototype could be a binary attribute
88
+ vector listing class properties (e.g., 'hasTail') \cite{lampert2009zeroshot_dat}
89
+ or a word vector describing the linguistic context of the textual
90
+ class name \cite{DeviseNIPS13}.
91
+
92
+ Two inherent problems exist in this conventional zero-shot learning
93
+ approach. The first problem is the \textbf{projection domain shift
94
+ problem}. Since the two datasets have different and potentially unrelated
95
+ classes, the underlying data distributions of the classes differ,
96
+ and so do the `ideal' projection functions between the low-level feature
97
+ space and the semantic spaces. Therefore, using the projection functions
98
+ learned from the auxiliary dataset/domain without any adaptation to
99
+ the target dataset/domain causes an unknown shift/bias. We call it
100
+ the \textit{projection domain shift} problem. This is illustrated
101
+ in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}, which
102
+ shows two object classes from the Animals with Attributes (AwA) dataset
103
+ \cite{lampert13AwAPAMI}: Zebra is one of the 40 auxiliary classes
104
+ while Pig is one of 10 target classes. Both of them share the same
105
+ `hasTail' semantic attribute, but the visual appearance of their tails
106
+ differs greatly (Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}(a)).
107
+ Similarly, many other attributes of Pig are visually different from
108
+ the corresponding attributes in the auxiliary classes. Figure \ref{fig:domain-shift:Low-level-feature-distribution}(b)
109
+ illustrates the projection domain shift problem by plotting (in 2D
110
+ using t-SNE \cite{tsne}) an 85D attribute space representation of
111
+ the image feature projections and class prototypes (85D binary attribute
112
+ vectors). A large discrepancy can be seen between the Pig prototype
113
+ in the semantic attribute space and the projections of its class member
114
+ instances, but not for the auxiliary Zebra class. This discrepancy
115
+ is caused when the projections learned from the 40 auxiliary classes
116
+ are applied directly to project the Pig instances -- what `hasTail'
117
+ (as well as the other 84 attributes) visually means is different now.
118
+ Such a discrepancy will inherently degrade the effectiveness of zero-shot
119
+ recognition of the Pig class because the target class instances are
120
+ classified according to their similarities/distances to those prototypes.
121
+ To our knowledge, this problem has neither been identified nor addressed
122
+ in the zero-shot learning literature.
123
+
124
+ \begin{figure}[t]
125
+ \centering{}\includegraphics[scale=0.26]{idea_illustration}\caption{\label{fig:domain-shift:Low-level-feature-distribution}An illustration
126
+ of the projection domain shift problem. Zero-shot prototypes are shown
127
+ as red stars and predicted semantic attribute projections
128
+ (defined in Sec.~3.2) shown in blue.}
129
+ \end{figure}
130
+
131
+
132
+ The second problem is the \textbf{prototype sparsity problem}: for
133
+ each target class, we only have a single prototype which is insufficient
134
+ to fully represent what that class looks like. As shown in Figs.~\ref{fig:t-SNE-visualisation-of}(b)
135
+ and (c), there often exist large intra-class variations and inter-class
136
+ similarities. Consequently, even if the single prototype is centred
137
+ among its class instances in the semantic representation space, existing
138
+ zero-shot classifiers will still struggle to assign correct class
139
+ labels -- one prototype per class is not enough to represent the intra-class
140
+ variability or help disambiguate class overlap \cite{Eleanor1977}.
141
+
142
+ In addition to these two problems, conventional approaches to
143
+ zero-shot learning are also limited in \textbf{exploiting multiple
144
+ intermediate semantic representations}. Each representation (or semantic
145
+ `view') may contain complementary information -- useful for
146
+ distinguishing different classes in different ways. While both visual attributes \cite{lampert2009zeroshot_dat,farhadi2009attrib_describe,liu2011action_attrib,yanweiPAMIlatentattrib}
147
+ and linguistic semantic representations such as word vectors \cite{wordvectorICLR,DeviseNIPS13,RichardNIPS13}
148
+ have been independently exploited successfully, it remains unattempted
149
+ and non-trivial to synergistically exploit multiple semantic
150
+ views. This is because they are often of very different dimensions
151
+ and types and each suffers from different domain shift effects discussed
152
+ above.
153
+
154
+
155
+
156
+
157
+ In this paper, we propose to solve the projection domain shift problem
158
+ using transductive multi-view embedding. The
159
+ transductive setting means using the unlabelled test data to improve
160
+ generalisation accuracy. In our framework, each unlabelled
161
+ target class instance is represented by multiple views: its low-level
162
+ feature view and its (biased) projections in multiple semantic spaces
163
+ (visual attribute space and word space in this work). To rectify the
164
+ projection domain shift between auxiliary and target datasets, we
165
+ introduce a multi-view semantic space alignment process to correlate
166
+ different semantic views and the low-level feature view by projecting
167
+ them onto a common latent embedding space learned using multi-view Canonical
168
+ Correlation Analysis (CCA) \cite{multiviewCCAIJCV}. The intuition is that when the biased target data projections (semantic representations) are correlated/aligned with their (unbiased) low-level feature representations, the bias/projection domain shift is alleviated. The effects of this process
169
+ on projection domain shift are illustrated by Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}(c),
170
+ where after alignment, the target Pig class prototype is much closer
171
+ to its member points in this embedding space. Furthermore, after exploiting
172
+ the complementarity of different low-level feature and semantic views
173
+ synergistically in the common embedding space, different target classes
174
+ become more compact and more separable (see Fig.~\ref{fig:t-SNE-visualisation-of}(d)
175
+ for an example), making the subsequent zero-shot recognition a much
176
+ easier task.
177
+
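For intuition, the following sketch aligns two views (low-level features and their predicted semantic projections) with standard two-view CCA and then classifies target instances by their nearest prototype in the shared space. The full method uses multi-view CCA over three or more views, so this two-view version is only a simplified stand-in, and all names and dimensions are illustrative assumptions.

\begin{verbatim}
import numpy as np
from sklearn.cross_decomposition import CCA

def embed_and_classify(target_feats, target_semantic, prototypes, dim=32):
    """target_feats: (N, Dx) low-level features of the unlabelled target data,
    target_semantic: (N, Da) their predicted semantic projections,
    prototypes: (C, Da) target-class prototypes in the same semantic space.
    dim must not exceed min(Dx, Da, N)."""
    cca = CCA(n_components=dim, scale=False)
    cca.fit(target_feats, target_semantic)     # transductive: fit on target data only
    _, z_instances = cca.transform(target_feats, target_semantic)
    # project the prototypes with the semantic-view rotation learned by CCA
    z_protos = (prototypes - target_semantic.mean(axis=0)) @ cca.y_rotations_
    # nearest-prototype zero-shot prediction via cosine similarity
    z_instances /= np.linalg.norm(z_instances, axis=1, keepdims=True)
    z_protos /= np.linalg.norm(z_protos, axis=1, keepdims=True)
    return np.argmax(z_instances @ z_protos.T, axis=1)   # class index per instance
\end{verbatim}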
178
+ Even with the proposed transductive multi-view embedding framework,
179
+ the prototype sparsity problem remains -- instead of one prototype
180
+ per class, a handful are now available depending on how many views
181
+ are embedded, which are still sparse. Our solution
182
+ is to pose this as a semi-supervised learning \cite{zhu2007sslsurvey}
183
+ problem: prototypes in each view are treated as labelled `instances',
184
+ and we exploit the manifold structure of the unlabelled data distribution
185
+ in each view in the embedding space via label propagation on a graph.
186
+ To this end, we introduce a novel transductive multi-view hypergraph
187
+ label propagation (TMV-HLP) algorithm for recognition. The
188
+ core of our TMV-HLP algorithm is a new \emph{distributed
189
+ representation} of graph structure termed heterogeneous
190
+ hypergraph which allows us to exploit the complementarity of different
191
+ semantic and low-level feature views, as well as the manifold structure
192
+ of the target data to compensate for the impoverished supervision
193
+ available from the sparse prototypes. Zero-shot learning is then
194
+ performed by semi-supervised label propagation from the prototypes
195
+ to the target data points within and across the graphs. The whole
196
+ framework is illustrated in Fig.~\ref{fig:The-pipeline-of}.
197
+
198
+ By combining our transductive embedding framework and the TMV-HLP
199
+ zero-shot recognition algorithm, our approach generalises seamlessly
200
+ when none (zero-shot), or few (N-shot) samples of the target classes
201
+ are available. Uniquely it can also synergistically exploit zero +
202
+ N-shot (i.e., both prototypes and labelled samples) learning. Furthermore,
203
+ the proposed method enables a number of novel cross-view annotation
204
+ tasks including \textit{zero-shot class description} and \textit{zero
205
+ prototype learning}.
206
+
207
+ \noindent \textbf{Our contributions}\quad{}Our contributions are
208
+ as follows: (1) To our knowledge, this is the first attempt to investigate
209
+ and provide a solution to the projection domain shift problem in zero-shot
210
+ learning. (2) We propose a transductive multi-view embedding space
211
+ that not only rectifies the projection shift, but also exploits the
212
+ complementarity of multiple semantic representations of visual data.
213
+ (3) A novel transductive multi-view heterogeneous hypergraph label
214
+ propagation algorithm is developed to improve both zero-shot and N-shot
215
+ learning tasks in the embedding space and overcome the prototype sparsity
216
+ problem. (4) The learned embedding space enables a number of novel
217
+ cross-view annotation tasks. Extensive experiments are carried
218
+ out and the results show that our approach significantly outperforms
219
+ existing methods for both zero-shot and N-shot recognition on three
220
+ image and video benchmark datasets.
221
+
222
+
223
+ \section{Related Work}
224
+
225
+ \noindent \textbf{Semantic spaces for zero-shot learning}\quad{}To
226
+ address zero-shot learning, attribute-based semantic representations have
227
+ been explored for images \cite{lampert2009zeroshot_dat,farhadi2009attrib_describe}
228
+ and to a lesser extent videos \cite{liu2011action_attrib,yanweiPAMIlatentattrib}.
229
+ Most existing studies \cite{lampert2009zeroshot_dat,hwang2011obj_attrib,palatucci2009zero_shot,parikh2011relativeattrib,multiattrbspace,Yucatergorylevel,labelembeddingcvpr13}
230
+ assume that an exhaustive ontology of attributes has been manually
231
+ specified at either the class or instance level. However, annotating
232
+ attributes scales poorly as ontologies tend to be domain specific.
233
+ This is despite efforts exploring augmented data-driven/latent attributes
234
+ at the expense of name-ability \cite{farhadi2009attrib_describe,liu2011action_attrib,yanweiPAMIlatentattrib}.
235
+ To address this, semantic representations using existing ontologies
236
+ and incidental data have been proposed \cite{marcuswhathelps,RohrbachCVPR12}.
237
+ Recently, \emph{word vector} approaches based on distributed language
238
+ representations have gained popularity. In this case a word space
239
+ is extracted from linguistic knowledge bases e.g.,~Wikipedia by natural
240
+ language processing models such as \cite{NgramNLP,wordvectorICLR}.
241
+ The language model is then used to project each class'
242
+ textual name into this space. These projections can be used as prototypes for zero-shot learning
243
+ \cite{DeviseNIPS13,RichardNIPS13}. Importantly, regardless of the
244
+ semantic spaces used, existing methods focus on either designing better
245
+ semantic spaces or how to best learn the projections. The former is
246
+ orthogonal to our work -- any semantic spaces can be used in our framework
247
+ and better ones would benefit our model. For the latter, no existing
248
+ work has identified or addressed the projection domain shift problem.
249
+
250
+ \noindent \textbf{Transductive zero-shot learning}
251
+ was considered by Fu et al.~\cite{fu2012attribsocial,yanweiPAMIlatentattrib}
252
+ who introduced a generative model for user-defined and latent
253
+ attributes. A simple transductive zero-shot learning algorithm was
254
+ proposed: averaging the prototype's k-nearest neighbours to exploit
255
+ the test data attribute distribution. Rohrbach
256
+ et al.~\cite{transferlearningNIPS}
257
+ proposed a more elaborate transductive strategy, using graph-based
258
+ label propagation to exploit the manifold structure of the test data.
259
+ These studies effectively transform the ZSL task into a transductive
260
+ semi-supervised learning task \cite{zhu2007sslsurvey} with prototypes
261
+ providing the few labelled instances. Nevertheless,
262
+ these studies and this paper (as with most previous work \cite{lampert13AwAPAMI,lampert2009zeroshot_dat,RohrbachCVPR12})
263
+ only consider recognition among the novel classes: unifying zero-shot
264
+ with supervised learning remains an open challenge \cite{RichardNIPS13}.
265
+
266
+ \noindent \textbf{Domain adaptation}\quad{}Domain adaptation methods
267
+ attempt to address the domain shift problems that occur when the assumption
268
+ that the source and target instances are drawn from the same distribution
269
+ is violated. Methods have been derived for both classification \cite{fernando2013unsupDAsubspace,duan2009transfer}
270
+ and regression \cite{storkey2007covariateShift}, and both with \cite{duan2009transfer}
271
+ and without \cite{fernando2013unsupDAsubspace} requiring label information
272
+ in the target task. Our zero-shot learning problem means that most
273
+ supervised domain adaptation methods are inapplicable. Our projection
274
+ domain shift problem differs from the conventional domain shift problems
275
+ in that (i) it is indirectly observed in terms of the projection
276
+ shift rather than the feature distribution shift, and (ii) the source
277
+ domain classes and target domain classes are completely different
278
+ and could even be unrelated. Consequently our domain adaptation method
279
+ differs significantly from the existing unsupervised ones such as
280
+ \cite{fernando2013unsupDAsubspace} in that our method relies on correlating different representations of the unlabelled
281
+ target data in a multi-view embedding space.
282
+
283
+ \noindent \textbf{Learning multi-view embedding spaces}\quad{}Relating
284
+ low-level feature and semantic views of data has been exploited in
285
+ visual recognition and cross-modal retrieval. Most existing work \cite{SocherFeiFeiCVPR2010,multiviewCCAIJCV,HwangIJCV,topicimgannot}
286
+ focuses on modelling images/videos with associated text (e.g. tags
287
+ on Flickr/YouTube). Multi-view CCA is often exploited to provide unsupervised
288
+ fusion of different modalities. However, there are two fundamental
289
+ differences between previous multi-view embedding work and ours: (1)
290
+ Our embedding space is transductive, that is, learned from unlabelled
291
+ target data from which all semantic views are estimated by projection
292
+ rather than being the original views. These projected views thus have
293
+ the projection domain shift problem that the previous work does not
294
+ have. (2) The objectives are different: we aim to rectify the projection
295
+ domain shift problem via the embedding in order to perform better
296
+ recognition and annotation while previous studies target primarily
297
+ cross-modal retrieval. Note that although in this work, the popular CCA model is adopted for multi-view embedding, other models \cite{Rosipal2006,DBLP:conf/iccv/WangHWWT13}
298
+ could also be considered.
299
+
300
+ \noindent \textbf{Graph-based label propagation}\quad{}In most previous
301
+ zero-shot learning studies (e.g., direct attribute prediction (DAP)
302
+ \cite{lampert13AwAPAMI}), the available knowledge (a single
303
+ prototype per target class) is very limited. There has therefore been
304
+ recent interest in additionally exploiting the unlabelled target data
305
+ distribution by transductive learning \cite{transferlearningNIPS,yanweiPAMIlatentattrib}.
306
+ However, both \cite{transferlearningNIPS} and \cite{yanweiPAMIlatentattrib}
307
+ suffer from the projection domain shift problem, and are unable to
308
+ effectively exploit multiple semantic representations/views. In contrast, after
309
+ embedding, our framework synergistically
310
+ integrates the low-level feature and semantic representations by transductive
311
+ multi-view hypergraph label propagation (TMV-HLP). Moreover, TMV-HLP
312
+ generalises beyond zero-shot to N-shot learning if labelled instances
313
+ are available for the target classes.
314
+
315
+ In a broader context, graph-based label propagation \cite{zhou2004graphLabelProp}
316
+ in general, and classification on multi-view graphs (C-MG) in particular
317
+ are well-studied in semi-supervised learning. Most
318
+ C-MG solutions are based on the seminal work of Zhou \emph{et al.}
319
+ \cite{Zhou2007ICML} which generalises spectral clustering from a
320
+ single graph to multiple graphs by defining a mixture of random walks
321
+ on multiple graphs. In the embedding space, instead
322
+ of constructing local neighbourhood graphs for each view independently
323
+ (e.g.~TMV-BLP \cite{embedding2014ECCV}), this paper proposes a \emph{distributed
324
+ representation} of pairwise similarity using heterogeneous
325
+ hypergraphs. Such a distributed heterogeneous hypergraph representation
326
+ can better explore the higher-order relations between any two nodes
327
+ of different complementary views, and thus give rise to a more robust pairwise similarity
328
+ graph and lead to better classification performance than previous
329
+ multi-view graph methods \cite{Zhou2007ICML,embedding2014ECCV}.
330
+ Hypergraphs have been used as an effective tool to align multiple
331
+ data/feature modalities in data mining \cite{Li2013a}, multimedia
332
+ \cite{fu2010summarize} and computer vision \cite{DBLP:journals/corr/LiLSDH13,Hong:2013:MHL:2503901.2503960}
333
+ applications. A hypergraph is the generalisation of a 2-graph with
334
+ edges connecting many nodes/vertices, versus connecting two nodes
335
+ in conventional 2-graphs. This makes it cope better with noisy nodes
336
+ and thus achieve better performance than conventional graphs \cite{videoObjHypergraph,ImgRetrHypergraph,fu2010summarize}.
337
+ The only existing work considering hypergraphs for multi-view data
338
+ modelling is \cite{Hong:2013:MHL:2503901.2503960}. Different from
339
+ the multi-view hypergraphs proposed in \cite{Hong:2013:MHL:2503901.2503960}
340
+ which are homogeneous, that is, constructed in each view independently,
341
+ we construct a multi-view heterogeneous hypergraph: using the nodes
342
+ from one view as query nodes to compute hyperedges in another view.
343
+ This novel graph structure better exploits the complementarity of
344
+ different views in the common embedding space.
345
+
346
+
347
+ \section{Learning a Transductive Multi-View Embedding Space}
348
+ A schematic overview of our framework is given in
349
+ Fig.~\ref{fig:The-pipeline-of}. We next introduce some notation
350
+ and assumptions, followed by the details of how to map image features
351
+ into each semantic space, and how to map multiple spaces into a common
352
+ embedding space.
353
+
354
+
355
+ \subsection{Problem setup \label{sub:Problem-setup}}
356
+
357
+ We have $c_{S}$ source/auxiliary classes with $n_{S}$ instances
358
+ $S=\{X_{S},Y_{S}^{i},\mathbf{z}_{S}\}$ and $c_{T}$ target classes
359
+ $T=\left\{ X_{T},Y_{T}^{i},\mathbf{z}_{T}\right\} $ with $n_{T}$
360
+ instances. $X_{S} \in \Bbb{R}^{n_{S}\times t}$ and $X_{T}\in \Bbb{R}^{n_{T}\times t}$ denote the $t$-dimensional low-level feature vectors of the auxiliary and target instances respectively.
361
+ $\mathbf{z}_{S}$ and $\mathbf{z}_{T}$ are the auxiliary and target
362
+ class label vectors. We assume the auxiliary and target classes are
363
+ disjoint: $\mathbf{z}_{S}\cap\mathbf{z}_{T}=\varnothing$. We have
364
+ $I$ different types of semantic representations; $Y_{S}^{i}$
365
+ and $Y_{T}^{i}$ represent the $i$-th type of $m_{i}$-dimensional
366
+ semantic representation for the auxiliary and target datasets respectively;
367
+ so $Y_{S}^{i}\in \Bbb{R}^{n_{S}\times m_{i}}$ and $Y_{T}^{i}\in \Bbb{R}^{n_{T}\times m_{i}}$.
368
+ Note that for the auxiliary dataset, $Y_{S}^{i}$ is given as each
369
+ data point is labelled. But for the target dataset, $Y_{T}^{i}$ is
370
+ missing, and its prediction $\hat{Y}_{T}^{i}$ from $X_{T}$ is used
371
+ instead. As we shall see, this is obtained using a projection
372
+ function learned from the auxiliary dataset. The problem of zero-shot
373
+ learning is to estimate $\mathbf{z}_{T}$ given $X_{T}$ and $\hat{Y}_{T}^{i}$.
374
+
375
+ Without any labelled data for the target classes, external knowledge
376
+ is needed to represent what each target class looks like, in the form
377
+ of class prototypes. Specifically, each target class $c$ has a pre-defined
378
+ class-level semantic prototype $\mathbf{y}_{c}^{i}$ in each semantic
379
+ view $i$. In this paper, we consider two types of intermediate semantic
380
+ representation (i.e.~$I=2$) -- attributes and word vectors, which
381
+ represent two distinct and complementary sources of information. We
382
+ use $\mathcal{X}$, $\mathcal{A}$ and $\mathcal{V}$ to denote the
383
+ low-level feature, attribute and word vector spaces respectively.
384
+ The attribute space $\mathcal{A}$ is typically manually defined using
385
+ a standard ontology. For the word vector space $\mathcal{V}$, we
386
+ employ the state-of-the-art skip-gram neural network model \cite{wordvectorICLR}
387
+ trained on all English Wikipedia articles\footnote{As of 13 Feb. 2014, this corpus includes 2.9 billion words drawn from a 4.33 million-word
388
+ vocabulary (single words and bi/tri-grams).}. Using this learned model, we can project the textual name of any
389
+ class into the $\mathcal{V}$ space to get its word vector representation.
390
+ Unlike semantic attributes, it is a `free' semantic representation
391
+ in that this process does not need any human annotation. We next address
392
+ how to project low-level features into these two spaces.
393
+
394
+ \begin{figure*}
395
+ \begin{centering}
396
+ \includegraphics[scale=0.45]{framework_illustration3}
397
+ \par\end{centering}
398
+
399
+ \caption{\label{fig:The-pipeline-of}The pipeline of our framework illustrated on the task of classifying unlabelled target data into two classes.}
400
+ \end{figure*}
401
+
402
+
403
+
404
+ \subsection{Learning the projections of semantic spaces }
405
+
406
+ Mapping images and videos into semantic space $i$ requires a projection
407
+ function $f^{i}:\mathcal{X}\to\mathcal{Y}^{i}$. This is typically
408
+ realised by a classifier \cite{lampert2009zeroshot_dat} or a regressor
409
+ \cite{RichardNIPS13}. In this paper, using the auxiliary set $S$,
410
+ we train support vector classifiers $f^{\mathcal{A}}(\cdot)$ and
411
+ support vector regressors $f^{\mathcal{V}}(\cdot)$ for each dimension\footnote{Note that methods for learning projection functions for all dimensions
412
+ jointly exist (e.g.~\cite{DeviseNIPS13}) and can be adopted in our
413
+ framework.} of the auxiliary class attribute and word vectors respectively. Then the target class instances $X_{T}$ have the semantic projections:
414
+ $\hat{Y}_{T}^{\mathcal{A}}=f^{\mathcal{A}}(X_{T})$ and $\hat{Y}_{T}^{\mathcal{V}}=f^{\mathcal{V}}(X_{T})$.
415
+ However, these predicted intermediate semantics have the projection
416
+ domain shift problem illustrated in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}.
417
+ To address this, we learn a transductive multi-view semantic embedding
418
+ space to align the semantic projections with the low-level features
419
+ of the target data.
420
+
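+ As a concrete illustration of this step, the following Python sketch trains
+ per-dimension support vector classifiers and regressors on the auxiliary set
+ and applies them to the target instances. It is a minimal sketch, not the
+ exact implementation used here: the synthetic data, the linear kernels and
+ all variable names are illustrative assumptions.
+ \begin{verbatim}
+ # Minimal sketch of the per-dimension projections f^A (SVC) and f^V (SVR).
+ # X_src/A_src/V_src/X_tgt are synthetic placeholders for the auxiliary
+ # features, attribute labels, word vectors and target features.
+ import numpy as np
+ from sklearn.svm import SVC, SVR
+ 
+ rng = np.random.RandomState(0)
+ X_src = rng.randn(200, 50)            # auxiliary low-level features (n_S x t)
+ A_src = (rng.rand(200, 8) > 0.5) * 1  # binary attribute labels (n_S x m_A)
+ V_src = rng.randn(200, 10)            # word-vector targets (n_S x m_V)
+ X_tgt = rng.randn(100, 50)            # unlabelled target features (n_T x t)
+ 
+ # one SVC per attribute dimension, one SVR per word-vector dimension
+ f_A = [SVC(kernel='linear', probability=True).fit(X_src, A_src[:, d])
+        for d in range(A_src.shape[1])]
+ f_V = [SVR(kernel='linear').fit(X_src, V_src[:, d])
+        for d in range(V_src.shape[1])]
+ 
+ # biased semantic projections of the target data (hat{Y}_T^A, hat{Y}_T^V)
+ Y_hat_A = np.column_stack([clf.predict_proba(X_tgt)[:, 1] for clf in f_A])
+ Y_hat_V = np.column_stack([reg.predict(X_tgt) for reg in f_V])
+ \end{verbatim}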
421
+
422
+ \subsection{Transductive multi-view embedding }
423
+
424
+ We introduce a multi-view semantic alignment (i.e.
425
+ transductive multi-view embedding) process to correlate target instances
426
+ in different (biased) semantic view projections with their low-level
427
+ feature view. This process alleviates the projection domain shift
428
+ problem, as well as providing a common space in which heterogeneous
429
+ views can be directly compared, and their complementarity exploited
430
+ (Sec.~\ref{sec:Recognition-by-Multi-view}). To this end, we employ multi-view
431
+ Canonical Correlation Analysis (CCA) for $n_{V}$ views, with the
432
+ target data representation in view $i$ denoted ${\Phi}^{i}$, a $n_{T}\times m_{i}$
433
+ matrix. Specifically, we project three views of each
434
+ target class instance $f^{\mathcal{A}}(X_{T})$, $f^{\mathcal{V}}(X_{T})$
435
+ and $X_{T}$ (i.e.~$n_{V}=I+1=3$) into a shared embedding space.
436
+ The three projection functions $W^{i}$ are learned by
437
+ \begin{eqnarray}
438
+ \mathrm{\underset{\left\{ W^{i}\right\} _{i=1}^{n_{V}}}{min}} & \sum_{i,j=1}^{n_{V}} & Trace(W^{i}\Sigma_{ij}W^{j})\nonumber \\
439
+ = & \sum_{i,j=1}^{n_{V}} & \parallel{\Phi}^{i}W^{i}-{\Phi}^{j}W^{j}\parallel_{F}^{2}\nonumber \\
440
+ \mathrm{s.t.} & \left[W^{i}\right]^{T}\Sigma_{ii}W^{i}=I & \left[\mathbf{w}_{k}^{i}\right]^{T}\Sigma_{ij}\mathbf{w}_{l}^{j}=0\nonumber \\
441
+ i\neq j,k\neq l & i,j=1,\cdots,n_{V} & k,l=1,\cdots,n_{T}\label{eq:multi-viewCCA}
442
+ \end{eqnarray}
443
+ where $W^{i}$ is the projection matrix which maps the view ${\Phi}^{i}$
444
+ ($\in \Bbb{R}^{n_{T}\times m_{i}}$) into the embedding space and $\mathbf{w}_{k}^{i}$
445
+ is the $k$th column of $W^{i}$\textcolor{black}{.} $\Sigma_{ij}$
446
+ is the covariance matrix between ${\Phi}^{i}$ and ${\Phi}^{j}$.
447
+ The optimisation problem above is multi-convex as long as $\Sigma_{ii}$
448
+ are non-singular. The local optimum can be easily found by iteratively
449
+ maximising over each $W^{i}$ given the current values of the other
450
+ coefficients as detailed in \cite{CCAoverview}.
451
+
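+ To make the embedding step concrete, the sketch below solves a classical
+ generalised-eigenvalue formulation of multi-view CCA (Kettenring-style MCCA)
+ rather than the iterative optimiser of \cite{CCAoverview}; it is an
+ illustrative alternative under that assumption, not the exact solver used here.
+ \begin{verbatim}
+ # Sketch of multi-view CCA via a generalised eigenvalue formulation
+ # (an illustrative alternative to the iterative solver).
+ import numpy as np
+ from scipy.linalg import eigh
+ 
+ def multiview_cca(views, reg=1e-3):
+     """views: list of (n_T x m_i) arrays. Returns per-view projection
+     matrices W^i and the eigenvalues used later for soft weighting."""
+     views = [v - v.mean(0) for v in views]
+     dims = [v.shape[1] for v in views]
+     Phi = np.hstack(views)
+     C = Phi.T @ Phi                    # covariance of all stacked views
+     D = np.zeros_like(C)               # block-diagonal within-view part
+     start = 0
+     for v, m in zip(views, dims):
+         D[start:start+m, start:start+m] = v.T @ v + reg * np.eye(m)
+         start += m
+     evals, evecs = eigh(C, D)          # solves C w = lambda D w
+     order = np.argsort(evals)[::-1]
+     evals, evecs = evals[order], evecs[:, order]
+     Ws, start = [], 0                  # split back into per-view W^i
+     for m in dims:
+         Ws.append(evecs[start:start+m, :])
+         start += m
+     return Ws, evals
+ 
+ rng = np.random.RandomState(0)
+ views = [rng.randn(100, 50), rng.randn(100, 10), rng.randn(100, 8)]
+ Ws, evals = multiview_cca(views)       # m_e = 50 + 10 + 8 dimensions
+ \end{verbatim}
+ 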
452
+ The dimensionality $m_{e}$ of the embedding space
453
+ is the sum of the input view dimensions, i.e.~$m_{e}=\sum_{i=1}^{n_{V}}m_{i}$,
454
+ so $W^{i}\in \Bbb{R}^{m_{i}\times m_{e}}$. Compared to the classic approach
455
+ to CCA \cite{CCAoverview} which projects to a lower dimension space,
456
+ this retains all the input information including uncorrelated dimensions
457
+ which may be valuable and complementary. Side-stepping the task of
458
+ explicitly selecting a subspace dimension, we use a more stable and
459
+ effective soft-weighting strategy to implicitly emphasise significant
460
+ dimensions in the embedding space. This can be seen as a generalisation
461
+ of standard dimension reducing approaches to CCA, which implicitly
462
+ define a binary weight vector that activates a subset of dimensions
463
+ and deactivates others. Since the importance of each dimension is
464
+ reflected by its corresponding eigenvalue \cite{CCAoverview,multiviewCCAIJCV},
465
+ we use the eigenvalues to weight the dimensions and define a \emph{weighted
466
+ embedding space} $\Gamma$:
467
+ \begin{equation}
468
+ {\Psi}^{i}={\Phi}^{i}W^{i}\left[D^{i}\right]^{\lambda}={\Phi}^{i}W^{i}\tilde{D}^{i},\label{eq:ccamapping}
469
+ \end{equation}
470
+ where $D^{i}$ is a diagonal matrix with its diagonal elements set
471
+ to the eigenvalues of the corresponding dimensions in the embedding space, $\lambda$
472
+ is a power weight of $D^{i}$ and empirically set to $4$ \cite{multiviewCCAIJCV},
473
+ and ${\Psi}^{i}$ is the final representation of the target data from
474
+ view $i$ in $\Gamma$. We index the $n_{V}=3$ views as $i\in\{\mathcal{X},\mathcal{V},\mathcal{A}\}$
475
+ for notational convenience. The same formulation can be used if more
476
+ views are available.
477
+
478
+ \noindent \textbf{Similarity in the embedding space}\quad{}The choice
479
+ of similarity metric is important for high-dimensional embedding spaces. For the subsequent recognition and annotation
480
+ tasks, we compute cosine similarity in $\Gamma$ via $l_{2}$ normalisation:
481
+ normalising any vector $\bm{\psi}_{k}^{i}$ (the $k$-th row of ${\Psi}^{i}$) to unit length (i.e.~$\parallel\bm{\psi}_{k}^{i}\parallel_{2}=1$).
482
+ Cosine similarity is given by the inner product of any two vectors
483
+ in $\Gamma$.
484
+
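+ The weighted embedding of Eq (\ref{eq:ccamapping}) and the cosine similarity
+ just described can be sketched as follows. The helper assumes the projection
+ matrices and eigenvalues of the previous sketch and is illustrative only.
+ \begin{verbatim}
+ # Sketch of Psi^i = Phi^i W^i [D^i]^lambda followed by l2 normalisation,
+ # so that inner products in Gamma are cosine similarities.
+ import numpy as np
+ 
+ def embed_view(Phi_i, W_i, evals, lam=4.0, eps=1e-12):
+     """Phi_i is assumed to be centred consistently with the CCA fit."""
+     D_lam = np.clip(evals, eps, None) ** lam    # eigenvalue weights
+     Psi = Phi_i @ W_i * D_lam
+     return Psi / (np.linalg.norm(Psi, axis=1, keepdims=True) + eps)
+ 
+ def cosine_sim(Psi_a, Psi_b):
+     """Cosine similarity = inner product of l2-normalised embeddings."""
+     return Psi_a @ Psi_b.T
+ \end{verbatim}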
485
+
486
+ \section{Recognition by Multi-view Hypergraph Label Propagation \label{sec:Recognition-by-Multi-view}}
487
+
488
+ For zero-shot recognition, each target class $c$ to be recognised
489
+ has a semantic prototype $\mathbf{y}_{c}^{i}$ in each view $i$.
490
+ Similarly, we have three views of each unlabelled instance $f^{\mathcal{A}}(X_{T})$,
491
+ $f^{\mathcal{V}}(X_{T})$ and $X_{T}$. The class prototypes are expected
492
+ to be the mean of the distribution of their class in semantic space,
493
+ since the projection function $f^{i}$ is trained to map instances
494
+ to their class prototype in each semantic view. To exploit the learned
495
+ space $\Gamma$ to improve recognition, we project both the unlabelled
496
+ instances and the prototypes into the embedding space\footnote{Before being projected into $\Gamma$, the prototypes
497
+ are updated by the semi-latent zero-shot learning algorithm in~\cite{yanweiPAMIlatentattrib}.}. The prototypes $\mathbf{y}_{c}^{i}$ for views $i\in\{\mathcal{A},\mathcal{V}\}$
498
+ are projected as $\bm{\psi}_{c}^{i}=\mathbf{y}_{c}^{i}W^{i}\tilde{D}^{i}$.
499
+ So we have $\bm{\psi}_{c}^{\mathcal{A}}$ and $\bm{\psi}_{c}^{\mathcal{\mathcal{V}}}$
500
+ for the attribute and word vector prototypes of each target class
501
+ $c$ in $\Gamma$. In the absence of a prototype for the (non-semantic)
502
+ low-level feature view $\mathcal{X}$, we synthesise it as $\bm{\psi}_{c}^{\mathcal{X}}=(\bm{\psi}_{c}^{\mathcal{A}}+\bm{\psi}_{c}^{\mathcal{\mathcal{V}}})/2$.
503
+ If labelled data is available (i.e., N-shot case), these are also projected
504
+ into the space. Recognition could now be achieved using NN classification
505
+ with the embedded prototypes/N-shots as labelled data. However, this
506
+ does not effectively exploit the multi-view complementarity, and suffers
507
+ from labelled data (prototype) sparsity. To solve this problem, we next introduce a unified
508
+ framework to fuse the views and transductively exploit the manifold
509
+ structure of the unlabelled target data to perform zero-shot and N-shot
510
+ learning.
511
+
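+ A minimal sketch of this prototype embedding step is given below; it reuses
+ the helper of the earlier embedding sketch, and the prototype matrices and
+ names are illustrative assumptions.
+ \begin{verbatim}
+ # Sketch: project the attribute and word-vector prototypes into Gamma and
+ # synthesise the missing low-level-feature-view prototype as their mean.
+ import numpy as np
+ 
+ def embed_prototypes(proto_A, proto_V, W_A, W_V, evals, lam=4.0):
+     """proto_A: (c_T x m_A), proto_V: (c_T x m_V) class prototypes,
+     centred consistently with the CCA fit; embed_view as defined earlier."""
+     psi_A = embed_view(proto_A, W_A, evals, lam)   # attribute prototypes
+     psi_V = embed_view(proto_V, W_V, evals, lam)   # word-vector prototypes
+     psi_X = (psi_A + psi_V) / 2.0                  # synthesised X-view prototypes
+     psi_X /= (np.linalg.norm(psi_X, axis=1, keepdims=True) + 1e-12)
+     return {'A': psi_A, 'V': psi_V, 'X': psi_X}
+ \end{verbatim}
+ 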
512
+ Most or all of the target instances are unlabelled, so classification
513
+ based on the sparse prototypes is effectively a semi-supervised learning
514
+ problem \cite{zhu2007sslsurvey}. We leverage graph-based semi-supervised
515
+ learning to exploit the manifold structure of the unlabelled data
516
+ transductively for classification. This differs from the conventional
517
+ approaches such as direct attribute prediction (DAP) \cite{lampert13AwAPAMI}
518
+ or NN, which too simplistically assume that the data distribution
519
+ for each target class is Gaussian or multinomial. However, since
520
+ our embedding space contains multiple projections of the target data
521
+ and prototypes, it is hard to define a single graph that synergistically
522
+ exploits the manifold structure of all views. We therefore construct
523
+ multiple graphs within and across views in a transductive multi-view
524
+ hypergraph label propagation (TMV-HLP) model. Specifically, we construct the
525
+ heterogeneous hypergraphs across views to combine/align the different
526
+ manifold structures so as to enhance the robustness and exploit the
527
+ complementarity of different views. Semi-supervised learning is then
528
+ performed by propagating the labels from the sparse prototypes
529
+ (zero-shot) and/or the few labelled target instances (N-shot) to the
530
+ unlabelled data using random walk on the graphs.
531
+
532
+ \begin{figure*}[t]
533
+ \centering{}\includegraphics[scale=0.4]{illustrate_hypergraph}
534
+ \caption{\label{fig:Outliers-illustrations}An example of constructing heterogeneous
535
+ hypergraphs. Suppose in the embedding space, we have 14 nodes belonging
536
+ to 7 data points $A$, $B$, $C$, $D$, $E$, $F$ and $G$ of two
537
+ views -- view $i$ (rectangle) and view $j$ (circle). Data points
538
+ $A$,$B$,$C$ and $D$,$E$,$F$,$G$ belong to two different classes
539
+ -- red and green respectively. The multi-view semantic embedding maximises
540
+ the correlations (connected by black dash lines) between the two views
541
+ of the same node. Two hypergraphs are shown ($\mathcal{G}^{ij}$ at
542
+ the left and $\mathcal{G}^{ji}$ at the right) with the heterogeneous
543
+ hyperedges drawn with red/green dash ovals for the nodes of red/green
544
+ classes. Each hyperedge consists of two most similar nodes to the
545
+ query node. }
546
+ \end{figure*}
547
+
548
+ \subsection{Constructing heterogeneous hypergraphs}
549
+
550
+ \label{sub:Heterogenous-sub-hypergraph} \textbf{Pairwise node similarity}\quad{}The
551
+ key idea behind a hypergraph based method is to group similar data
552
+ points, represented as vertices/nodes on a graph, into hyperedges, so that the subsequent computation is less sensitive to individual noisy nodes.
553
+ With the hyperedges, the pairwise similarity between two data points
554
+ is measured as the similarity between the two hyperedges that they
555
+ belong to, instead of that between the two nodes only. For both forming
556
+ hyperedges and computing the similarity between two hyperedges, pairwise
557
+ similarity between two graph nodes needs to be defined. In our embedding
558
+ space $\Gamma$, each data point in each view defines a node, and
559
+ the similarity between any pair of nodes is:
560
+ \begin{equation}
561
+ \omega(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j})=\exp(\frac{<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}}{\varpi})\label{eq:sim_graph}
562
+ \end{equation}
563
+ where $<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}$ is the square of
564
+ inner product between the $i$th and $j$th projections of nodes $k$
565
+ and $l$ with a bandwidth parameter $\varpi$\footnote{Most previous work \cite{transferlearningNIPS,Zhou2007ICML} sets
566
+ $\varpi$ by cross-validation. Inspired by \cite{lampertTutorial},
567
+ a simpler strategy for setting $\varpi$ is adopted: $\varpi\thickapprox\underset{k,l=1,\cdots,n}{\mathrm{median}}<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}$
568
+ in order to have roughly the same number of similar and dissimilar
569
+ sample pairs. This makes the edge weights from different pairs of
570
+ nodes more comparable.}. Note that Eq (\ref{eq:sim_graph}) defines the pairwise similarity
571
+ between any two nodes within the same view ($i=j$) or across different
572
+ views ($i\neq j$).
573
+
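+ A sketch of Eq (\ref{eq:sim_graph}) together with the median heuristic of the
+ footnote is given below; it is vectorised over all node pairs and the names
+ are illustrative only.
+ \begin{verbatim}
+ # Sketch of the pairwise node similarity: omega = exp(<.,.>^2 / varpi),
+ # with varpi set to the median squared inner product (median heuristic).
+ import numpy as np
+ 
+ def node_similarity(Psi_i, Psi_j):
+     """Psi_i, Psi_j: l2-normalised embeddings of two (possibly equal) views."""
+     ip2 = (Psi_i @ Psi_j.T) ** 2          # squared inner products
+     varpi = np.median(ip2) + 1e-12        # bandwidth via the median heuristic
+     return np.exp(ip2 / varpi)
+ \end{verbatim}
+ 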
574
+ \noindent \textbf{Heterogeneous hyperedges}\quad{}Given the multi-view
575
+ projections of the target data, we aim to construct a set of across-view
576
+ heterogeneous hypergraphs
577
+ \begin{equation}
578
+ \mathcal{G}^{c}=\left\{ \mathcal{G}^{ij}\mid i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\neq j\right\} \label{eq:hypergraph_def}
579
+ \end{equation}
580
+ where $\mathcal{G}^{ij}=\left\{ \Psi^{i},E^{ij},\Omega^{ij}\right\} $
581
+ denotes the cross-view heterogeneous hypergraph from view $i$ to
582
+ $j$ (in that order) and $\Psi^{i}$ is the node
583
+ set in view $i$; $E^{ij}$ is the hyperedge set and $\Omega^{ij}$
584
+ is the pairwise node similarity set for the hyperedges. Specifically,
585
+ we have the hyperedge set $E^{ij}=\left\{ e_{\bm{\psi}_{k}^{i}}^{ij}\mid i\neq j,\: k=1,\cdots n_{T}+c_{T}\right\} $
586
+ where each hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$ includes the nodes\footnote{Both the unlabelled samples and the prototypes are
587
+ nodes.} in view $j$ that are the most similar to node $\bm{\psi}_{k}^{i}$
588
+ in view $i$ and the similarity set $\Omega^{ij}=\left\{ \Delta_{\bm{\psi}_{k}^{i}}^{ij}=\left\{ \omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\right\} \mid i\neq j,\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij}\: k=1,\cdots n_{T}+c_{T}\right\} $
589
+ where $\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)$ is
590
+ computed using Eq (\ref{eq:sim_graph}).
591
+
592
+ \noindent We call $\bm{\psi}_{k}^{i}$ the query
593
+ node for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$, since the hyperedge
594
+ $e_{\bm{\psi}_{k}^{i}}^{ij}$ intrinsically groups all nodes in view
595
+ $j$ that are most similar to node $\bm{\psi}_{k}^{i}$ in view $i$.
596
+ Similarly, $\mathcal{G}^{ji}$ can be constructed by using nodes
597
+ from view $j$ to query nodes in view $i$. Therefore given three
598
+ views, we have six across-view (heterogeneous) hypergraphs. Figure \ref{fig:Outliers-illustrations}
600
+ illustrates two heterogeneous hypergraphs constructed from two views.
600
+ Interestingly, our way of defining hyperedges naturally corresponds
601
+ to the star expansion \cite{hypergraphspectral} where the query node
602
+ (i.e.~$\bm{\psi}_{k}^{i}$) is introduced to connect each node in
603
+ the hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
604
+
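+ The construction of the heterogeneous hyperedges can be sketched as follows;
+ the hyperedge size is an illustrative parameter and the similarity helper is
+ the one from the previous sketch.
+ \begin{verbatim}
+ # Sketch: every node of view i acts as a query and its hyperedge collects
+ # the nodes of view j most similar to it (with their similarities).
+ import numpy as np
+ 
+ def build_hyperedges(Psi_i, Psi_j, n_edge=2):
+     """Returns the member indices (hyperedges E^{ij}) and the corresponding
+     similarity sets (Delta^{ij}) for every query node of view i."""
+     Omega = node_similarity(Psi_i, Psi_j)          # pairwise similarities
+     members = np.argsort(-Omega, axis=1)[:, :n_edge]
+     sims = np.take_along_axis(Omega, members, axis=1)
+     return members, sims
+ \end{verbatim}
+ 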
605
+ \noindent \textbf{Similarity strength of hyperedge}\quad{}For each hyperedge
606
+ $e_{\bm{\psi}_{k}^{i}}^{ij}$, we measure its similarity strength
607
+ by using its query node $\bm{\psi}_{k}^{i}$. Specifically,
608
+ we use the weight $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ to indicate the
609
+ similarity strength of nodes connected within the hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
610
+ Thus, we define $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ based on the mean
611
+ similarity of the set $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ for the hyperedge
612
+ \begin{align}
613
+ \delta_{\bm{\psi}_{k}^{i}}^{ij} & =\frac{1}{\mid e_{\bm{\psi}_{k}^{i}}^{ij}\mid}\sum_{\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\in\Delta_{\bm{\psi}_{k}^{i}}^{ij},\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij}}\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right),\label{eq:heterogenous_similarity_weight}
614
+ \end{align}
615
+ where $\mid e_{\bm{\psi}_{k}^{i}}^{ij}\mid$ is the cardinality of
616
+ hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
617
+
618
+ \noindent In the embedding space $\Gamma$, similarity
619
+ sets $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ and $\Delta_{\bm{\psi}_{l}^{i}}^{ij}$
620
+ can be compared. Nevertheless, these sets come from heterogeneous views
621
+ and have varying scales. Thus some normalisation steps are necessary
622
+ to make the two similarity sets more comparable and the subsequent
623
+ computation more robust. Specifically, we apply z-score normalisation
625
+ to the similarity sets: (a) We assume that each similarity set $\Delta_{\bm{\psi}_{k}^{i}}^{ij}\in\Omega^{ij}$
626
+ follows a Gaussian distribution,
627
+ and therefore apply z-score normalisation to each $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$.
628
+ (b) We further assume that the similarities in $\Omega^{ij}$
629
+ between all the query nodes $\bm{\psi}_{k}^{i}$ ($k=1,\cdots,n_{T}$)
630
+ from view $i$ and a given node $\bm{\psi}_{l}^{j}$ also follow a Gaussian
631
+ distribution, so we again apply z-score normalisation to the pairwise
632
+ similarities between $\bm{\psi}_{l}^{j}$ and all query nodes from
633
+ view $i$. (c) We select the $K$
633
+ highest values of $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ as the new similarity
634
+ set $\bar{\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$ for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
635
+ $\bar{\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$ is then used in Eq (\ref{eq:heterogenous_similarity_weight})
636
+ in place of ${\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$. These normalisation
637
+ steps aim to compute a more robust similarity between each pair of
638
+ hyperedges.
639
+
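+ The hyperedge strength of Eq (\ref{eq:heterogenous_similarity_weight})
+ together with the normalisation steps (a)--(c) can be sketched as follows;
+ the exact normalisation protocol is paraphrased and the parameters are
+ illustrative.
+ \begin{verbatim}
+ # Sketch: z-score normalise the similarity sets (rows, then columns), keep
+ # the K highest similarities per hyperedge, and average them to obtain the
+ # hyperedge similarity strength delta.
+ import numpy as np
+ 
+ def hyperedge_strength(Psi_i, Psi_j, n_edge=30, K=10, eps=1e-12):
+     Omega = node_similarity(Psi_i, Psi_j)
+     # (a) normalise each query's similarity set (rows)
+     Omega = (Omega - Omega.mean(1, keepdims=True)) / (Omega.std(1, keepdims=True) + eps)
+     # (b) normalise the similarities to each view-j node over all queries (columns)
+     Omega = (Omega - Omega.mean(0, keepdims=True)) / (Omega.std(0, keepdims=True) + eps)
+     members = np.argsort(-Omega, axis=1)[:, :n_edge]        # hyperedge members
+     member_sims = np.take_along_axis(Omega, members, axis=1)
+     topK = -np.sort(-member_sims, axis=1)[:, :K]             # (c) top-K selection
+     delta = topK.mean(1)                                     # mean similarity
+     return members, member_sims, delta
+ \end{verbatim}
+ 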
640
+ \noindent \textbf{Computing similarity between
641
+ hyperedges} \quad{} With the hypergraph, the similarity between two nodes is computed
642
+ using their hyperedges $e_{\bm{\psi}_{k}^{i}}^{ij}$.
643
+ Specifically, for each hyperedge there is an associated incidence
644
+ matrix $H^{ij}=\left(h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\right)_{(n_{T}+c_{T})\times\mid E^{ij}\mid}$
645
+ where
646
+ \begin{equation}
647
+ h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)=\begin{cases}
648
+ 1 & \textrm{if}\:\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij}\\
649
+ 0 & \textrm{otherwise}
650
+ \end{cases}\label{eq:heterogenous_hard_incidence_matrix}
652
+ \end{equation}
653
+ To take into consideration the similarity strength between hyperedge
654
+ and query node, we extend the binary valued hyperedge incidence matrix
655
+ $H^{ij}$ to soft-assigned incidence matrix $SH^{ij}=\left(sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\right)_{(n_{T}+c_{T})\times\mid E^{ij}\mid}$
656
+ as follows
657
+ \begin{equation}
658
+ sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)=\delta_{\bm{\psi}_{k}^{i}}^{ij}\cdot\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\cdot h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\label{eq:soft_incident_matrix}
659
+ \end{equation}
660
+ This soft-assigned incidence matrix is the product of three components:
661
+ (1) the weight $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$;
662
+ (2) the pairwise similarity computed using the query node $\bm{\psi}_{k}^{i}$;
663
+ (3) the binary valued hyperedge incidence matrix element $h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)$.
664
+ To make the values of $SH^{ij}$ comparable among the different heterogeneous
665
+ views, we apply $l_{2}$ normalisation to the soft-assigned incidence
666
+ matrix values over all nodes incident to each hyperedge.
667
+
668
+ Now for each heterogeneous hypergraph, we can finally define the pairwise
669
+ similarity between any two nodes or hyperedges. Specifically for $\mathcal{G}^{ij}$,
670
+ the similarity between the $o$-th and $l$-th nodes is
671
+ \begin{equation}
672
+ \omega_{c}^{ij}\left(\bm{\psi}_{o}^{j},\bm{\psi}_{l}^{j}\right)=\sum_{e_{\bm{\psi}_{k}^{i}}^{ij}\in E^{ij}}sh\left(\bm{\psi}_{o}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\cdot sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right).\label{eq:hyperedge_weights}
673
+ \end{equation}
674
+
675
+
676
+ With this pairwise hyperedge similarity, the hypergraph definition
677
+ is now complete. Empirically, given a node, other nodes on the graph that have very low similarities will have very limited effects on its label.
678
+ Thus, to reduce computational cost, we only use the K-nearest-neighbour
679
+ (KNN)\footnote{$K=30$. It can be varied from $10\sim50$ with
680
+ little effect in our experiments.} nodes of each node~\cite{zhu2007sslsurvey} for the subsequent label propagation step.
681
+
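+ The soft-assigned incidence matrix of Eq (\ref{eq:soft_incident_matrix}) and
+ the resulting pairwise similarity of Eq (\ref{eq:hyperedge_weights}) can be
+ sketched as follows, reusing the hyperedge members, similarities and strengths
+ of the previous sketch.
+ \begin{verbatim}
+ # Sketch: build the soft-assigned incidence matrix SH^{ij} and obtain the
+ # hyperedge-based pairwise node similarity as SH SH^T.
+ import numpy as np
+ 
+ def hypergraph_similarity(members, member_sims, delta, n_nodes, eps=1e-12):
+     """members/member_sims/delta: per-hyperedge outputs of the previous
+     sketch; n_nodes = n_T + c_T nodes of view j."""
+     n_edges = members.shape[0]
+     SH = np.zeros((n_nodes, n_edges))       # rows: nodes, columns: hyperedges
+     for k in range(n_edges):                # k indexes the query node of view i
+         SH[members[k], k] = delta[k] * member_sims[k]
+     # l2-normalise the incidence values of each hyperedge (column)
+     SH /= (np.linalg.norm(SH, axis=0, keepdims=True) + eps)
+     return SH @ SH.T                        # pairwise similarity on G^{ij}
+ \end{verbatim}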
682
+
683
+ \noindent \textbf{The advantages of heterogeneous hypergraphs}\quad{}We
684
+ argue that the pairwise similarity of heterogeneous hypergraphs is
685
+ a distributed representation \cite{Bengio:2009:LDA:1658423.1658424}. To see this, we can use the star
686
+ expansion \cite{hypergraphspectral} to convert a hypergraph into a
687
+ 2-graph. For each hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$, the query
688
+ node $\bm{\psi}_{k}^{i}$ is used to compute the pairwise similarity
689
+ $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ of all the nodes in view $j$.
690
+ Each hyperedge can thus define a hyper-plane by categorising the nodes
691
+ in view $j$ into two groups: a strong and a weak similarity group with
692
+ respect to the query node $\bm{\psi}_{k}^{i}$. In other words, the hyperedge
693
+ set $E^{ij}$ induces a multi-clustering with linearly separated regions
694
+ (one per hyperplane) for each class. Since the final pairwise similarity
695
+ in Eq (\ref{eq:hyperedge_weights}) can be represented by a set of
696
+ similarity weights computed by hyperedge, and such weights are not
697
+ mutually exclusive and are statistically independent, we consider
698
+ the heterogeneous hypergraph a distributed representation. The advantage
699
+ of having a distributed representation has been studied by Watts and
700
+ Strogatz~\cite{Watts-Colective-1998,Watts.2004}, who show that
701
+ such a representation gives rise to better convergence rates and better
702
+ clustering abilities. In contrast, the homogeneous hypergraphs adopted
703
+ by previous work \cite{ImgRetrHypergraph,fu2010summarize,Hong:2013:MHL:2503901.2503960}
704
+ do not have this property, which makes them less robust to noise.
705
+ In addition, fusing different views in the early stage of graph construction
706
+ can potentially lead to better exploitation of the complementarity
707
+ of different views. However,
708
+ it is worth pointing out that (1) the reason we can query nodes across
709
+ views to construct heterogeneous hypergraphs is that we have projected
710
+ all views into the same embedding space in the first place. (2) Hypergraphs
711
+ typically gain robustness at the cost of losing discriminative power
712
+ -- they essentially blur the boundaries between different clusters/classes
713
+ by averaging over hyperedges. A typical solution is to
714
+ fuse hypergraphs with 2-graphs~\cite{fu2010summarize,Hong:2013:MHL:2503901.2503960,Li2013a},
715
+ which we adopt here as well.
716
+
717
+
718
+ \subsection{Label propagation by random walk}
719
+
720
+ Now we have two types of graphs: heterogeneous hypergraphs $\mathcal{G}^{c}=\left\{ \mathcal{G}^{ij}\right\} $
721
+ and 2-graphs\footnote{That is the K-nearest-neighbour graph of each view
722
+ in $\Gamma$ \cite{embedding2014ECCV}.} $\mathcal{G}^{p}=\left\{ \mathcal{G}^{i}\right\} $.
723
+ Given three views ($n_{V}=3$), we thus have nine graphs in total
724
+ (six hypergraphs and three 2-graphs). To classify
725
+ the unlabelled nodes, we need to propagate label information from the prototype
726
+ nodes across the graph. Such semi-supervised label propagation \cite{Zhou2007ICML,zhu2007sslsurvey}
727
+ has a closed-form solution and can be interpreted as a random walk. A random walk requires a pairwise transition probability
728
+ between nodes $k$ and $l$. We obtain this by aggregating the information
729
+ from all graphs $\mathcal{G}=\left\{ \mathcal{G}^{p};\mathcal{G}^{c}\right\} $,
730
+ \begin{align}
731
+ p\left(k\rightarrow l\right) & =\sum_{i\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} }p\left(k\rightarrow l\mid\mathcal{G}^{i}\right)\cdot p\left(\mathcal{G}^{i}\mid k\right)+\label{eq:transition_probability}\\
732
+ & \sum_{i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\ne j}p\left(k\rightarrow l\mid\mathcal{G}^{ij}\right)\cdot p\left(\mathcal{G}^{ij}\mid k\right)\nonumber
733
+ \end{align}
734
+ where
735
+ \begin{equation}
736
+ p\left(k\rightarrow l\mid\mathcal{G}^{i}\right)=\frac{\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{i})}{\sum_{o}\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{o}^{i})},\label{eq:prob_graphs-1}
737
+ \end{equation}
738
+ and
739
+ \[
740
+ p\left(k\rightarrow l\mid\mathcal{G}^{ij}\right)=\frac{\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{l}^{j})}{\sum_{o}\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{o}^{j})}
741
+ \]
742
+ and the posterior probability of choosing graph $\mathcal{G}^{i}$
743
+ at projection/node $\bm{\psi}_{k}^{i}$ is:
744
+ \begin{align}
745
+ p(\mathcal{G}^{i}|k) & =\frac{\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})}{\sum_{i}\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})+\sum_{ij}\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}\label{eq:post_prob_i}\\
746
+ p(\mathcal{G}^{ij}|k) & =\frac{\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}{\sum_{i}\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})+\sum_{ij}\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}
747
+ \end{align}
748
+ \noindent where $p(\mathcal{G}^{i})$ and $p(\mathcal{G}^{ij})$ are
749
+ the prior probability of graphs $\mathcal{G}^{i}$ and $\mathcal{G}^{ij}$
750
+ in the random walk. This probability expresses prior expectation about
751
+ the informativeness of each graph. The same\emph{ }Bayesian model
752
+ averaging \cite{embedding2014ECCV} can be used here to estimate these
753
+ prior probabilities. However, the computational cost increases combinatorially
754
+ with the number of views, and it turns out that the prior is
755
+ not critical to the results of our framework. Therefore, a uniform prior
756
+ is used in our experiments.
757
+
758
+ The stationary probabilities for node $k$ in $\mathcal{G}^{i}$ and
759
+ $\mathcal{G}^{ij}$ are
760
+ \begin{align}
761
+ \pi(k|\mathcal{G}^{i}) & =\frac{\sum_{l}\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{i})}{\sum_{o}\sum_{l}\omega_{p}^{i}(\bm{\psi}_{o}^{i},\bm{\psi}_{l}^{i})}\label{eq:stati_prob_i}\\
762
+ \pi(k|\mathcal{G}^{ij}) & =\frac{\sum_{l}\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{l}^{j})}{\sum_{k}\sum_{o}\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{o}^{j})}\label{eq:heterogeneous_stationary_prob}
763
+ \end{align}
764
+
765
+
766
+ Finally, the stationary probability across the multi-view hypergraph
767
+ is computed as:
768
+ \begin{align}
769
+ \pi(k) & =\sum_{i\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} }\pi(k|\mathcal{G}^{i})\cdot p(\mathcal{G}^{i})+\label{eq:stat_prob}\\
770
+ & \sum_{i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\neq j}\pi(k|\mathcal{G}^{ij})\cdot p(\mathcal{G}^{ij})
771
+ \end{align}
772
+
773
+
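+ The fusion of the per-graph random walks of Eq (\ref{eq:transition_probability})
+ and Eq (\ref{eq:stat_prob}) with uniform priors can be sketched as follows;
+ the similarity matrices of the 2-graphs and hypergraphs are assumed to have
+ been computed as above.
+ \begin{verbatim}
+ # Sketch: row-normalised per-graph transition matrices, degree-based
+ # per-graph stationary probabilities, node-wise graph posteriors, and the
+ # fused transition matrix P and stationary probability pi.
+ import numpy as np
+ 
+ def fuse_graphs(similarity_mats, eps=1e-12):
+     """similarity_mats: list of (n x n) similarity matrices, one per graph
+     (three 2-graphs and six heterogeneous hypergraphs for n_V = 3)."""
+     prior = 1.0 / len(similarity_mats)                      # uniform prior
+     P_g = [W / (W.sum(1, keepdims=True) + eps) for W in similarity_mats]
+     pi_g = [W.sum(1) / (W.sum() + eps) for W in similarity_mats]
+     post = np.stack([prior * pk for pk in pi_g], axis=1)    # p(G | k)
+     post /= (post.sum(1, keepdims=True) + eps)
+     P = sum(post[:, g:g+1] * P_g[g] for g in range(len(P_g)))
+     pi = sum(prior * pk for pk in pi_g)
+     return P, pi
+ \end{verbatim}
+ 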
774
+ \noindent Given the defined graphs and random walk process, we can
775
+ derive our label propagation algorithm (TMV-HLP). Let $P$ denote
776
+ the transition probability matrix defined by Eq (\ref{eq:transition_probability})
777
+ and $\Pi$ the diagonal matrix with the elements $\pi(k)$ computed
778
+ by Eq (\ref{eq:stat_prob}). The Laplacian matrix $\mathcal{L}$ combines
779
+ information of different views and is defined as: $\mathcal{L}=\Pi-\frac{\Pi P+P^{T}\Pi}{2}.$
780
+ The label matrix $Z$ for labelled N-shot data or zero-shot prototypes
781
+ is defined as:
782
+ \begin{equation}
783
+ Z(q_{k},c)=\begin{cases}
784
+ 1 & q_{k}\in\textrm{class } c\\
785
+ -1 & q_{k}\notin\textrm{class } c\\
786
+ 0 & \textrm{unknown}
787
+ \end{cases}\label{eq:initial_label}
793
+ \end{equation}
794
+ Given the label matrix $Z$ and Laplacian $\mathcal{L}$, label propagation
795
+ on multiple graphs has the closed-form solution \cite{Zhou2007ICML}: $\hat{Z}=\eta(\eta\Pi+\mathcal{L})^{-1}\Pi Z$ where $\eta$ is
796
+ a regularisation parameter\footnote{It can be varied from $1$ to $10$ with little effect in our experiments.}. Note that in our framework, both labelled target class instances
797
+ and prototypes are modelled as graph nodes. Thus the difference between
798
+ zero-shot and N-shot learning lies only on the initial labelled instances:
799
+ Zero-shot learning has the prototypes as labelled nodes; N-shot has
800
+ instances as labelled nodes; and a new condition exploiting both prototypes
801
+ and N-shot together is possible. This unified recognition framework
802
+ thus applies when either or both of prototypes and labelled instances
803
+ are available. The computational cost of our TMV-HLP
804
+ is $\mathcal{O}\left((c_{T}+n_{T})^{2}\cdot n_{V}^{2}+(c_{T}+n_{T})^{3}\right)$,
805
+ where $K$ is the number of nearest neighbours in the KNN graphs,
806
+ and $n_{V}$ is the number of views. It costs $\mathcal{O}((c_{T}+n_{T})^{2}\cdot n_{V}^{2})$
807
+ to construct the heterogeneous graphs, while inverting the Laplacian
808
+ matrix $\mathcal{L}$ in the label propagation step takes $\mathcal{O}((c_{T}+n_{T})^{3})$
809
+ computational time, which however can be further reduced to $\mathcal{O}(c_{T}n_{T}t)$ using the recent
810
+ work of Fujiwara et al.~\cite{FujiwaraICML2014efficientLP}, where $t$ is an iteration parameter
811
+ in their paper and $t\ll n_{T}$.
812
+
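+ A minimal sketch of the closed-form propagation step is given below; P and pi
+ are the fused transition matrix and stationary probabilities from the previous
+ sketch, and Z is the initial label matrix of Eq (\ref{eq:initial_label}).
+ \begin{verbatim}
+ # Sketch: build the fused Laplacian and solve the closed-form label
+ # propagation, then assign each node to its highest-scoring class.
+ import numpy as np
+ 
+ def label_propagation(P, pi, Z, eta=1.0):
+     """P: (n x n) fused transition matrix, pi: (n,) stationary probabilities,
+     Z: (n x c_T) initial labels with entries in {1, -1, 0}."""
+     Pi = np.diag(pi)
+     L = Pi - (Pi @ P + P.T @ Pi) / 2.0          # fused graph Laplacian
+     Z_hat = eta * np.linalg.solve(eta * Pi + L, Pi @ Z)
+     return Z_hat.argmax(axis=1)                 # predicted class per node
+ \end{verbatim}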
813
+
814
+ \section{Annotation and Beyond\label{sec:Annotation-and-Beyond}}
815
+
816
+ Our multi-view embedding space $\Gamma$ bridges the semantic gap
817
+ between low-level features $\mathcal{X}$ and semantic representations
818
+ $\mathcal{A}$ and $\mathcal{V}$. Leveraging this cross-view mapping,
819
+ annotation \cite{hospedales2011video_tags,topicimgannot,multiviewCCAIJCV}
820
+ can be improved and applied in novel ways. We consider three annotation
821
+ tasks here:
822
+
823
+ \noindent \textbf{Instance level annotation}\quad{}Given a new instance
824
+ $u$, we can describe/annotate it by predicting its attributes. The
825
+ conventional solution is directly applying $\hat{\mathbf{y}}_{u}^{\mathcal{A}}=f^{\mathcal{A}}(\mathbf{x}_{u})$
826
+ for test data $\mathbf{x}_{u}$ \cite{farhadi2009attrib_describe,multiviewCCAIJCV}.
827
+ However, as analysed before, this suffers from the projection domain
828
+ shift problem. To alleviate this, our multi-view embedding space aligns
829
+ the semantic attribute projections with the low-level features of
830
+ each unlabelled instance in the target domain. This alignment can
831
+ be used for image annotation in the target domain. Thus, with our
832
+ framework, we can now infer attributes for any test instance via the
833
+ learned embedding space $\Gamma$ as $\hat{\mathbf{y}}_{u}^{\mathcal{A}}=\mathbf{x}_{u}W^{\mathcal{X}}\tilde{D}^{\mathcal{X}}\left[W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\right]^{-1}$.
834
+
835
+ \noindent \textbf{Zero-shot class description}\quad{}From a broader
836
+ machine intelligence perspective, one might be interested to ask what
837
+ are the attributes of an unseen class, based solely on the class name.
838
+ Given our multi-view embedding space, we
839
+ can infer the semantic attribute description of a novel class. This
840
+ \textit{zero-shot class description} task could be useful, for example,
841
+ to hypothesise the zero-shot attribute prototype of a class instead
842
+ of defining it by experts \cite{lampert2009zeroshot_dat} or ontology
843
+ \cite{yanweiPAMIlatentattrib}. Our transductive embedding enables
844
+ this task by connecting semantic word space (i.e.~naming) and discriminative
845
+ attribute space (i.e.~describing). Given the prototype $\mathbf{y}_{c}^{\mathcal{V}}$
846
+ from the name of a novel class $c$, we compute $\hat{\mathbf{y}}_{c}^{\mathcal{A}}=\mathbf{y}_{c}^{\mathcal{V}}W^{\mathcal{V}}\tilde{D}^{\mathcal{V}}\left[W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\right]^{-1}$
847
+ to generate the class-level attribute description.
848
+
849
+ \noindent \textbf{Zero prototype learning}\quad{}This task is the
850
+ inverse of the previous task -- to infer the name of class given a
851
+ set of attributes. It could be useful, for example, to validate or
852
+ assess a proposed zero-shot attribute prototype, or to provide an
853
+ automated semantic-property based index into a dictionary or database.
854
+ To our knowledge, this is the first attempt to evaluate the quality
855
+ of a class attribute prototype because no previous work has directly
856
+ and systematically linked linguistic knowledge space with visual attribute
857
+ space. Specifically given an attribute prototype $\mathbf{y}_{c}^{\mathcal{A}}$,
858
+ we can use $\hat{\mathbf{y}}_{c}^{\mathcal{V}}=\mathbf{y}_{c}^{\mathcal{A}}W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\left[W^{\mathcal{V}}\tilde{D}^{\mathcal{V}}\right]^{-1}$
859
+ to name the corresponding class and perform retrieval on dictionary
860
+ words in $\mathcal{V}$ using $\hat{\mathbf{y}}_{c}^{\mathcal{V}}$.
861
+
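+ All three annotation tasks above rely on the same cross-view mapping through
+ $\Gamma$, which can be sketched as follows; a pseudo-inverse is used for the
+ (generally non-square) projection and all names are illustrative assumptions.
+ \begin{verbatim}
+ # Sketch of the cross-view mapping used for instance-level annotation
+ # (X -> A), zero-shot class description (V -> A) and zero prototype
+ # learning (A -> V); the inverse is realised with a pseudo-inverse.
+ import numpy as np
+ 
+ def cross_view_map(y_src, W_src, W_dst, evals, lam=4.0):
+     """Map a source-view representation into a destination view via Gamma."""
+     D_lam = np.clip(evals, 1e-12, None) ** lam
+     to_gamma = W_src * D_lam                    # W^{src} D^{src}
+     from_gamma = np.linalg.pinv(W_dst * D_lam)  # [W^{dst} D^{dst}]^{-1} (pinv)
+     return y_src @ to_gamma @ from_gamma
+ \end{verbatim}
+ 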
862
+
863
+ \section{Experiments}
864
+
865
+
866
+ \subsection{Datasets and settings }
867
+
868
+ We evaluate our framework on three widely used image/video
869
+ datasets: Animals with Attributes (AwA), Unstructured Social Activity
870
+ Attribute (USAA), and Caltech-UCSD-Birds (CUB). \textbf{AwA} \cite{lampert2009zeroshot_dat}
871
+ consists of $50$ classes of animals ($30,475$ images) and $85$ associated
872
+ class-level attributes. It has a standard source/target split for
873
+ zero-shot learning with $10$ classes and $6,180$ images held out
874
+ as the target dataset. We use the same `hand-crafted' low-level features
875
+ (RGB colour histograms, SIFT, rgSIFT, PHOG, SURF and local self-similarity
876
+ histograms) released with the dataset (denoted as $\mathcal{H}$);
877
+ and the same multi-kernel learning (MKL) attribute classifier from
878
+ \cite{lampert2009zeroshot_dat}.\textbf{ USAA} is a video dataset
879
+ \cite{yanweiPAMIlatentattrib} with $69$ instance-level attributes
880
+ for $8$ classes of complex (unstructured) social group activity videos
881
+ from YouTube. Each class has around $100$ training and test videos
882
+ respectively. USAA provides the instance-level attributes since there
883
+ are significant intra-class variations. We use the thresholded mean
884
+ of instances from each class to define a binary attribute prototype
885
+ as in \cite{yanweiPAMIlatentattrib}. The same setting in \cite{yanweiPAMIlatentattrib}
886
+ is adopted: $4$ classes as source and $4$ classes as target data.
887
+ We use exactly the same SIFT, MFCC and STIP low-level features for
888
+ USAA as in \cite{yanweiPAMIlatentattrib}. \textbf{CUB-200-2011} \cite{WahCUB_200_2011}
889
+ contains $11,788$ images of $200$ bird classes. This is more challenging
890
+ than AwA -- it is designed for fine-grained recognition and
891
+ has more classes but fewer images. Each class is annotated with $312$
892
+ binary attributes derived from a bird species ontology. We use $150$
893
+ classes as auxiliary data, holding out $50$ as test data. We extract
894
+ $128$-dimensional SIFT and colour histogram descriptors from a regular
895
+ grid at multiple scales and aggregate them into image-level Fisher
896
+ Vectors ($\mathcal{F}$) by using $256$ Gaussians, as in \cite{labelembeddingcvpr13}.
897
+ Colour histogram and PHOG features are also used to extract global
898
+ colour and texture cues from each image. Due to the recent progress
899
+ on deep learning based representations, we also extract OverFeat
900
+ ($\mathcal{O}$) \cite{sermanet-iclr-14}\footnote{We use the trained model of OverFeat in \cite{sermanet-iclr-14}.} from AwA and CUB as an alternative to $\mathcal{H}$ and $\mathcal{F}$
901
+ respectively. In addition, DeCAF ($\mathcal{D}$) \cite{decaf}
902
+ is also considered for AwA.
903
+
904
+ We report absolute classification accuracy on USAA and mean accuracy
905
+ for AwA and CUB for direct comparison to published results. The word
906
+ vector space is trained by the model in \cite{wordvectorICLR} with
907
+ $1,000$ dimensions.
908
+
909
+ \begin{table*}[ht]
910
+ \begin{centering}
911
+ \begin{tabular}{c|c|c|c|c|c|c}
912
+ \hline
913
+ Approach & \multicolumn{1}{c|}{AwA ($\mathcal{H}$ \cite{lampert2009zeroshot_dat})} & AwA ($\mathcal{O}$) & AwA $\left(\mathcal{O},\mathcal{D}\right)$ & USAA & CUB ($\mathcal{O}$) & CUB ($\mathcal{F}$) \tabularnewline
914
+ \hline
915
+ DAP & 40.5(\cite{lampert2009zeroshot_dat}) / 41.4(\textcolor{black}{\cite{lampert13AwAPAMI})
916
+ / 38.4{*}} & 51.0{*} & 57.1{*} & 33.2(\cite{yanweiPAMIlatentattrib}) / 35.2{*} & 26.2{*} & 9.1{*}\tabularnewline
917
+ IAP & 27.8(\cite{lampert2009zeroshot_dat}) / 42.2(\textcolor{black}{\cite{lampert13AwAPAMI})} & -- & -- & -- & -- & --\tabularnewline
918
+ M2LATM \cite{yanweiPAMIlatentattrib} {*}{*}{*} & 41.3 & -- & -- & 41.9 & -- & --\tabularnewline
919
+ ALE/HLE/AHLE \cite{labelembeddingcvpr13} & 37.4/39.0/43.5 & -- & -- & -- & -- & 18.0{*}\tabularnewline
920
+ Mo/Ma/O/D \cite{marcuswhathelps} & 27.0 / 23.6 / 33.0 / 35.7 & -- & -- & -- & -- & --\tabularnewline
921
+ PST \cite{transferlearningNIPS} {*}{*}{*} & 42.7 & 54.1{*} & 62.9{*} & 36.2{*} & 38.3{*} & 13.2{*}\tabularnewline
922
+ \cite{Yucatergorylevel} & 48.3{*}{*} & -- & -- & -- & -- & --\tabularnewline
923
+ TMV-BLP \cite{embedding2014ECCV}{*}{*}{*} & 47.7 & 69.9 & 77.8 & 48.2 & 45.2 & 16.3\tabularnewline
924
+ \hline
925
+ TMV-HLP {*}{*}{*} & \textbf{49.0} & \textbf{73.5} & \textbf{80.5} & \textbf{50.4} & \textbf{47.9} & \textbf{19.5}\tabularnewline
926
+ \hline
927
+ \end{tabular}
928
+ \par\end{centering}
929
+
930
+ \noindent \caption{\label{tab:Comparison-with-stateofart}Comparison with the state-of-the-art
931
+ on zero-shot learning on AwA, USAA and CUB. Features $\mathcal{H}$,
932
+ $\mathcal{O}$, $\mathcal{D}$ and $\mathcal{F}$ represent hand-crafted, OverFeat, DeCAF,
933
+ and Fisher Vector respectively. Mo, Ma, O and D represent the highest
934
+ results by the mined object class-attribute associations, mined attributes,
935
+ objectness as attributes and direct similarity methods used in \cite{marcuswhathelps}
936
+ respectively. `--': no result reported. {*}: our implementation. {*}{*}:
937
+ requires additional human annotations.{*}{*}{*}:
938
+ requires unlabelled data, i.e.~a transductive setting. }
939
+ \end{table*}
940
+
941
+
942
+
943
+ \subsection{Recognition by zero-shot learning }
944
+
945
+
946
+ \subsubsection{Comparisons with state-of-the-art}
947
+
948
+ We compare our method (TMV-HLP) with the recent state-of-the-art
949
+ models that report results or can be re-implemented by us on the three
950
+ datasets in Table~\ref{tab:Comparison-with-stateofart}. They cover
951
+ a wide range of approaches on utilising semantic intermediate representation
952
+ for zero-shot learning. They can be roughly categorised according
953
+ to the semantic representation(s) used: DAP and IAP (\cite{lampert2009zeroshot_dat},
954
+ \cite{lampert13AwAPAMI}), M2LATM \cite{yanweiPAMIlatentattrib},
955
+ ALE \cite{labelembeddingcvpr13}, \cite{transferlearningNIPS} and
956
+ \cite{unifiedProbabICCV13} use attributes only; HLE/AHLE \cite{labelembeddingcvpr13}
957
+ and Mo/Ma/O/D \cite{marcuswhathelps} use both attributes and linguistic
958
+ knowledge bases (same as us); \cite{Yucatergorylevel} uses attributes
959
+ and some additional manual human annotation. Note that our linguistic
960
+ knowledge base representation is in the form of word vectors, which
961
+ does not incur additional manual annotation. Our method also does
962
+ not exploit data-driven attributes such as M2LATM \cite{yanweiPAMIlatentattrib}
963
+ and Mo/Ma/O/D \cite{marcuswhathelps}.
964
+
965
+ Consider first the results on the most widely used AwA. Apart
966
+ from the standard hand-crafted feature ($\mathcal{H}$),
967
+ we consider the more powerful OverFeat deep feature ($\mathcal{O}$),
968
+ and a combination of OverFeat and DeCAF $\left(\mathcal{O},\mathcal{D}\right)$\footnote{With these two low-level feature views, there are six views in total
969
+ in the embedding space.}. Table~\ref{tab:Comparison-with-stateofart} shows that (1) with
970
+ the same experimental settings and the same feature ($\mathcal{H}$),
971
+ our TMV-HLP ($49.0\%$) outperforms the best result reported so far
972
+ (48.3\%) in \cite{Yucatergorylevel} which, unlike ours, requires additional human
973
+ annotation to relabel the similarities between auxiliary and target
974
+ classes.
975
+ (2) With the more powerful OverFeat feature, our method achieves $73.5\%$
976
+ zero-shot recognition accuracy. Even more remarkably, when both the
977
+ OverFeat and DeCAF features are used in our framework, the result
978
+ (see the AwA $\left(\mathcal{O},\mathcal{D}\right)$ column) is $80.5\%$.
979
+ Even allowing for the fact that there are only 10 target classes, this is an extremely good result given
980
+ that we do not have any labelled samples from the target classes. Note
981
+ that this good result is not solely due to the strength of the features, as
982
+ the margin between the conventional DAP and our TMV-HLP is much bigger,
983
+ indicating that our TMV-HLP plays a critical role in achieving this
984
+ result.
985
+ (3) Our method is also superior to the AHLE method in \cite{labelembeddingcvpr13}
986
+ which also uses two semantic spaces: attribute and WordNet hierarchy.
987
+ Different from our embedding framework, AHLE simply concatenates the
988
+ two spaces. (4) Our method also outperforms the other alternatives
989
+ of either mining other semantic knowledge bases (Mo/Ma/O/D \cite{marcuswhathelps})
990
+ or exploring data-driven attributes (M2LATM \cite{yanweiPAMIlatentattrib}).
991
+ (5) Among all compared methods, PST \cite{transferlearningNIPS} is
992
+ the only one apart from ours that performs label-propagation-based transductive
993
+ learning. It yields better results in all the experiments than DAP,
994
+ which essentially does nearest neighbour matching in the semantic space. TMV-HLP
995
+ consistently beats PST in all the results shown in Table~\ref{tab:Comparison-with-stateofart}
996
+ thanks to our multi-view embedding. (6) Compared to our TMV-BLP model \cite{embedding2014ECCV}, the superior results of TMV-HLP show that the proposed heterogeneous hypergraphs are more effective than the homogeneous 2-graphs used in TMV-BLP for zero-shot learning.
997
+
998
+ Table \ref{tab:Comparison-with-stateofart} also shows that on two
999
+ very different datasets, USAA video activity and the fine-grained CUB,
1000
+ our TMV-HLP significantly outperforms the state-of-the-art alternatives.
1001
+ In particular, on the more challenging CUB, 47.9\% accuracy is achieved
1002
+ on 50 classes (chance level 2\%) using the OverFeat feature. Considering
1003
+ the fine-grained nature and the number of classes, this is even more
1004
+ impressive than the 80.5\% result on AwA.
1005
+
1006
+
1007
+ \subsubsection{Further evaluations\label{sec:further eva}}
1008
+
1009
+ \begin{figure}
1010
+ \begin{centering}
1011
+ \includegraphics[scale=0.33]{CCA_combined}
1012
+ \par\end{centering}
1013
+
1014
+ \protect\caption{\label{fig:CCA-validation} (a) Comparing soft and hard dimension weighting of CCA for AwA. (b) Contributions of CCA and label propagation on AwA. $\Psi^\mathcal{A}$ and $\Psi^\mathcal{V}$ indicate the subspaces of target data from view $\mathcal{A}$ and $\mathcal{V}$ in $\Gamma$ respectively. Hand-crafted features are used in both experiments. }
1015
+ \end{figure}
1016
+
1017
+ \noindent \textbf{Effectiveness of soft weighting for CCA embedding}\quad{}
1018
+ In this experiment,
1019
+ we compare the soft-weighting (Eq (\ref{eq:ccamapping})) of CCA embedding space $\Gamma$ (a strategy adopted in this work) with the conventional hard-weighting strategy of selecting the number of dimensions for CCA projection.
1020
+ Fig.~\ref{fig:CCA-validation}(a) shows that the performance of the hard-weighting CCA depends on the number of projection dimensions selected (blue curve). In contrast, our soft-weighting strategy uses all dimensions, weighted by the CCA eigenvalues, so that the important dimensions are automatically weighted more highly. The results show that this strategy is clearly better and that it is not very sensitive to the weighting parameter $\lambda$, with choices of $\lambda>2$ all working well.
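+
+ As a concrete illustration of this soft-weighting strategy, the sketch below
+ scales every CCA dimension by its canonical correlation (eigenvalue) raised to
+ the power $\lambda$ instead of truncating dimensions. It is a minimal
+ re-implementation of the idea, assuming a generic CCA routine that returns a
+ projection matrix \texttt{W} and per-dimension correlations \texttt{corr};
+ all names and the default value of \texttt{lam} are purely illustrative.
+ \begin{verbatim}
+ import numpy as np
+
+ def soft_weighted_projection(X, W, corr, lam=4.0):
+     # Project one view X (n x d) into the CCA space and keep ALL k
+     # dimensions, each scaled by its canonical correlation ** lam.
+     Z = X @ W                    # (n x k), every CCA dimension retained
+     return Z * (corr ** lam)     # soft weighting, no hard cut-off
+
+ def hard_projection(X, W, n_dims):
+     # Conventional hard weighting: keep only the first n_dims dimensions.
+     return (X @ W)[:, :n_dims]
+ \end{verbatim}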
1021
+
1022
+
1023
+ \noindent \textbf{Contributions of individual components}\quad{}
1024
+ There are two major components in our ZSL framework: CCA embedding and label propagation. In this experiment we investigate whether both of them contribute to the strong performance. To this end, we compare the ZSL results on AwA with label propagation and without (nearest neighbour), before and after CCA embedding. In Fig.~\ref{fig:CCA-validation}(b), we can see that: (i) Label propagation always helps regardless of whether the views have been embedded using CCA, although its effects are more pronounced after embedding. (ii) Even without label propagation, i.e.~using nearest neighbour for classification, the performance is improved by the CCA embedding. However, the improvement is bigger with label propagation. This result thus suggests that both CCA embedding and label propagation are useful, and our ZSL framework works best when both are used.
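+
+ For intuition, the label-propagation component can be sketched as classic
+ iterative propagation on a single row-normalised transition matrix. This is a
+ simplification of the heterogeneous-hypergraph random walk actually used by
+ TMV-HLP; the matrices \texttt{P} and \texttt{Y} are assumed to be given, and
+ the parameter values are illustrative.
+ \begin{verbatim}
+ import numpy as np
+
+ def propagate_labels(P, Y, alpha=0.85, n_iter=50):
+     # P : (n x n) row-normalised transition matrix over all nodes.
+     # Y : (n x c) initial label matrix (only prototypes / labelled
+     #     nodes have a nonzero row); alpha trades off graph smoothness
+     #     against fidelity to these initial labels.
+     F = Y.astype(float).copy()
+     for _ in range(n_iter):
+         F = alpha * (P @ F) + (1.0 - alpha) * Y
+     return F.argmax(axis=1)     # predicted class per unlabelled node
+ \end{verbatim}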
1025
+
1026
+
1027
+ \begin{figure*}[t]
1028
+ \begin{centering}
1029
+ \includegraphics[width=0.8\textwidth]{effectivenessOfAwAUSAA}
1030
+ \par\end{centering}
1031
+ \caption{\label{fig:zero-shot-learning-on}Effectiveness of transductive multi-view
1032
+ embedding. (a) zero-shot learning on AwA using only hand-crafted features;
1033
+ (b) zero-shot learning on AwA using hand-crafted and deep features
1034
+ together; (c) zero-shot learning on USAA. $[\mathcal{V},\mathcal{A}]$
1035
+ indicates the concatenation of semantic word and attribute space vectors.
1036
+ $\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$
1037
+ mean using low-level+semantic word spaces and low-level+attribute
1038
+ spaces respectively to learn the embedding. $\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$
1039
+ indicates using all $3$ views to learn the embedding. }
1040
+ \end{figure*}
1041
+
1042
+
1043
+ \noindent \textbf{Transductive multi-view embedding}\quad{}To further
1044
+ validate the contribution of our transductive multi-view embedding
1045
+ space, we compare different combinations of views with and without embedding; the
1046
+ results are shown in Fig.~\ref{fig:zero-shot-learning-on}. In Figs.~\ref{fig:zero-shot-learning-on}(a)
1047
+ and (c), the hand-crafted feature $\mathcal{H}$ and SIFT, MFCC and
1048
+ STIP low-level features are used for AwA and USAA respectively, and
1049
+ we compare $\mathcal{V}$ vs.~$\Gamma(\mathcal{X}+\mathcal{V}$), $\mathcal{A}$ vs.~$\Gamma(\mathcal{X}+\mathcal{A})$ and $[\mathcal{V},\mathcal{A}]$
1050
+ vs.~$\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$ (see the caption
1051
+ of Fig.~\ref{fig:zero-shot-learning-on} for definitions). We use
1052
+ DAP for $\mathcal{A}$ and nearest neighbour for $\mathcal{V}$ and
1053
+ $[\mathcal{V},\mathcal{A}]$, because the prototypes of $\mathcal{V}$
1054
+ are not binary vectors so DAP cannot be applied. We use TMV-HLP for
1055
+ $\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$
1056
+ respectively. We highlight the following observations: (1) After transductive
1057
+ embedding, $\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$, $\Gamma(\mathcal{X}+\mathcal{V})$
1058
+ and $\Gamma(\mathcal{X}+\mathcal{A})$ outperform $[\mathcal{V},\mathcal{A}]$,
1059
+ $\mathcal{V}$ and $\mathcal{A}$ respectively. This means that the
1060
+ transductive embedding is helpful, whichever semantic space is used,
1061
+ in rectifying the projection domain shift problem by aligning the
1062
+ semantic views with low-level features. (2) The results of $[\mathcal{V},\mathcal{A}]$
1063
+ are higher than those of $\mathcal{A}$ and $\mathcal{V}$ individually,
1064
+ showing that the two semantic views are indeed complementary even
1065
+ with simple feature level fusion. Similarly, our TMV-HLP on all views
1066
+ $\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$ improves individual
1067
+ embeddings $\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$.
1068
+
1069
+ \begin{figure*}[ht]
1070
+ \begin{centering}
1071
+ \includegraphics[scale=0.4]{visualization_fusing_mutiple_view}
1072
+ \par\end{centering}
1073
+ \caption{\label{fig:t-SNE-visualisation-of}t-SNE Visualisation of (a) OverFeat
1074
+ view ($\mathcal{X}_{\mathcal{O}}$), (b) attribute view ($\mathcal{A}_{\mathcal{O}}$),
1075
+ (c) word vector view ($\mathcal{V}_{\mathcal{O}}$), and (d) transition
1076
+ probability of pairwise nodes computed by Eq (\ref{eq:transition_probability})
1077
+ of TMV-HLP in ($\Gamma(\mathcal{X}+\mathcal{A}+\mathcal{V})_{\mathcal{O},\mathcal{D}}$).
1078
+ The unlabelled target classes are much more separable in (d).}
1079
+ \end{figure*}
1080
+
1081
+
1082
+ \noindent \textbf{Embedding deep learning feature views also helps}\quad{}In Fig.~\ref{fig:zero-shot-learning-on}(b)
1083
+ three different low-level features are considered for AwA: hand-crafted
1084
+ ($\mathcal{H}$), OverFeat ($\mathcal{O}$) and DeCAF features ($\mathcal{D}$).
1085
+ The zero-shot learning results of each individual space are indicated
1086
+ as $\mathcal{V}_{\mathcal{H}}$, $\mathcal{A}_{\mathcal{H}}$, $\mathcal{V}_{\mathcal{O}}$,
1087
+ $\mathcal{A}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{D}}$, $\mathcal{A}_{\mathcal{D}}$
1088
+ in Fig.~\ref{fig:zero-shot-learning-on}(b) and we observe that $\mathcal{V}_{\mathcal{O}}>\mathcal{V}_{\mathcal{D}}>\mathcal{V}_{\mathcal{H}}$
1089
+ and $\mathcal{A}_{\mathcal{O}}>\mathcal{A}_{\mathcal{D}}>\mathcal{A}_{\mathcal{H}}$.
1090
+ That is, OverFeat $>$ DeCAF $>$ hand-crafted features. It is widely
1091
+ reported that deep features have better performance than `hand-crafted'
1092
+ features on many computer vision benchmark datasets \cite{2014arXiv1405.3531C,sermanet-iclr-14}.
1093
+ What is interesting to see here is that OverFeat $>$ DeCAF since
1094
+ both are based on the same Convolutional Neural Network (CNN) model
1095
+ of \cite{KrizhevskySH12}. Apart from implementation details, one
1096
+ significant difference is that DeCAF is pre-trained on ILSVRC2012,
1097
+ while OverFeat is pre-trained on ILSVRC2013, which contains more animal classes, meaning that
1098
+ better (more relevant) features can be learned. It is also worth pointing
1099
+ out that: (1) With both OverFeat and DeCAF features, the number of
1100
+ views available for learning an embedding space increases from $3$ to $9$; our
1101
+ results suggest that the more views there are, the better the chance of solving the
1102
+ domain shift problem, and the more separable the data become, as different
1103
+ views contain complementary information. (2) Figure~\ref{fig:zero-shot-learning-on}(b) shows that when all
1104
+ 9 available views ($\mathcal{X}_{\mathcal{H}}$, $\mathcal{V}_{\mathcal{H}}$,
1105
+ $\mathcal{A}_{\mathcal{H}}$, $\mathcal{X}_{\mathcal{D}}$, $\mathcal{V}_{\mathcal{D}}$,
1106
+ $\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{O}}$
1107
+ and $\mathcal{A}_{\mathcal{O}}$) are used for embedding, the result
1108
+ is significantly better than those from each individual view. Nevertheless,
1109
+ it is lower than that obtained by embedding views ($\mathcal{X}_{\mathcal{D}}$,
1110
+ $\mathcal{V}_{\mathcal{D}}$, $\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$,
1111
+ $\mathcal{V}_{\mathcal{O}}$ and $\mathcal{A}_{\mathcal{O}}$). This
1112
+ suggests that view selection may be required when a large number of
1113
+ views are available for learning the embedding space.
1114
+
1115
+
1116
+
1117
+
1118
+ \noindent \textbf{Embedding makes target classes more separable}\quad{}We
1119
+ employ t-SNE \cite{tsne} to visualise the spaces $\mathcal{X}_{\mathcal{O}}$,
1120
+ $\mathcal{V}_{\mathcal{O}}$, $\mathcal{A}_{\mathcal{O}}$ and $\Gamma(\mathcal{X}+\mathcal{A}+\mathcal{V})_{\mathcal{O},\mathcal{D}}$
1121
+ in Fig.~\ref{fig:t-SNE-visualisation-of}. It shows that even in
1122
+ the powerful OverFeat view, the 10 target classes are heavily overlapped
1123
+ (Fig.~\ref{fig:t-SNE-visualisation-of}(a)). It gets better in the
1124
+ semantic views (Figs.~\ref{fig:t-SNE-visualisation-of}(b) and (c)).
1125
+ However, when all 6 views are embedded, all classes are clearly separable
1126
+ (Fig.~\ref{fig:t-SNE-visualisation-of}(d)).
1127
+
1128
+ \noindent \textbf{Running time}\quad{}In practice, for the AwA dataset
1129
+ with hand-crafted features, our pipeline takes less than $30$ minutes
1130
+ to complete the zero-shot classification task (over $6,180$ images)
1131
+ using a six core $2.66$GHz CPU platform. This includes the time for
1132
+ multi-view CCA embedding and label propagation using our heterogeneous
1133
+ hypergraphs.
1134
+
1135
+
1136
+
1137
+ \subsection{Annotation and beyond}
1138
+
1139
+ In this section we evaluate our multi-view embedding space for the
1140
+ conventional and novel annotation tasks introduced in Sec.~\ref{sec:Annotation-and-Beyond}.
1141
+
1142
+ \noindent \textbf{Instance annotation by attributes}\quad{}To quantify
1143
+ the annotation performance, we predict attributes/annotations for
1144
+ each target class instance for USAA, which has the largest instance
1145
+ level attribute variations among the three datasets. We employ two
1146
+ standard measures: mean average precision (mAP) and F-measure (FM)
1147
+ between the estimated and true annotation list. Using our multi-view
1148
+ embedding space, our method (FM: $0.341$, mAP: $0.355$) significantly outperforms
1149
+ the baseline of directly estimating $\mathbf{y}_{u}^{\mathcal{A}}=f^{\mathcal{A}}(\mathbf{x}_{u})$
1150
+ (FM: $0.299$, mAP: $0.267$).
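+
+ For reference, these two measures can be computed per instance roughly as
+ below (using scikit-learn). This is only a sketch of a typical multi-label
+ evaluation, not the exact evaluation script used here, and the top-$k$
+ binarisation of the scores is an assumption made for illustration.
+ \begin{verbatim}
+ import numpy as np
+ from sklearn.metrics import average_precision_score, f1_score
+
+ def annotation_scores(A_true, A_score, top_k=5):
+     # A_true  : (n x m) binary ground-truth attribute matrix.
+     # A_score : (n x m) real-valued attribute scores for each instance.
+     mAP = np.mean([average_precision_score(t, s)
+                    for t, s in zip(A_true, A_score)])
+     A_pred = np.zeros_like(A_true)            # binarise: keep top-k scores
+     top = np.argsort(-A_score, axis=1)[:, :top_k]
+     np.put_along_axis(A_pred, top, 1, axis=1)
+     FM = f1_score(A_true, A_pred, average='samples')
+     return FM, mAP
+ \end{verbatim}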
1151
+
1152
+ \begin{table}[ht]
1153
+ \begin{centering}
1154
+ \begin{tabular}{|c|c|c|}
1155
+ \hline
1156
+ AwA & & Attributes\tabularnewline
1157
+ \hline
1158
+ \hline
1159
+ \multirow{2}{*}{pc} & T-5 & active, \textbf{furry, tail, paws, ground.}\tabularnewline
1160
+ \cline{2-3}
1161
+ & B-5 & swims, hooves, long neck, horns, arctic\tabularnewline
1162
+ \hline
1163
+ \multirow{2}{*}{hp} & T-5 & \textbf{old world, strong, quadrupedal}, fast, \textbf{walks}\tabularnewline
1164
+ \cline{2-3}
1165
+ & B-5 & red, plankton, skimmers, stripes, tunnels\tabularnewline
1166
+ \hline
1167
+ \multirow{2}{*}{lp} & T-5 & \textbf{old world, active, fast, quadrupedal, muscle}\tabularnewline
1168
+ \cline{2-3}
1169
+ & B-5 & plankton, arctic, insects, hops, tunnels\tabularnewline
1170
+ \hline
1171
+ \multirow{2}{*}{hw} & T-5 & \textbf{fish}, \textbf{smart, fast, group, flippers}\tabularnewline
1172
+ \cline{2-3}
1173
+ & B-5 & hops, grazer, tunnels, fields, plains\tabularnewline
1174
+ \hline
1175
+ \multirow{2}{*}{seal} & T-5 & \textbf{old world, smart}, \textbf{fast}, chew teeth, \textbf{strong}\tabularnewline
1176
+ \cline{2-3}
1177
+ & B-5 & fly, insects, tree, hops, tunnels\tabularnewline
1178
+ \hline
1179
+ \multirow{2}{*}{cp} & T-5 & \textbf{fast, smart, chew teeth, active}, brown\tabularnewline
1180
+ \cline{2-3}
1181
+ & B-5 & tunnels, hops, skimmers, fields, long neck\tabularnewline
1182
+ \hline
1183
+ \multirow{2}{*}{rat} & T-5 & \textbf{active, fast, furry, new world, paws}\tabularnewline
1184
+ \cline{2-3}
1185
+ & B-5 & arctic, plankton, hooves, horns, long neck\tabularnewline
1186
+ \hline
1187
+ \multirow{2}{*}{gp} & T-5 & \textbf{quadrupedal}, active, \textbf{old world}, \textbf{walks},
1188
+ \textbf{furry}\tabularnewline
1189
+ \cline{2-3}
1190
+ & B-5 & tunnels, skimmers, long neck, blue, hops\tabularnewline
1191
+ \hline
1192
+ \multirow{2}{*}{pig} & T-5 & \textbf{quadrupedal}, \textbf{old world}, \textbf{ground}, furry,
1193
+ \textbf{chew teeth}\tabularnewline
1194
+ \cline{2-3}
1195
+ & B-5 & desert, long neck, orange, blue, skimmers\tabularnewline
1196
+ \hline
1197
+ \multirow{2}{*}{rc} & T-5 & \textbf{fast}, \textbf{active}, \textbf{furry}, \textbf{quadrupedal},
1198
+ \textbf{forest}\tabularnewline
1199
+ \cline{2-3}
1200
+ & B-5 & long neck, desert, tusks, skimmers, blue\tabularnewline
1201
+ \hline
1202
+ \end{tabular}
1203
+ \par\end{centering}
1204
+
1205
+ \caption{Zero-shot description of 10 AwA target classes. $\Gamma$ is learned
1206
+ using 6 views ($\mathcal{X}_{\mathcal{D}}$, $\mathcal{V}_{\mathcal{D}}$,
1207
+ $\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{O}}$
1208
+ and $\mathcal{A}_{\mathcal{O}}$). The true positives are highlighted
1209
+ in bold. pc, hp, lp, hw, cp, gp, and rc are short for Persian cat,
1210
+ hippopotamus, leopard, humpback whale, chimpanzee, giant panda, and
1211
+ raccoon respectively. T-5/B-5 are the top/bottom 5 attributes predicted
1212
+ for each target class.}
1213
+ \label{fig:ZeroShotDescription}
1214
+ \end{table}
1215
+
1216
+ \begin{table*}[ht]
1217
+ \begin{centering}
1218
+ \begin{tabular}{|c|c|c|}
1219
+ \hline
1220
+ (a) Query by GT attributes of & Query via embedding space & Query attribute words in word space\tabularnewline
1221
+ \hline
1222
+ \hline
1223
+ graduation party & \textbf{party}, \textbf{graduation}, audience, caucus & cheering, proudly, dressed, wearing\tabularnewline
1224
+ \hline
1225
+ music\_performance & \textbf{music}, \textbf{performance}, musical, heavy metal & sing, singer, sang, dancing\tabularnewline
1226
+ \hline
1227
+ wedding\_ceremony & \textbf{wedding\_ceremony}, wedding, glosses, stag & nun, christening, bridegroom, \textbf{wedding\_ceremony}\tabularnewline
1228
+ \hline
1229
+ \end{tabular}
1230
+ \par\end{centering}
1231
+
1232
+ \begin{centering}
1233
+ \begin{tabular}{|c|c|}
1234
+ \hline
1235
+ (b) Attribute query & Top ranked words\tabularnewline
1236
+ \hline
1237
+ \hline
1238
+ wrapped presents & music; performance; solo\_performances; performing\tabularnewline
1239
+ \hline
1240
+ +small balloon & wedding; wedding\_reception; birthday\_celebration; birthday\tabularnewline
1241
+ \hline
1242
+ +birthday song +birthday caps & \textbf{birthday\_party}; prom; wedding reception\tabularnewline
1243
+ \hline
1244
+ \end{tabular}
1245
+ \par\end{centering}
1246
+ \caption{\label{fig:ZAL_Task}Zero prototype learning on USAA. (a) Querying
1247
+ classes by groundtruth (GT) attribute definitions of the specified
1248
+ classes. (b) An incrementally constructed attribute query for the
1249
+ birthday\_party class. Bold indicates true positive.}
1250
+ \end{table*}
1251
+
1252
+ \noindent \textbf{Zero-shot description}\quad{}In
1253
+ this task, we explicitly infer the attributes corresponding to a
1254
+ specified novel class, given only the textual name of that class without
1255
+ seeing any visual samples. Table~\ref{fig:ZeroShotDescription} illustrates
1256
+ this for AwA. Clearly most of the top/bottom $5$ attributes predicted
1257
+ for each of the 10 target classes are meaningful (in the ideal case,
1258
+ all top $5$ should be true positives and all bottom $5$ true negatives).
1259
+ Predicting the top-$5$ attributes for each class gives an F-measure of $0.236$.
1260
+ In comparison, if we directly
1261
+ select the $5$ nearest attribute name projections to the class name
1262
+ projection (prototype) in the word space, the F-measure is
1263
+ $0.063$, demonstrating the importance of learning the multi-view
1264
+ embedding space. In addition to providing a method to automatically
1265
+ -- rather than manually -- generate an attribute ontology, this task
1266
+ is interesting because even a human could find it very challenging
1267
+ (effectively a human has to list the attributes of a class which he
1268
+ has never seen or been explicitly taught about, but has only seen
1269
+ mentioned in text).
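+
+ A minimal sketch of how such a description can be read off the embedding
+ space: rank all attribute projections by cosine similarity to the class-name
+ prototype and report the top and bottom of the list. All vectors are assumed
+ to have already been projected into $\Gamma$, and the function and argument
+ names are illustrative rather than taken from our implementation.
+ \begin{verbatim}
+ import numpy as np
+
+ def describe_class(class_proto, attr_protos, attr_names, top_k=5):
+     # class_proto : (d,) class-name prototype in the embedding space.
+     # attr_protos : (m x d) attribute projections in the same space.
+     c = class_proto / np.linalg.norm(class_proto)
+     A = attr_protos / np.linalg.norm(attr_protos, axis=1, keepdims=True)
+     order = np.argsort(-(A @ c))              # most to least similar
+     top = [attr_names[i] for i in order[:top_k]]
+     bottom = [attr_names[i] for i in order[-top_k:]]
+     return top, bottom                        # cf. T-5 / B-5 in the table
+ \end{verbatim}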
1270
+
1271
+ \noindent \textbf{Zero prototype learning}\quad{}In this task we
1272
+ attempt the reverse of the previous experiment: inferring a class
1273
+ name given a list of attributes. Table \ref{fig:ZAL_Task} illustrates
1274
+ this for USAA. Table \ref{fig:ZAL_Task}(a) shows queries by the groundtruth
1275
+ attribute definitions of some USAA classes and the top-4 ranked
1276
+ list of classes returned. The estimated class names of each attribute
1277
+ vector are reasonable -- the top-4 words are either the class name
1278
+ or related to the class name. A baseline is to use the textual names
1279
+ of the attributes projected in the word space (summing their
1280
+ word vectors) to search for the nearest classes in word space, instead
1281
+ of the embedding space. Table \ref{fig:ZAL_Task}(a) shows that the
1282
+ predicted classes in this case are reasonable, but significantly
1283
+ worse than querying via the embedding space. To quantify this we evaluate
1284
+ the average rank of the true name for each USAA class when queried
1285
+ by its attributes. For querying by embedding space, the average rank
1286
+ is an impressive $2.13$ (out of $4.33$M words
1287
+ with a chance-level rank of $2.17$M), compared with the average rank
1288
+ of $110.24$ by directly querying word space \cite{wordvectorICLR}
1289
+ with textual descriptions of the attributes. Table \ref{fig:ZAL_Task}(b)
1290
+ shows an example of ``incremental'' query using the ontology definition
1291
+ of birthday party \cite{yanweiPAMIlatentattrib}. We first query the
1292
+ `wrapped presents' attribute only, followed by adding `small
1293
+ balloon' and all other attributes (`birthday songs and `birthday
1294
+ caps'). The changing list of top ranked retrieved words intuitively
1295
+ reflects the expectation of the combinatorial meaning of the attributes.
1296
+
1297
+
1298
+
1299
+
1300
+
1301
+
1302
+
1303
+ \section{Conclusions}
1304
+
1305
+ We identified the challenge of projection domain shift in zero-shot
1306
+ learning and presented a new framework to solve it by rectifying the
1307
+ biased projections in a multi-view embedding space. We also proposed
1308
+ a novel label-propagation algorithm TMV-HLP based on heterogeneous
1309
+ across-view hypergraphs. TMV-HLP synergistically exploits multiple
1310
+ intermediate semantic representations, as well as the manifold structure
1311
+ of unlabelled target data to improve recognition in a unified way
1312
+ for zero-shot, N-shot and zero+N-shot learning tasks. As a result
1313
+ we achieved state-of-the-art performance on the challenging AwA, CUB
1314
+ and USAA datasets. Finally, we demonstrated that our framework enables
1315
+ novel tasks of relating textual class names and their semantic attributes.
1316
+
1317
+ A number of directions have been identified for future work. First,
1318
+ we employ CCA for learning the embedding space. Although
1319
+ it works well, other embedding frameworks can be considered (e.g.~\cite{DBLP:conf/iccv/WangHWWT13}).
1320
+ In the current pipeline, low-level features are first
1321
+ projected onto different semantic views before embedding.
1322
+ It should be possible to develop a unified embedding framework to combine these
1323
+ two steps. Second, under a realistic lifelong learning
1324
+ setting \cite{chen_iccv13}, an unlabelled data point could either
1325
+ belong to a seen/auxiliary category or
1326
+ an unseen class. An ideal framework should be able to classify both seen and unseen classes \cite{RichardNIPS13}.
1327
+ Finally, our results suggest that more views, either manually defined
1328
+ (attributes), extracted from a linguistic corpus (word space),
1329
+ or learned from visual data (deep features), can potentially give
1330
+ rise to a better embedding space. More investigation is needed into how to systematically design and select semantic views for
1331
+ embedding.
1332
+
1333
+ \bibliographystyle{abbrv}
1334
+ \bibliography{ref-phd1}
1335
+
1336
+
1337
+
1338
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Fu.png}}]{Yanwei Fu} received the PhD degree from Queen Mary University of London in 2014, and the MEng degree from the Department of Computer Science \& Technology at Nanjing University, China, in 2011. He is a post-doctoral researcher with Leonid Sigal at Disney Research, Pittsburgh, which is co-located with Carnegie Mellon University. His research interests include attribute learning for image and video understanding, robust learning to rank, and large-scale video summarization.
1339
+ \end{IEEEbiography}
1340
+
1341
+ \begin{IEEEbiography}
1342
+ [{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{bio_tim.png}}]{Timothy M. Hospedales} received the PhD degree in neuroinformatics from the University of Edinburgh in 2008. He is currently a lecturer (assistant professor) of computer science at Queen Mary University of London. His research interests include probabilistic modelling and machine learning applied variously to problems in computer vision, data mining, interactive learning, and neuroscience. He has published more than 20 papers in major international journals and conferences. He is a member of the IEEE.
1343
+ \end{IEEEbiography}
1344
+
1345
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{bio_tony.png}}]{Tao Xiang} received the PhD degree in electrical and computer engineering from the National University of Singapore in 2002. He is currently a reader (associate professor) in the School of Electronic Engineering and Computer Science, Queen Mary University of London. His research interests include computer vision and machine learning. He has published over 100 papers in international journals and conferences and co-authored a book, Visual Analysis of Behaviour: From Pixels to Semantics.
1346
+ \end{IEEEbiography}
1347
+
1348
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Gong}}]{Shaogang Gong} is Professor of Visual Computation at Queen Mary University of London, a Fellow of the Institution of Electrical Engineers and a Fellow of the British Computer Society. He received his D.Phil in computer vision from Keble College, Oxford University in 1989. His research interests include computer vision, machine learning and video analysis.
1349
+ \end{IEEEbiography}
1350
+
1351
+
1352
+
1353
+ \section*{Supplementary Material}
1354
+
1355
+
1356
+ \section{Further Evaluations on Zero-Shot Learning}
1357
+
1358
+ \begin{figure}[h]
1359
+ \centering{}\includegraphics[width=0.5\textwidth]{comparing_alternative_LP}
1360
+ \caption{\label{fig:Comparing-alternative-LP}Comparing alternative label propagation
1361
+ methods using different graphs before and after transductive embedding (T-embed). The methods
1362
+ are detailed in text.}
1363
+ \end{figure}
1364
+
1365
+
1366
+
1367
+ \subsection{\noindent Heterogeneous hypergraph vs. other graphs\quad{}}
1368
+
1369
+ \noindent Apart from transductive multi-view embedding, another major
1370
+ contribution of this paper is a novel label propagation method based
1371
+ on heterogeneous hypergraphs. To evaluate the effectiveness of our
1372
+ hypergraph label propagation, we compare with a number of alternative
1373
+ label propagation methods using other graph models. More specifically,
1374
+ within each view, two alternative graphs can be constructed: 2-graphs
1375
+ which are used in the classification on multiple graphs (C-MG) model
1376
+ \cite{Zhou2007ICML}, and conventional homogeneous hypergraphs formed
1377
+ in each single view \cite{Zhou06learningwith,fu2010summarize,DBLP:journals/corr/LiLSDH13}.
1378
+ Since hypergraphs are typically combined with 2-graphs, we have 5
1379
+ different multi-view graph models: \emph{2-gr} (2-graph in each view),
1380
+ \emph{Homo-hyper} (homogeneous hypergraph in each view), \emph{Hete-hyper}
1381
+ (our heterogeneous hypergraph across views), \emph{Homo-hyper+2-gr}
1382
+ (homogeneous hypergraph combined with 2-graph in each view), and \emph{Hete-hyper+2-gr}
1383
+ (our heterogeneous hypergraph combined with 2-graph, as in our TMV-HLP).
1384
+ In our experiments, the same random walk label propagation algorithm
1385
+ is run on each graph in AwA and USAA before and after transductive
1386
+ embedding to compare these models.
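+
+ To make the compared structures concrete, the sketch below builds a
+ Gaussian-weighted kNN 2-graph within one view and one plausible form of
+ cross-view hyperedge incidence (a node grouped with its $k$ nearest
+ neighbours found in another, aligned view). It is only an illustration of
+ the kinds of constructions being compared, not the exact graphs used in our
+ experiments.
+ \begin{verbatim}
+ import numpy as np
+
+ def knn_2graph(X, k=10, sigma=1.0):
+     # Pairwise Gaussian affinities within one view, sparsified to a
+     # symmetric k-nearest-neighbour graph (a "2-graph").
+     d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
+     W = np.exp(-d2 / (2 * sigma ** 2))
+     keep = np.argsort(d2, axis=1)[:, 1:k + 1]      # skip self (distance 0)
+     M = np.zeros_like(W)
+     np.put_along_axis(M, keep, 1.0, axis=1)
+     return W * np.maximum(M, M.T)
+
+ def cross_view_incidence(X_query, X_other, k=10):
+     # Illustrative heterogeneous hyperedges: for every node of one view,
+     # its k nearest neighbours in ANOTHER (aligned) view form a hyperedge.
+     d2 = ((X_query[:, None, :] - X_other[None, :, :]) ** 2).sum(-1)
+     nn = np.argsort(d2, axis=1)[:, :k]
+     H = np.zeros((X_other.shape[0], X_query.shape[0]))  # nodes x edges
+     for e, members in enumerate(nn):
+         H[members, e] = 1.0
+     return H
+ \end{verbatim}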
1387
+
1388
+ From the results in Fig.~\ref{fig:Comparing-alternative-LP}, we
1389
+ observe that: (1) The graph model used in our TMV-HLP (\emph{Hete-hyper+2-gr})
1390
+ yields the best performance on both datasets. (2) All graph models
1391
+ benefit from the embedding. In particular, the performance of our
1392
+ heterogeneous hypergraph degrades drastically without embedding. This
1393
+ is expected because before embedding, nodes in different views are
1394
+ not aligned; so forming meaningful hyperedges across views is not
1395
+ possible. (3) Fusing hypergraphs with 2-graphs helps -- one has the
1396
+ robustness and the other has the discriminative power, so it makes
1397
+ sense to combine the strengths of both. (4) After embedding, on its
1398
+ own, the heterogeneous hypergraph is the best, while homogeneous hypergraphs
1399
+ (\emph{Homo-hyper}) are worse than 2-gr, indicating that the discriminative
1400
+ power of 2-graphs outweighs the robustness of homogeneous hypergraphs.
1401
+
1402
+
1403
+
1404
+
1405
+ \begin{figure}[tph]
1406
+ \centering{}\includegraphics[scale=0.45]{comparing_PST_LP_AwA}\caption{\label{fig:Comparing-C-MG-and}Comparing our method with the alternative C-MG and PST methods before and after
1407
+ transductive embedding}
1408
+ \end{figure}
1409
+
1410
+ \subsection{Comparing with other transductive methods}
1411
+
1412
+ Rohrbach et al.~\cite{transferlearningNIPS} employed an alternative transductive method, termed PST, for zero-shot learning. We compare
1413
+ with PST for zero-shot learning here and N-shot learning
1414
+ in Sec.~\ref{sec:n-shot}.
1415
+
1416
+ Specifically, we
1417
+ use the AwA dataset with hand-crafted features, the semantic word vector $\mathcal{V}$,
1418
+ and semantic attribute $\mathcal{A}$. We compare with PST
1419
+ \cite{transferlearningNIPS} as well as the graph-based
1420
+ semi-supervised learning method C-MG \cite{Zhou2007ICML} (not originally designed for, but applicable to, zero-shot learning) before and after our transductive embedding
1421
+ (T-embed). We use equal weights for each graph for C-MG and the same
1422
+ parameters from \cite{transferlearningNIPS} for PST. From Fig.~\ref{fig:Comparing-C-MG-and},
1423
+ we make the following conclusions: (1) TMV-HLP in our embedding space
1424
+ outperforms both alternatives. (2) The embedding also improves both
1425
+ C-MG and PST, due to alleviated projection domain shift via aligning
1426
+ the semantic projections and low-level features.
1427
+
1428
+ The reasons for TMV-HLP outperforming PST include: (1) Using multiple semantic
1429
+ views (PST is defined only on one), (2) Using hypergraph-based label
1430
+ propagation (PST uses only conventional 2-graphs), (3) Less dependence
1431
+ on good quality initial labelling heuristics required by PST -- our
1432
+ TMV-HLP uses the trivial initial labels (each prototype labelled according
1433
+ to its class as in Eq (17) in the main manuscript).
1434
+
1435
+
1436
+
1437
+
1438
+ \subsection{How many target samples are needed for learning multi-view embedding?}
1439
+
1440
+ In our paper, we use all the target class samples to construct the transductive
1441
+ embedding CCA space. Here we investigate how many samples are required to construct a reasonable
1442
+ embedding. We use hand-crafted features (dimension:
1443
+ $10,925$) of the AwA dataset with semantic word vector $\mathcal{V}$
1444
+ (dimension: $1,000$) and semantic attribute $\mathcal{A}$ (dimension:
1445
+ $85$) to construct the CCA space. We randomly select $1\%$, $3\%$,
1446
+ $5\%$, $20\%$, $40\%$, $60\%$, and $80\%$ of the unlabelled target class
1447
+ instances to construct the CCA space for zero-shot learning using our TMV-HLP.
1448
+ Random sampling is repeated $10$ times. The results shown in Fig.~\ref{fig:samples for cca} below demonstrate that only 5\% of the full set of samples
1449
+ (300 in the case of AwA) are sufficient to learn a good embedding
1450
+ space.
1451
+ \begin{figure}[tbph]
1452
+ \centering{}\includegraphics[scale=0.5]{CCA_some_portion}\caption{Influence of varying the number of unlabelled target class samples used to learn the CCA space.}
1453
+ \label{fig:samples for cca}
1454
+ \end{figure}
1455
+
1456
+
1457
+
1458
+ \subsection{Can the embedding space be learned using the auxiliary dataset? }
1459
+
1460
+ Since we aim to rectify the projection domain shift problem for the target data, the multi-view embedding space is learned transductively using the target dataset. One may ask whether the multi-view embedding space learned using the auxiliary dataset can be of any use for the zero-shot classification of the target class samples. To answer this question,
1461
+ we conduct experiments by using the hand-crafted features (dimension:
1462
+ $10,925$) of the AwA dataset with the semantic word vector $\mathcal{V}$
1463
+ (dimension: $1,000$) and semantic attribute $\mathcal{A}$ (dimension:
1464
+ $85$). Auxiliary data from AwA are now used to learn the multi-view
1465
+ CCA, and we then project the testing data into this CCA space.
1466
+ We compare the results of our TMV-HLP on AwA using the CCA spaces learned
1467
+ from the auxiliary dataset versus unlabelled target dataset in Fig.~\ref{fig:CCA-space-trained}. It can be seen that the CCA embedding space learned using the auxiliary dataset gives
1468
+ reasonable performance; however, it does not perform as well as our
1469
+ method, which learns the multi-view embedding space transductively using the target class samples. This
1470
+ is likely due to not observing, and thus not being able to learn to rectify,
1471
+ the projection domain shift.
1472
+
1473
+ \begin{figure}[tbh]
1474
+ \begin{centering}
1475
+ \includegraphics[scale=0.4]{CCA_learned_with_training_data}
1476
+ \par\end{centering}
1477
+
1478
+ \caption{\label{fig:CCA-space-trained}Comparing ZSL performance using CCA
1479
+ learned from auxiliary data versus unlabelled target data. }
1480
+ \end{figure}
1481
+
1482
+ \subsection{Qualitative results}
1483
+
1484
+ Figure \ref{fig:QualitativeZSLAwA} shows some qualitative results for
1485
+ zero-shot learning on AwA in terms of the top 5 most likely classes predicted
1486
+ for each image. It shows that our TMV-HLP produces a more reasonable ranked list of classes
1487
+ for each image, compared to DAP and PST.
1488
+
1489
+
1490
+ \begin{figure}[h!]
1491
+ \begin{centering}
1492
+ \includegraphics[scale=0.5]{visualization_example}
1493
+ \par\end{centering}
1494
+ \caption{\label{fig:QualitativeZSLAwA}Qualitative results for zero-shot learning
1495
+ on AwA. Bold indicates correct class names.}
1496
+ \end{figure}
1497
+
1498
+ \begin{figure*}[ht!]
1499
+ \begin{centering}
1500
+ \includegraphics[scale=0.4]{Nshot_AwAUSAA}
1501
+ \par\end{centering}
1502
+
1503
+ \caption{\label{fig:N-shot-learning-for}N-shot learning results with (+) and
1504
+ without (-) additional prototype information.}
1505
+ \end{figure*}
1506
+
1507
+ \section{N-Shot learning}
1508
+ \label{sec:n-shot}
1509
+
1510
+ N-shot learning experiments are carried out on the three datasets
1511
+ with the number of target class instances labelled (N) ranging from
1512
+ 0 (zero-shot) to 20. We also consider the situation \cite{transferlearningNIPS}
1513
+ where both a few training examples \emph{and} a zero-shot prototype
1514
+ may be available (denoted with suffix $+$), and contrast it to the
1515
+ conventional N-shot learning setting where solely labelled data and
1516
+ no prototypes are available (denoted with suffix $-$). For comparison,
1517
+ PST+ is the method in \cite{transferlearningNIPS} which uses prototypes
1518
+ for the initial label matrix. SVM+ and M2LATM- are the SVM and M2LATM
1519
+ methods used in \cite{lampert13AwAPAMI} and \cite{yanweiPAMIlatentattrib}
1520
+ respectively. For fair comparison, we modify the SVM- used in \cite{lampert13AwAPAMI}
1521
+ into SVM+ (i.e., add the prototype to the pool of SVM training data).
1522
+ Note that our TMV-HLP can be used in both conditions but the PST method
1523
+ \cite{transferlearningNIPS} only applies to the $+$ condition. All
1524
+ experiments are repeated for $10$ rounds with the average results
1525
+ reported. Evaluation is done on the remaining unlabelled target data.
1526
+ From the results shown in Fig.~\ref{fig:N-shot-learning-for}, it
1527
+ can be seen that: (1) TMV-HLP+ always achieves the best performance,
1528
+ particularly given few training examples. (2) The methods that explore
1529
+ transductive learning via label propagation (TMV-HLP+, TMV-HLP-, and
1530
+ PST+) are clearly superior to those that do not (SVM+ and M2LATM-).
1531
+ (3) On AwA, PST+ outperforms TMV-HLP- with fewer than 3 instances per
1532
+ class. Because PST+ exploits the prototypes, this suggests that a
1533
+ single good prototype is more informative than a few labelled instances
1534
+ in N-shot learning. This also explains why sometimes the N-shot learning
1535
+ results of TMV-HLP+ are worse than its zero-shot learning results
1536
+ when only a few training labels are observed (e.g.~on AwA, the TMV-HLP+
1537
+ accuracy goes down before going up when more labelled instances are
1538
+ added). Note that when more labelled instances are available, TMV-HLP-
1539
+ starts to outperform PST+, because it combines the different views
1540
+ of the training instances, and the strong effect of the prototypes
1541
+ is eventually outweighed.
1542
+
1543
+
1544
+
1545
+ \end{document}
papers/1502/1502.03044.tex ADDED
@@ -0,0 +1,972 @@
1
+
2
+
3
+ \documentclass{article}
4
+
5
+ \usepackage{times}
6
+ \usepackage{graphicx} \usepackage{subfigure}
7
+ \usepackage{amsmath}
8
+ \usepackage{amsfonts}
9
+
10
+ \usepackage{stfloats}
11
+
12
+ \usepackage{url}
13
+
14
+ \usepackage{natbib}
15
+
16
+ \usepackage{algorithm}
17
+ \usepackage{algorithmic}
18
+
19
+ \usepackage{hyperref}
20
+
21
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
22
+
23
+
24
+
25
+ \usepackage[accepted]{arXiv}
26
+
27
+
28
+ \graphicspath{ {./figures/} }
29
+
30
+
31
+
32
+
33
+ \newcommand{\obs}{\text{obs}}
34
+ \newcommand{\mis}{\text{mis}}
35
+
36
+ \newcommand{\qt}[1]{\left<#1\right>}
37
+ \newcommand{\ql}[1]{\left[#1\right]}
38
+ \newcommand{\hess}{\mathbf{H}}
39
+ \newcommand{\jacob}{\mathbf{J}}
40
+ \newcommand{\hl}{HL}
41
+ \newcommand{\cost}{\mathcal{L}}
42
+ \newcommand{\lout}{\mathbf{r}}
43
+ \newcommand{\louti}{r}
44
+ \newcommand{\outi}{y}
45
+ \newcommand{\out}{\mathbf{y}}
46
+ \newcommand{\gauss}{\mathbf{G_N}}
47
+ \newcommand{\eye}{\mathbf{I}}
48
+ \newcommand{\softmax}{\phi}
49
+ \newcommand{\targ}{\mathbf{t}}
50
+ \newcommand{\metric}{\mathbf{G}}
51
+ \newcommand{\sample}{\mathbf{z}}
52
+ \newcommand{\bmx}[0]{\begin{bmatrix}}
53
+ \newcommand{\emx}[0]{\end{bmatrix}}
54
+ \newcommand{\qexp}[1]{\left<#1\right>}
55
+ \newcommand{\vect}[1]{\mathbf{#1}}
56
+ \newcommand{\vects}[1]{\boldsymbol{#1}}
57
+ \newcommand{\matr}[1]{\mathbf{#1}}
58
+ \newcommand{\var}[0]{\operatorname{Var}}
59
+ \newcommand{\std}[0]{\operatorname{std}}
60
+ \newcommand{\cov}[0]{\operatorname{Cov}}
61
+ \newcommand{\diag}[0]{\operatorname{diag}}
62
+ \newcommand{\matrs}[1]{\boldsymbol{#1}}
63
+ \newcommand{\va}[0]{\vect{a}}
64
+ \newcommand{\vb}[0]{\vect{b}}
65
+ \newcommand{\vc}[0]{\vect{c}}
66
+ \newcommand{\ve}[0]{\vect{e}}
67
+ \newcommand{\vh}[0]{\vect{h}}
68
+ \newcommand{\vv}[0]{\vect{v}}
69
+ \newcommand{\vx}[0]{\vect{x}}
70
+ \newcommand{\vn}[0]{\vect{n}}
71
+ \newcommand{\vz}[0]{\vect{z}}
72
+ \newcommand{\vw}[0]{\vect{w}}
73
+ \newcommand{\vs}[0]{\vect{s}}
74
+ \newcommand{\vf}[0]{\vect{f}}
75
+ \newcommand{\vi}[0]{\vect{i}}
76
+ \newcommand{\vo}[0]{\vect{o}}
77
+ \newcommand{\vy}[0]{\vect{y}}
78
+ \newcommand{\vg}[0]{\vect{g}}
79
+ \newcommand{\vm}[0]{\vect{m}}
80
+ \newcommand{\vu}[0]{\vect{u}}
81
+ \newcommand{\vL}[0]{\vect{L}}
82
+ \newcommand{\vr}[0]{\vect{r}}
83
+ \newcommand{\mW}[0]{\matr{W}}
84
+ \newcommand{\mG}[0]{\matr{G}}
85
+ \newcommand{\mX}[0]{\matr{X}}
86
+ \newcommand{\mQ}[0]{\matr{Q}}
87
+ \newcommand{\mU}[0]{\matr{U}}
88
+ \newcommand{\mV}[0]{\matr{V}}
89
+ \newcommand{\vE}[0]{\matr{E}}
90
+ \newcommand{\mA}{\matr{A}}
91
+ \newcommand{\mD}{\matr{D}}
92
+ \newcommand{\mS}{\matr{S}}
93
+ \newcommand{\mI}{\matr{I}}
94
+ \newcommand{\td}[0]{\text{d}}
95
+ \newcommand{\TT}[0]{\vects{\theta}}
96
+ \newcommand{\vsig}[0]{\vects{\sigma}}
97
+ \newcommand{\valpha}[0]{\vects{\alpha}}
98
+ \newcommand{\vmu}[0]{\vects{\mu}}
99
+ \newcommand{\vzero}[0]{\vect{0}}
100
+ \newcommand{\tf}[0]{\text{m}}
101
+ \newcommand{\tdf}[0]{\text{dm}}
102
+ \newcommand{\grad}[0]{\nabla}
103
+ \newcommand{\alert}[1]{\textcolor{red}{#1}}
104
+ \newcommand{\N}[0]{\mathcal{N}}
105
+ \newcommand{\LL}[0]{\mathcal{L}}
106
+ \newcommand{\HH}[0]{\mathcal{H}}
107
+ \newcommand{\RR}[0]{\mathbb{R}}
108
+ \newcommand{\II}[0]{\mathbb{I}}
109
+ \newcommand{\Scal}[0]{\mathcal{S}}
110
+ \newcommand{\sigmoid}{\sigma}
111
+ \newcommand{\E}[0]{\mathbb{E}}
112
+ \newcommand{\enabla}[0]{\ensuremath{\overset{\raisebox{-0.3ex}[0.5ex][0ex]{\ensuremath{\scriptscriptstyle e}}}{\nabla}}}
113
+ \newcommand{\enhnabla}[0]{\nabla_{\hspace{-0.5mm}e}\,}
114
+ \newcommand{\tred}[1]{\textcolor{red}{#1}}
115
+ \newcommand{\tgreen}[1]{\textcolor{green}{#1}}
116
+ \newcommand{\tblue}[1]{\textcolor{blue}{#1}}
117
+ \newcommand{\todo}[1]{{\Large\textcolor{red}{#1}}}
118
+ \newcommand{\done}[1]{{\Large\textcolor{green}{#1}}}
119
+ \newcommand{\dd}[1]{\ensuremath{\mbox{d}#1}}
120
+
121
+ \DeclareMathOperator*{\argmax}{\arg \max}
122
+ \DeclareMathOperator*{\argmin}{\arg \min}
123
+ \newcommand{\newln}{\\&\quad\quad{}}
124
+
125
+ \newcommand{\Ax}{\mathcal{A}_x}
126
+ \newcommand{\Ay}{\mathcal{A}_y}
127
+ \newcommand{\ola}{\overleftarrow}
128
+ \newcommand{\ora}{\overrightarrow}
129
+ \newcommand{\ov}{\overline}
130
+ \newcommand{\ts}{\rule{0pt}{2.6ex}} \newcommand{\ms}{\rule{0pt}{0ex}} \newcommand{\bs}{\rule[-1.2ex]{0pt}{0pt}} \newcommand{\specialcell}[2][c]{\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
131
+
132
+ \newcommand{\bx}{\textbf{x}}
133
+ \newcommand{\by}{\textbf{y}}
134
+ \newcommand{\bW}{\textbf{W}}
135
+
136
+ \icmltitlerunning{Neural Image Caption Generation with Visual Attention}
137
+
138
+ \begin{document}
139
+
140
+ \twocolumn[
141
+ \icmltitle{Show, Attend and Tell: Neural Image Caption\\Generation with Visual Attention}
142
+
143
+ \icmlauthor{Kelvin Xu}{kelvin.xu@umontreal.ca}
144
+ \icmlauthor{Jimmy Lei Ba}{jimmy@psi.utoronto.ca}
145
+ \icmlauthor{Ryan Kiros}{rkiros@cs.toronto.edu}
146
+ \icmlauthor{Kyunghyun Cho}{kyunghyun.cho@umontreal.ca}
147
+ \icmlauthor{Aaron Courville}{aaron.courville@umontreal.ca}
148
+ \icmlauthor{Ruslan Salakhutdinov}{rsalakhu@cs.toronto.edu}
149
+ \icmlauthor{Richard S. Zemel}{zemel@cs.toronto.edu}
150
+ \icmlauthor{Yoshua Bengio}{find-me@the.web}
151
+
152
+
153
+
154
+
155
+ \vskip 0.3in
156
+ ]
157
+
158
+ \begin{abstract}
159
+ Inspired by recent work in machine translation and object detection, we
160
+ introduce an attention based model that automatically learns to describe the
161
+ content of images. We describe how we can train this model in a deterministic
162
+ manner using standard backpropagation techniques and stochastically by
163
+ maximizing a variational lower bound. We also show through visualization how
164
+ the model is able to automatically learn to fix its gaze on salient objects
165
+ while generating the corresponding words in the output sequence.
166
+ We validate the use of attention with state-of-the-art performance on three
167
+ benchmark datasets: Flickr8k, Flickr30k and MS COCO.
168
+ \end{abstract}
169
+
170
+ \section{Introduction}
171
+ Automatically generating captions of an image is a task very close to the heart
172
+ of scene understanding --- one of the primary goals of computer vision. Not only
173
+ must caption generation models be powerful enough to solve the computer vision
174
+ challenges of determining which objects are in an image, but they must also be
175
+ capable of capturing and expressing their relationships in a natural language.
176
+ For this reason, caption generation has long been viewed as a difficult
177
+ problem. It is a very important challenge for machine learning
178
+ algorithms, as it amounts to mimicking the remarkable human ability to compress
179
+ huge amounts of salient visual infomation into descriptive language.
180
+
181
+ Despite the challenging nature of this task, there has been a
182
+ recent surge of research interest in attacking the image caption generation
183
+ problem. Aided by advances in training neural networks \citep{Krizhevsky2012}
184
+ and large classification datasets \citep{Imagenet14}, recent work has
185
+ significantly improved the quality of caption generation using a combination of
186
+ convolutional neural networks (convnets) to obtain vectorial representation of images and
187
+ recurrent neural networks to decode those representations into natural language
188
+ sentences (see Sec.~\ref{section:background}).
189
+
190
+ \begin{figure}[tp]
191
+ \label{figure:model_diagram}
192
+ \centering
193
+ \caption{Our model learns a words/image alignment. The visualized
194
+ attentional maps (3) are explained in section \ref{section:model} \& \ref{section:viz}}
195
+ \vspace{3mm}
196
+ \includegraphics[width=\columnwidth]{model_diagram.pdf}
197
+ \vspace{-6mm}
198
+ \end{figure}
199
+ \begin{figure*}[!tp]
200
+ \label{figure:attention_diagram}
201
+ \centering
202
+ \caption{Attention over time. As the model generates each word, its attention changes to reflect the relevant parts of the image. ``soft''
203
+ (top row) vs ``hard'' (bottom row) attention. (Note that both models generated the same captions in this example.) }
204
+ \includegraphics[width=6.5in]{runout}
205
+ \vspace{-5mm}
206
+ \end{figure*}
207
+ \begin{figure*}[!tp]
208
+ \label{figure:alignments}
209
+ \centering
210
+ \caption{Examples of attending to the correct object (\textit{white} indicates the attended regions,
211
+ \textit{underlines} indicated the corresponding word)}
212
+ \includegraphics[width=0.87\textwidth]{good.pdf}
213
+ \vspace{-5mm}
214
+ \end{figure*}
215
+
216
+ One of the most curious facets of the human visual system is the presence of
217
+ attention \citep{Rensink2000,Corbetta2002}. Rather than compress an entire
218
+ image into a static representation, attention allows for salient features to
219
+ dynamically come to the forefront as needed. This is especially important when
220
+ there is a lot of clutter in an image. Using representations (such as those
221
+ from the top layer of a convnet) that distill information in image down to the
222
+ most salient objects is one effective solution that has been widely adopted in
223
+ previous work. Unfortunately, this has one potential drawback of losing
224
+ information which could be useful for richer, more descriptive captions. Using
225
+ more low-level representation can help preserve this information. However
226
+ working with these features necessitates a powerful mechanism to steer the
227
+ model to information important to the task at hand.
228
+
229
+ In this paper, we describe approaches to caption generation that attempt to
230
+ incorporate a form of attention with two variants: a ``hard'' attention
231
+ mechanism and a ``soft'' attention mechanism. We also show how one advantage of
232
+ including attention is the ability to visualize what the model ``sees''.
233
+ Encouraged by recent advances in caption generation and inspired by recent
234
+ success in employing attention in machine translation \citep{Bahdanau2014} and
235
+ object recognition \citep{Ba2014,Mnih2014}, we investigate models that can
236
+ attend to salient part of an image while generating its caption.
237
+
238
+ The contributions of this paper are the following:
239
+ \vspace{- 1mm}
240
+ \begin{itemize}
241
+ \vspace{-2mm}
242
+ \item We introduce two attention-based image caption generators under
243
+ a common framework (Sec.~\ref{section:model}): 1) a ``soft'' deterministic attention mechanism trainable by standard
244
+ back-propagation methods and 2) a ``hard'' stochastic attention mechanism trainable by
245
+ maximizing an approximate variational lower bound or equivalently by
246
+ REINFORCE~\citep{Williams92}.
247
+ \vspace{-2mm}
248
+ \item We show how we can gain insight and interpret the results of
249
+ this framework by visualizing ``where'' and ``what'' the attention focused on.
250
+ (see Sec.~\ref{section:viz})
251
+ \vspace{-2mm}
252
+ \item Finally, we quantitatively validate the usefulness of attention in
253
+ caption generation with state of the art performance
254
+ (Sec.~\ref{section:results}) on three benchmark datasets: Flickr8k
255
+ \citep{Hodosh2013} , Flickr30k \citep{Young2014} and the MS COCO dataset
256
+ \citep{Lin2014}.
257
+ \end{itemize}
258
+ \section{Related Work}
259
+ \label{section:background}
260
+
261
+ In this section we provide relevant background on previous work on
262
+ image caption generation and attention.
263
+ Recently, several methods have been proposed for generating image descriptions.
264
+ Many of these methods are based on recurrent neural networks and inspired
265
+ by the successful use of sequence to sequence training with neural networks for
266
+ machine translation~\citep{Cho2014,Bahdanau2014,Sutskever2014}. One
267
+ major reason image caption generation is well suited to the encoder-decoder framework
268
+ \citep{Cho2014} of machine translation is because it is analogous
269
+ to ``translating'' an image to a sentence.
270
+
271
+ The first approach to use neural networks for caption generation was
272
+ \citet{Kiros2014a}, who proposed a multimodal log-bilinear model that was
273
+ biased by features from the image. This work was later followed by
274
+ \citet{Kiros2014b} whose method was designed to explicitly allow a natural way
275
+ of doing both ranking and generation. \citet{Mao2014} took a similar approach
276
+ to generation but replaced a feed-forward neural language model with a
277
+ recurrent one. Both \citet{Vinyals2014} and \citet{Donahue2014} use LSTM RNNs
278
+ for their models. Unlike \citet{Kiros2014a} and \citet{Mao2014} whose models
279
+ see the image at each time step of the output word sequence,
280
+ \citet{Vinyals2014} only show the image to the RNN at the beginning. Along
281
+ with images, \citet{Donahue2014} also apply LSTMs to videos, allowing
282
+ their model to generate video descriptions.
283
+
284
+ All of these works represent images as a single feature vector from the top
285
+ layer of a pre-trained convolutional network. \citet{Karpathy2014} instead
286
+ proposed to learn a joint embedding space for ranking and generation whose
287
+ model learns to score sentence and image similarity as a function of R-CNN
288
+ object detections with outputs of a bidirectional RNN.
289
+ \citet{Fang2014} proposed a three-step pipeline for
290
+ generation by incorporating
291
+ object detections.
292
+ Their model first learn detectors for several visual concepts
293
+ based on a multi-instance learning framework. A language model trained on
294
+ captions was then applied to the detector outputs, followed by rescoring from a
295
+ joint image-text embedding space. Unlike these models, our proposed attention
296
+ framework does not explicitly use object detectors but instead learns latent
297
+ alignments from scratch. This allows our model to go beyond ``objectness'' and
298
+ learn to attend to abstract concepts.
299
+
300
+ Prior to the use of neural networks for generating captions, two main
301
+ approaches were dominant. The first involved generating caption templates which
302
+ were filled in based on the results of object detections and attribute
303
+ discovery (\citet{Kulkarni2013}, \citet{Li2011},
304
+ \citet{Yang2011}, \citet{Mitchell2012}, \citet{Elliott2013}). The second
305
+ approach was based on first retrieving similar captioned images from a large
306
+ database then modifying these retrieved captions to fit the query
307
+ \citep{Kuznetsova2012,Kuznetsova2014}. These approaches typically involved an
308
+ intermediate ``generalization'' step to remove the specifics of a caption that
309
+ are only relevant to the retrieved image, such as the name of a city. Both of
310
+ these approaches have since fallen out of favour to the now dominant neural
311
+ network methods.
312
+
313
+ There has been a long line of previous work incorporating attention into neural
314
+ networks for vision related tasks. Some that share the same spirit as our work
315
+ include \citet{Larochelle2010,Denil2012,Tang2014}. In particular however, our
316
+ work directly extends the work of \citet{Bahdanau2014,Mnih2014,Ba2014}.
317
+
318
+ \section{Image Caption Generation with Attention Mechanism}
319
+
320
+ \subsection{Model Details}
321
+ \label{section:model}
322
+ In this section, we describe the two variants of our attention-based model by
323
+ first describing their common framework. The main difference is the definition
324
+ of the $\phi$ function which we describe in detail in Section
325
+ \ref{sec:det_sto}. We denote vectors with bolded font and matrices with
326
+ capital letters. In our description below, we suppress bias terms for
327
+ readability.
328
+
329
+ \subsubsection{Encoder: Convolutional Features}
330
+
331
+ Our model takes a single raw image and generates a caption $\vy$
332
+ encoded as a sequence of 1-of-$K$ encoded words.
333
+ \[
334
+ y = \left\{\vy_1, \ldots, \vy_{C} \right\},\mbox{ } \vy_i \in \RR^{K}
335
+ \]
336
+ where $K$ is the size of the vocabulary and $C$ is the length of the caption.
337
+
338
+ We use a convolutional neural network in order to extract a set of feature
339
+ vectors which we refer to as annotation vectors. The extractor produces $L$
340
+ vectors, each of which is a D-dimensional representation corresponding to a
341
+ part of the image.
342
+ \begin{align*}
343
+ a = \left\{\va_1, \ldots, \va_L \right\},\mbox{ } \va_i \in \RR^{D}
344
+ \end{align*}
345
+ In order to obtain a correspondence between the feature vectors and portions of
346
+ the 2-D image, we extract features from a lower convolutional layer unlike
347
+ previous work which instead used a fully connected layer. This allows the
348
+ decoder to selectively focus on certain parts of an image by selecting a subset
349
+ of all the feature vectors.
350
+ \begin{figure}[tp]
351
+ \vskip 0.2in
352
+ \begin{center}
353
+ \centerline{\includegraphics[width=\columnwidth]{lstm_2.pdf}}
354
+ \caption{A LSTM cell, lines with bolded squares imply projections with a
355
+ learnt weight vector. Each cell learns how to weigh
356
+ its input components (input gate), while learning how to modulate that
357
+ contribution to the memory (input modulator). It also learns
358
+ weights which erase the memory cell (forget gate), and weights
359
+ which control how this memory should be emitted (output gate).}
360
+ \label{figure:conditional_lstm}
361
+ \end{center}
362
+ \vskip -0.3 in
363
+ \end{figure}
364
+ \subsubsection{Decoder: Long Short-Term Memory Network}
365
+
366
+ We use a long short-term memory (LSTM)
367
+ network~\citep{Hochreiter+Schmidhuber-1997} that produces a caption by
368
+ generating one word at every time step conditioned on a context vector, the
369
+ previous hidden state and the previously generated words. Our implementation of
370
+ LSTM closely follows the one used in \citet{Zaremba2014} (see Fig.~\ref{figure:conditional_lstm}). Using $T_{s,t}:
371
+ \RR^{s} \rightarrow \RR^{t}$ to denote a simple affine transformation with
372
+ parameters that are learned,
373
+
374
+ \begin{align}
375
+ \label{eq:lstm_gates}
376
+ \begin{pmatrix}
377
+ \vi_t \\
378
+ \vf_t \\
379
+ \vo_t \\
380
+ \vg_t \\
381
+ \end{pmatrix} =
382
+ &
383
+ \begin{pmatrix}
384
+ \sigmoid \\
385
+ \sigmoid \\
386
+ \sigmoid \\
387
+ \tanh \\
388
+ \end{pmatrix}
389
+ T_{D+m+n, n}
390
+ \begin{pmatrix}
391
+ \vE\vy_{t-1}\\
392
+ \vh_{t-1}\\
393
+ \hat{\vz_t}\\
394
+ \end{pmatrix}
395
+ \\
396
+ \label{eq:lstm_memory}
397
+ \vc_t &= \vf_t \odot \vc_{t-1} + \vi_t \odot \vg_t \\
398
+ \label{eq:lstm_hidden}
399
+ \vh_t &= \vo_t \odot \tanh (\vc_{t}).
400
+ \end{align}
401
+ Here, $\vi_t$, $\vf_t$, $\vc_t$, $\vo_t$, $\vh_t$ are the input, forget, memory, output
402
+ and hidden state of the LSTM, respectively. The vector $\hat{\vz} \in \RR^{D}$ is the context
403
+ vector, capturing the visual information associated with a particular
404
+ input location, as explained below. $\vE\in\RR^{m\times K}$ is an embedding matrix. Let $m$ and $n$
405
+ denote the embedding and LSTM dimensionality respectively and $\sigma$ and
406
+ $\odot$ be the logistic sigmoid activation and element-wise multiplication
407
+ respectively.
408
+
409
+ In simple terms, the context vector $\hat{\vz}_t$ (equations~\eqref{eq:lstm_gates}--\eqref{eq:lstm_hidden})
410
+ is a dynamic representation of the relevant part of the image input at time $t$.
411
+ We define a mechanism $\phi$ that computes $\hat{\vz}_t$ from the annotation vectors $\va_i, i=1,\ldots,L$
412
+ corresponding to the features extracted at different image locations. For
413
+ each location $i$, the
414
+ mechanism generates a positive weight $\alpha_i$ which can
415
+ be interpreted either as the probability that location $i$ is the right place to focus
416
+ for producing the next word (the ``hard'' but stochastic attention mechanism),
417
+ or as the relative importance to give to location $i$ when blending the $\va_i$'s together.
418
+ The weight $\alpha_i$ of each annotation vector $\va_i$ is computed by an
419
+ \emph{attention model} $f_{\mbox{att}}$, for which we use a multilayer perceptron conditioned on the previous hidden state $\vh_{t-1}$.
420
+ The soft version of this attention mechanism was introduced by~\citet{Bahdanau2014}.
421
+ For emphasis, we note that the hidden state varies as the output RNN advances in
422
+ its output sequence: ``where'' the network looks next depends on the sequence of words that has already been
423
+ generated.
424
+ \begin{align}
425
+ e_{ti} =& f_{\mbox{att}} (\va_i, \vh_{t-1}) \\
426
+ \label{eq:alpha}
427
+ \alpha_{ti} =& \frac{\exp(e_{ti})}{\sum_{k=1}^L \exp(e_{tk})}.
428
+ \end{align}
429
+ Once the weights (which sum to one) are computed, the context vector $\hat{\vz}_t$
430
+ is computed by
431
+ \begin{align}
432
+ \label{eq:context}
433
+ \hat{\vz}_t = \phi\left( \left\{ \va_i \right\}, \left\{ \alpha_i \right\}
434
+ \right),
435
+ \end{align}
436
+ where $\phi$ is a function that returns a single vector given the set of
437
+ annotation vectors and their corresponding weights. The details of the $\phi$ function
438
+ are discussed in Sec.~\ref{sec:det_sto}.
439
+
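+ As a minimal illustrative sketch of this attention mechanism (using a
+ one-hidden-layer MLP for $f_{\mbox{att}}$; the layer width and all variable
+ names are assumptions made for exposition, not our released code):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def f_att(a, h_prev, Wa, Wh, v):
+     """Score each annotation vector a_i against the previous hidden state."""
+     # a: (L, D), h_prev: (n,)  ->  scores e_t: (L,)
+     return np.tanh(a.dot(Wa) + h_prev.dot(Wh)).dot(v)
+ 
+ def soft_attention(a, h_prev, Wa, Wh, v):
+     e = f_att(a, h_prev, Wa, Wh, v)
+     alpha = np.exp(e - e.max())
+     alpha /= alpha.sum()              # softmax over the L locations
+     z_hat = alpha.dot(a)              # soft context vector, shape (D,)
+     return z_hat, alpha
+ 
+ L, D, n, k = 196, 512, 1000, 256      # k: attention MLP width (assumed)
+ rng = np.random.RandomState(0)
+ a = rng.randn(L, D)
+ z_hat, alpha = soft_attention(a, np.zeros(n),
+                               0.01*rng.randn(D, k), 0.01*rng.randn(n, k),
+                               0.01*rng.randn(k))
+ assert np.isclose(alpha.sum(), 1.0)
+ \end{verbatim}
+ 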
440
+ The initial memory state and hidden state of the LSTM are predicted by an
441
+ average of the annotation vectors fed through two separate MLPs
442
+ ($\text{init,c}$ and $\text{init,h}$):
443
+ \begin{align}
444
+ \vc_0 = f_{\text{init,c}} (\frac{1}{L} \sum_i^L \va_i) \nonumber \\
445
+ \vh_0 = f_{\text{init,h}} (\frac{1}{L} \sum_i^L \va_i) \nonumber
446
+ \end{align}
447
+
448
+ In this work, we use a deep output layer~\citep{Pascanu2014} to compute the
449
+ output word probability given the LSTM state, the context vector and the
450
+ previous word:
451
+ \begin{gather}
452
+ \label{eq:p-out}
453
+ p(\vy_t | \va, \vy_1^{t-1}) \propto \exp(\vL_o(\vE\vy_{t-1} + \vL_h\vh_t+ \vL_z \hat{\vz}_t))
454
+ \end{gather}
455
+
456
+ Where $\vL_o\in\RR^{K\times m}$, $\vL_h\in\RR^{m\times n}$, $\vL_z\in\RR^{m\times D}$, and
457
+ $\vE$ are learned parameters initialized randomly.
458
+
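+ A small sketch of this readout, where the proportionality in Eq.~\eqref{eq:p-out}
+ is resolved by a softmax (dimensions and names are ours, for illustration only):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def word_distribution(Ey_prev, h_t, z_hat, Lo, Lh, Lz):
+     """Deep output layer: softmax over the K-word vocabulary of
+     Lo(E y_{t-1} + Lh h_t + Lz z_hat_t)."""
+     logits = Lo.dot(Ey_prev + Lh.dot(h_t) + Lz.dot(z_hat))   # shape (K,)
+     logits -= logits.max()                                   # for stability
+     p = np.exp(logits)
+     return p / p.sum()
+ 
+ K, m, n, D = 10000, 100, 1000, 512
+ rng = np.random.RandomState(0)
+ p = word_distribution(rng.randn(m), rng.randn(n), rng.randn(D),
+                       0.01*rng.randn(K, m), 0.01*rng.randn(m, n),
+                       0.01*rng.randn(m, D))
+ assert np.isclose(p.sum(), 1.0)
+ \end{verbatim}
+ 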
459
+ \section{Learning Stochastic ``Hard'' vs Deterministic ``Soft'' Attention}
460
+ \label{sec:det_sto}
461
+
462
+ In this section we discuss two alternative mechanisms for the attention
463
+ model $f_{\mbox{att}}$: stochastic attention and deterministic attention.
464
+ \begin{figure*}[!tp]
465
+ \label{fig:second}
466
+ \caption{Examples of mistakes where we can use attention to gain intuition into what the model saw.}
467
+ \includegraphics[width=1.02\textwidth]{errors.pdf}
468
+ \label{fig:subfigures}
469
+ \end{figure*}
470
+ \subsection{Stochastic ``Hard'' Attention}
471
+ \label{sec:sto_attn}
472
+ We use the location variable $s_t$ to denote where the model decides to focus attention
473
+ when generating the $t^{th}$ word. $s_{t,i}$ is an indicator one-hot variable which is set to 1
474
+ if the $i$-th location (out of $L$)
475
+ is the one used to extract visual features. By treating
476
+ the attention locations as intermediate latent variables, we can
477
+ assign them a multinoulli distribution parametrized by $\{\alpha_i\}$,
478
+ and view $\hat{\vz}_t$ as a random variable:
479
+ \begin{align}
480
+ \label{eq:s_dist}
481
+ p(&s_{t,i} = 1 \mid s_{j<t}, \va ) = \alpha_{t,i} \\
482
+ \label{eq:hard_context}
483
+ \hat{\vz}_t &= \sum_i {s}_{t,i}\va_{i}.
484
+ \end{align}
485
+ We define a new objective function $L_s$ that is a variational lower bound on the marginal log-likelihood
486
+ $\log p( \vy \mid \va)$ of observing the sequence of words $\vy$ given image features $\va$.
487
+ The learning algorithm for the parameters $W$ of the models can be derived by
488
+ directly optimizing $L_s$:
489
+ \begin{align}
490
+ L_s = & \sum_s p(s \mid \va) \log p(\vy \mid s, \va) \nonumber \\
491
+ \leq & \log \sum_{s}p(s \mid \va)p( \vy \mid s, \va) \nonumber \\
492
+ = & \log p( \vy \mid \va) \label{eq:mll}
493
+ \end{align}
494
+ \begin{multline}
495
+ \label{eq:lb_gradient}
496
+ \frac{\partial L_s}{\partial W} = \sum_s p(s \mid \va) \bigg[ \frac{\partial \log p(\vy \mid s, \va)}{\partial W} + \\
497
+ \log p(\vy \mid s, \va) \frac{\partial \log p(s \mid \va)}{\partial W} \bigg].
498
+ \end{multline}
499
+ Equation \ref{eq:lb_gradient} suggests a Monte Carlo based sampling
500
+ approximation of the gradient with respect to the model parameters. This can be
501
+ done by sampling the location $s_t$ from a multinoulli distribution defined by
502
+ Equation \ref{eq:s_dist}.
503
+ \begin{gather}
504
+ \tilde{s_t} \sim \text{Multinoulli}_L(\{\alpha_i\}) \nonumber
505
+ \end{gather}
506
+ \begin{multline}
507
+ \frac{\partial L_s}{\partial W} \approx \frac{1}{N} \sum_{n=1}^{N} \bigg[ \frac{\partial \log p(\vy \mid \tilde{s}^n, \va)}{\partial W} + \\
508
+ \log p(\vy \mid \tilde{s}^n, \va) \frac{\partial \log p(\tilde{s}^n \mid \va)}{\partial W} \bigg]
509
+ \end{multline}
510
+ A moving average baseline is used to reduce the variance in the
511
+ Monte Carlo estimator of the gradient, following~\citet{Weaver+Tao-UAI2001}.
512
+ Similar, but more complicated variance
513
+ reduction techniques have previously been used by \citet{Mnih2014} and
514
+ \citet{Ba2014}. Upon seeing the $k^{th}$ mini-batch, the moving average baseline
515
+ is estimated as an accumulated sum of the previous log likelihoods with exponential decay:
516
+ \begin{align}
517
+ b_k = 0.9 \times b_{k-1} + 0.1 \times \log p(\vy \mid \tilde{s}_k , \va) \nonumber
518
+ \end{align}
519
+ To further reduce the estimator variance, an entropy term on the multinoulli
520
+ distribution $H[s]$ is added. Also, with probability 0.5 for a given image, we
521
+ set the sampled attention location $\tilde{s}$ to its expected value $\alpha$.
522
+ Both techniques improve the robustness of the stochastic attention learning algorithm.
523
+ The final learning rule for the model is then the following:
524
+ \begin{multline}
525
+ \frac{\partial L_s}{\partial W} \approx \frac{1}{N} \sum_{n=1}^{N} \bigg[ \frac{\partial \log p(\vy \mid \tilde{s}^n, \va)}{\partial W} + \\
526
+ \lambda_r( \log p(\vy \mid \tilde{s}^n, \va) - b) \frac{\partial \log p(\tilde{s}^n \mid \va)}{\partial W} + \lambda_e\frac{\partial H[\tilde{s}^n]}{\partial W} \bigg]
527
+ \nonumber
528
+ \end{multline}
529
+ where $\lambda_r$ and $\lambda_e$ are two hyper-parameters set by cross-validation.
530
+ As pointed out and used in \citet{Ba2014} and \citet{Mnih2014}, this
531
+ formulation is equivalent to the REINFORCE learning rule~\citep{Williams92}, where the
532
+ reward for the attention choosing a sequence of actions is a real value proportional to the log likelihood
533
+ of the target sentence under the sampled attention trajectory.
534
+
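+ The following sketch illustrates the ingredients of this update at a single
+ time step -- sampling a location, the moving-average baseline, the optional
+ substitution of the expected value, and the scalar that weights the
+ score-function term. It is schematic; the names are ours and it is not our
+ training code.
+ \begin{verbatim}
+ import numpy as np
+ 
+ rng = np.random.RandomState(0)
+ 
+ def sample_hard_context(a, alpha):
+     """Draw s_t ~ Multinoulli(alpha) and return the selected annotation."""
+     i = rng.choice(len(alpha), p=alpha)
+     return a[i], i
+ 
+ def update_baseline(b_prev, log_p_y, decay=0.9):
+     """b_k = 0.9 b_{k-1} + 0.1 log p(y | s_k, a)."""
+     return decay * b_prev + (1.0 - decay) * log_p_y
+ 
+ def reinforce_weight(log_p_y, baseline, lambda_r=1.0):
+     """Scalar multiplying d log p(s|a)/dW in the sampled gradient."""
+     return lambda_r * (log_p_y - baseline)
+ 
+ L, D = 196, 512
+ a = rng.randn(L, D)
+ alpha = np.ones(L) / L                    # attention weights from the softmax
+ z_hat, i = sample_hard_context(a, alpha)
+ if rng.rand() < 0.5:                      # with prob. 0.5 use the expectation
+     z_hat = alpha.dot(a)
+ b = update_baseline(0.0, log_p_y=-42.0)   # toy log-likelihood value
+ w = reinforce_weight(log_p_y=-42.0, baseline=b)
+ \end{verbatim}
+ 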
535
+ In making a hard choice at every point, $\phi\left( \left\{ \va_i \right\},
536
+ \left\{ \alpha_i \right\}\right)$ from Equation \ref{eq:context} is a function that
537
+ returns a sampled $\va_i$ at every point in time based upon a multinoulli
538
+ distribution parameterized by $\alpha$.
539
+
540
+ \subsection{Deterministic ``Soft'' Attention}
541
+ \label{sec:det_attn}
542
+ Learning stochastic attention requires sampling the attention location $s_t$
543
+ each time; instead, we can take the expectation of the context vector
544
+ $\hat{\vz}_t$ directly,
545
+ \begin{align}
546
+ \label{eq:s_dist_soft}
547
+ \mathbb{E}_{p(s_t|a)}[\hat{\vz}_t] = \sum_{i=1}^L \alpha_{t,i} \va_{i}
548
+ \end{align}
549
+ and formulate a deterministic attention model by computing a soft
550
+ attention weighted annotation vector $\phi\left( \left\{ \va_i \right\},
551
+ \left\{ \alpha_i \right\}\right) = \sum_i^L \alpha_i \va_i$ as introduced
552
+ by \citet{Bahdanau2014}. This corresponds to feeding in a
553
+ soft $\alpha$ weighted context into the system. The whole model is smooth and
554
+ differentiable under the deterministic attention, so learning end-to-end is
555
+ trivial by using standard back-propagation.
556
+
557
+ Learning the deterministic attention can also be understood as approximately
558
+ optimizing the marginal likelihood in Equation \ref{eq:mll} under the attention
559
+ location random variable $s_t$ from Sec.~\ref{sec:sto_attn}. The hidden activation of
560
+ the LSTM, $\vh_t$, is a linear projection of the stochastic context vector $\hat{\vz}_t$
561
+ followed by a $\tanh$ non-linearity. To a first-order Taylor approximation, the
562
+ expected value $\mathbb{E}_{p(s_t|a)}[\vh_t]$ is equal to computing $\vh_t$
563
+ using a single forward prop with the expected context vector
564
+ $\mathbb{E}_{p(s_t|a)}[\hat{\vz}_t]$. Considering Eq.~\ref{eq:p-out},
565
+ let $\vn_t = \vL_o(\vE\vy_{t-1} + \vL_h\vh_t+ \vL_z \hat{\vz}_t)$, and let
566
+ $\vn_{t,i}$ denote $\vn_t$ computed by setting the random variable $\hat{\vz}_t$ to $\va_i$.
567
+ We define the normalized weighted geometric mean for the softmax $k^{th}$ word
568
+ prediction:
569
+ \begin{align}
570
+ NWGM[p(y_t=k \mid \va)] &= {\prod_i \exp(n_{t,k,i})^{p(s_{t,i}=1|a)} \over \sum_j \prod_i \exp(n_{t,j,i})^{p(s_{t,i}=1|a)}} \nonumber \\
571
+ &= {\exp(\mathbb{E}_{p(s_{t}|a)}[n_{t,k}]) \over \sum_j \exp(\mathbb{E}_{p(s_t|a)}[n_{t,j}])} \nonumber
572
+ \end{align}
573
+ The equation above shows that the normalized weighted geometric mean of the caption
574
+ prediction can be approximated well by using the expected context vector, where
575
+ $\mathbb{E}[\vn_t] = \vL_o(\vE\vy_{t-1} + \vL_h\mathbb{E}[\vh_t]+ \vL_z \mathbb{E}[\hat{\vz}_t])$.
576
+ It shows that the NWGM of a softmax unit is obtained by applying softmax to the
577
+ expectations of the underlying linear projections. Also, from the results in
578
+ \cite{baldi2014dropout}, $NWGM[p(y_t=k \mid \va)] \approx \mathbb{E}[p(y_t=k \mid \va)]$
579
+ under softmax activation. That means the expectation of the outputs over all
580
+ possible attention locations induced by random variable $s_t$ is computed by
581
+ simple feedforward propagation with expected context vector
582
+ $\mathbb{E}[\hat{\vz}_t]$. In other words, the deterministic attention model
583
+ is an approximation to the marginal likelihood over the attention locations.
584
+
585
+ \subsubsection{Doubly Stochastic Attention}
586
+ \label{section:ds_attn}
587
+
588
+ By construction, $\sum_i \alpha_{ti} = 1$ as they are the output of a softmax.
589
+ In training the deterministic version of our model we introduce a form of
590
+ doubly stochastic regularization, where we also encourage $\sum_t \alpha_{ti} \approx
591
+ 1$. This can be interpreted as encouraging the model to pay equal attention to
592
+ every part of the image over the course of generation. In our experiments, we
593
+ observed that this penalty was quantitatively important for improving the overall
594
+ BLEU score and that, qualitatively, it leads to richer and more descriptive
595
+ captions. In addition, the soft attention model predicts a gating scalar $\beta_t$
596
+ from the previous hidden state $\vh_{t-1}$ at each time step $t$, such that $\phi\left( \left\{ \va_i \right\}, \left\{ \alpha_i \right\}\right) = \beta_t \sum_i^L \alpha_i \va_i$, where $\beta_t = \sigma(f_{\beta}(\vh_{t-1}))$. We noticed that
597
+ including the scalar $\beta$ makes the attention weights put more emphasis on the
598
+ objects in the images.
599
+
600
+ \begin{table*}[!tph]
601
+ \label{table:results}
602
+ \caption{BLEU-{1,2,3,4}/METEOR metrics compared to other methods. $\dagger$ indicates a different split; (---) indicates an unknown metric; $\circ$ indicates the authors
603
+ kindly provided missing metrics by personal communication; $\Sigma$ indicates an ensemble; $a$ indicates the use of
604
+ AlexNet.}
605
+
606
+ \vskip 0.15in
607
+ \centering
608
+ \begin{tabular}{cccccccc}
609
+ \cline{3-6}
610
+ & \multicolumn{1}{l|}{} & \multicolumn{4}{c|}{\bf BLEU} & & \\ \cline{1-7}
611
+ \multicolumn{1}{|c|}{Dataset} & \multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{BLEU-1} & \multicolumn{1}{c|}{BLEU-2} & \multicolumn{1}{c|}{BLEU-3} & \multicolumn{1}{c|}{BLEU-4} & \multicolumn{1}{c|}{METEOR} & \\ \cline{1-7}
612
+ \multicolumn{1}{|c|}{Flickr8k} & \specialcell{ Google NIC\citep{Vinyals2014}$^\dagger$$^\Sigma$ \\ Log Bilinear \citep{Kiros2014a}$^\circ$ \\ Soft-Attention \\ Hard-Attention }
613
+ & \specialcell{63 \\ 65.6 \\ \bf 67 \\ \bf 67 }
614
+ & \specialcell{41 \\ 42.4 \\ 44.8 \\ \bf 45.7 }
615
+ & \specialcell{27 \\ 27.7 \\ 29.9 \\ \bf 31.4 }
616
+ & \specialcell{ --- \\ 17.7 \\ 19.5 \\ \bf 21.3 }
617
+ & \multicolumn{1}{c|}{\specialcell{ --- \\ 17.31 \\ 18.93 \\ \bf 20.30 }} \ts\\
618
+ \cline{1-7}
619
+ \multicolumn{1}{|c|}{Flickr30k} & \specialcell{ Google NIC$^\dagger$$^\circ$$^\Sigma$ \\Log Bilinear\\ Soft-Attention \\ Hard-Attention }
620
+ & \specialcell{66.3 \\ 60.0 \\ 66.7 \\ \bf 66.9 }
621
+ & \specialcell{42.3 \\ 38 \\ 43.4 \\ \bf 43.9 }
622
+ & \specialcell{27.7 \\ 25.4 \\ 28.8 \\ \bf 29.6 }
623
+ & \specialcell{18.3 \\ 17.1 \\ 19.1 \\ \bf 19.9 }
624
+ & \multicolumn{1}{c|}{\specialcell{ --- \\ 16.88 \\ \bf 18.49 \\ 18.46}} \ts\\
625
+ \cline{1-7}
626
+ \multicolumn{1}{|c|}{COCO} & \specialcell{CMU/MS Research \citep{Chen2014}$^a$ \\ MS Research \citep{Fang2014}$^\dagger$$^a$ \\ BRNN \citep{Karpathy2014}$^\circ$ \\Google NIC$^\dagger$$^\circ$$^\Sigma$ \\ Log Bilinear$^\circ$ \\ Soft-Attention \\ Hard-Attention }
627
+ & \specialcell{--- \\ --- \\ 64.2 \\ 66.6 \\ 70.8 \\ 70.7 \\ \bf 71.8 }
628
+ & \specialcell{--- \\ --- \\ 45.1 \\ 46.1 \\ 48.9 \\ 49.2 \\ \bf 50.4 }
629
+ & \specialcell{--- \\ --- \\ 30.4 \\ 32.9 \\ 34.4 \\ 34.4 \\ \bf 35.7 }
630
+ & \specialcell{--- \\ --- \\ 20.3 \\ 24.6 \\ 24.3 \\ 24.3 \\ \bf 25.0 }
631
+ & \multicolumn{1}{c|}{\specialcell{ 20.41 \\ 20.71 \\ --- \\ --- \\ 20.03 \\ \bf 23.90 \\ 23.04 }} \ts\\
632
+ \cline{1-7}
633
+ \end{tabular}
634
+ \vskip -0.1in
635
+ \end{table*}
636
+
637
+
638
+ Concretely, the model is trained end-to-end by minimizing the following penalized negative
639
+ log-likelihood:
640
+ \begin{gather}
641
+ L_d = -\log(P(\textbf{y}|\textbf{x})) + \lambda \sum_i^{L}(1 - \sum_t^C \alpha_{ti})^2
642
+ \end{gather}
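+ As a small illustrative sketch of this penalty (shapes and names assumed for
+ exposition: \texttt{alpha} holds the attention weights of one caption of
+ length $C$ over the $L$ locations, each row summing to one):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def doubly_stochastic_penalty(alpha, lam=1.0):
+     """alpha: (C, L) attention weights. Penalize locations whose total
+     attention over the caption deviates from 1."""
+     per_location = 1.0 - alpha.sum(axis=0)       # shape (L,)
+     return lam * np.square(per_location).sum()
+ 
+ C, L = 12, 196
+ rng = np.random.RandomState(0)
+ alpha = rng.dirichlet(np.ones(L), size=C)        # each row sums to 1
+ penalty = doubly_stochastic_penalty(alpha, lam=1.0)
+ # total loss would be:  -log p(y | x) + penalty
+ \end{verbatim}
+ 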
643
+ \subsection{Training Procedure}
644
+
645
+ Both variants of our attention model were trained with stochastic gradient
646
+ descent using adaptive learning rate algorithms. For the Flickr8k dataset, we
647
+ found that RMSProp \citep{Tieleman2012} worked best, while for Flickr30k/MS
648
+ COCO dataset we used the recently proposed Adam algorithm~\citep{Kingma2014} .
649
+
650
+ To create the annotations $a_i$ used by our decoder, we used the Oxford VGGnet
651
+ \citep{Simonyan14} pre-trained on ImageNet without finetuning. In principle however,
652
+ any encoding function could be used. In addition, with enough data, we could also train
653
+ the encoder from scratch (or fine-tune) with the rest of the model.
654
+ In our experiments we use the 14$\times$14$\times$512 feature map
655
+ of the fourth convolutional layer before max pooling. This means our
656
+ decoder operates on the flattened 196 $\times$ 512 (i.e., $L\times D$) encoding.
657
+
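+ In code, this flattening is simply a reshape (a toy sketch with a dummy array):
+ \begin{verbatim}
+ import numpy as np
+ 
+ # 14 x 14 x 512 conv feature map  ->  L x D = 196 x 512 annotation matrix
+ feature_map = np.zeros((14, 14, 512))
+ annotations = feature_map.reshape(-1, 512)   # a = {a_1, ..., a_L}
+ assert annotations.shape == (196, 512)
+ \end{verbatim}
+ 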
658
+ As our implementation requires time proportional to the length of the longest
659
+ sentence per update, we found training on a random group of captions to be
660
+ computationally wasteful. To mitigate this problem, in preprocessing we build a
661
+ dictionary mapping the length of a sentence to the corresponding subset of
662
+ captions. Then, during training we randomly sample a length and retrieve a
663
+ mini-batch of size 64 of that length. We found that this greatly improved
664
+ convergence speed with no noticeable diminishment in performance. On our
665
+ largest dataset (MS COCO), our soft attention model took less than 3 days to
666
+ train on an NVIDIA Titan Black GPU.
667
+
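+ A sketch of this length-bucketing scheme (illustrative names, not our
+ preprocessing code):
+ \begin{verbatim}
+ import random
+ from collections import defaultdict
+ 
+ def build_length_buckets(captions):
+     """Group caption indices by tokenized length (done once, offline)."""
+     buckets = defaultdict(list)
+     for idx, cap in enumerate(captions):
+         buckets[len(cap.split())].append(idx)
+     return buckets
+ 
+ def sample_minibatch(buckets, batch_size=64):
+     """Pick a random length, then a mini-batch of captions of that length
+     (sampled with replacement, for simplicity of the sketch)."""
+     length = random.choice(list(buckets))
+     bucket = buckets[length]
+     return [random.choice(bucket) for _ in range(batch_size)]
+ 
+ captions = ["a man riding a horse", "a dog", "two people walking on a beach"]
+ buckets = build_length_buckets(captions)
+ batch = sample_minibatch(buckets, batch_size=2)
+ \end{verbatim}
+ 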
668
+ In addition to dropout~\citep{Srivastava14}, the only other regularization
669
+ strategy we used was early stopping on BLEU score. We observed a breakdown in
670
+ correlation between the validation set log-likelihood and BLEU in the later stages
671
+ of training during our experiments. Since BLEU is the most commonly reported
672
+ metric, we used BLEU on our validation set for model selection.
673
+
674
+ In our soft attention
675
+ experiments on Flickr8k, we also used Whetlab\footnote{\url{https://www.whetlab.com/}} \citep{Snoek2012,Snoek2014}.
676
+ Some of the intuitions we gained from the hyperparameter
677
+ regions it explored were especially important in our Flickr30k and COCO experiments.
678
+
679
+ We will make our code for these models, based on Theano \citep{Bergstra2010},
680
+ publicly available upon publication to encourage future research in this
681
+ area.
682
+
683
+ \section{Experiments}
684
+ We describe our experimental methodology and quantitative results which validate the effectiveness
685
+ of our model for caption generation.
686
+
687
+ \subsection{Data}
688
+ We report results on the popular Flickr8k and Flickr30k datasets, which have 8,000
689
+ and 30,000 images respectively, as well as the more challenging Microsoft COCO
690
+ dataset, which has 82,783 images. The Flickr8k and Flickr30k datasets both come with
691
+ 5 reference sentences per image; for the MS COCO dataset, some of the
692
+ images have more than 5 references, which we discard for consistency across
693
+ our datasets. We applied only basic tokenization to MS COCO so that it is
694
+ consistent with the tokenization present in Flickr8k and Flickr30k. For all our
695
+ experiments, we used a fixed vocabulary size of 10,000.
696
+
697
+ Results for our attention-based architecture are reported in Table
698
+ \ref{table:results}. We report results with the frequently used BLEU
699
+ metric\footnote{We verified that our BLEU evaluation code matches that used by the authors of
700
+ \citet{Vinyals2014}, \citet{Karpathy2014} and \citet{Kiros2014b}. For fairness, we only compare against
701
+ results for which we have verified that our BLEU evaluation code is the same. With
702
+ the upcoming release of the COCO evaluation server, we will include comparison
703
+ results with all other recent image captioning models.} which is the standard in the
704
+ caption generation literature. We report BLEU from 1 to 4 without a brevity
705
+ penalty. There has been, however, criticism of BLEU, so in addition we report
706
+ another common metric METEOR \citep{Denkowski2014}, and compare whenever
707
+ possible.
708
+
709
+ \subsection{Evaluation Procedures}
710
+
711
+ A few challenges exist for comparison, which we explain here. The first is a
712
+ difference in choice of convolutional feature extractor. For identical decoder
713
+ architectures, using more recent architectures such as GoogLeNet or Oxford VGG
714
+ \citet{Szegedy2014}, \citet{Simonyan14} can give a boost in performance over
715
+ using the AlexNet \citep{Krizhevsky2012}. In our evaluation, we compare directly
716
+ only with results which use the comparable GoogLeNet/Oxford VGG features, but
717
+ for METEOR comparison we note some results that use AlexNet.
718
+
719
+ The second challenge is a single model versus ensemble comparison. While
720
+ other methods have reported performance boosts by using ensembling, in our
721
+ results we report a single model performance.
722
+
723
+ Finally, there is a challenge due to differences between dataset splits. In our
724
+ reported results, we use the pre-defined splits of Flickr8k. However, one
725
+ challenge for the Flickr30k and COCO datasets is the lack of standardized
726
+ splits. As a result, we report with the publicly available
727
+ splits\footnote{\url{http://cs.stanford.edu/people/karpathy/deepimagesent/}}
728
+ used in previous work \citep{Karpathy2014}. In our experience, differences in
729
+ splits do not make a substantial difference in overall performance, but
730
+ we note the differences where they exist.
731
+
732
+ \subsection{Quantitative Analysis}
733
+ \label{section:results}
734
+ In Table \ref{table:results}, we provide a summary of the experiment validating
735
+ the quantitative effectiveness of attention. We obtain state-of-the-art
736
+ performance on Flickr8k, Flickr30k and MS COCO. In addition, we note
737
+ that in our experiments we are able to significantly improve the state-of-the-art
738
+ METEOR score on MS COCO, which we speculate is connected to
739
+ some of the regularization techniques we used (Sec.~\ref{section:ds_attn}) and our
740
+ lower-level representation. Finally, we also note that we are
741
+ able to obtain this performance using a single model without an ensemble.
742
+
743
+ \subsection{Qualitative Analysis: Learning to attend}
744
+ \label{section:viz}
745
+ By visualizing the attention component learned by the model, we are able to add
746
+ an extra layer of interpretability to the output of the model
747
+ (see Fig. 1). Other systems that have done this rely on object
748
+ detection systems to produce candidate alignment targets~\citep{Karpathy2014}.
749
+ Our approach is much more flexible, since the model can attend to
750
+ ``non-object'' salient regions.
751
+
752
+ The 19-layer OxfordNet uses stacks of 3$\times$3 filters, meaning the only times the
753
+ feature maps decrease in size are at the max pooling layers. The input
754
+ image is resized so that the shortest side is 256 pixels, with the aspect
755
+ ratio preserved. The input to the convolutional network is the center-cropped
756
+ 224$\times$224 image. Consequently, with 4 max pooling layers, the output of
757
+ the top convolutional layer has spatial dimensions 14$\times$14. Thus, in order to visualize
758
+ the attention weights for the soft model, we simply upsample the weights by a
759
+ factor of $2^4 = 16$ and apply a Gaussian filter. We note that the receptive
760
+ fields of each of the $14\times14$ units are highly overlapping.
761
+
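+ One way to produce such a visualization for a single word (the Gaussian width
+ is an assumption; this is not our exact plotting code):
+ \begin{verbatim}
+ import numpy as np
+ from scipy.ndimage import gaussian_filter
+ 
+ def upsample_attention(alpha, factor=16, sigma=8.0):
+     """alpha: (14, 14) attention weights for one word -> (224, 224) map."""
+     grid = np.kron(alpha, np.ones((factor, factor)))   # 2^4 = 16x upsampling
+     return gaussian_filter(grid, sigma=sigma)
+ 
+ alpha = np.random.RandomState(0).dirichlet(np.ones(196)).reshape(14, 14)
+ heatmap = upsample_attention(alpha)
+ assert heatmap.shape == (224, 224)
+ \end{verbatim}
+ 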
762
+ As we can see in Figures 2 and 3, the model learns alignments
763
+ that correspond very strongly with human intuition. Especially in the examples of
764
+ mistakes, we see that it is possible to exploit such visualizations to get an
765
+ intuition as to why those mistakes were made. We provide a more extensive list of
766
+ visualizations in Appendix \ref{section:appendix} for the reader.
767
+
768
+ \section{Conclusion}
769
+
770
+ We propose an attention-based approach that gives state-of-the-art performance
771
+ on three benchmark datasets using the BLEU and METEOR metrics. We also show how the
772
+ learned attention can be exploited to give more interpretability into the
773
+ model's generation process, and demonstrate that the learned alignments
774
+ correspond very well to human intuition. We hope that the results of this paper
775
+ will encourage future work in using visual attention. We also expect the
776
+ modularity of the encoder-decoder approach combined with attention to have
777
+ useful applications in other domains.
778
+
779
+ \section*{Acknowledgments}
780
+ \label{sec:ack}
781
+
782
+ The authors would like to thank the developers of Theano~\citep{Bergstra2010,
783
+ Bastien2012}. We acknowledge the support of the following
784
+ organizations for research funding and computing support: the Nuance Foundation, NSERC, Samsung,
785
+ Calcul Qu\'{e}bec, Compute Canada, the Canada Research Chairs and CIFAR. The authors
786
+ would also like to thank Nitish Srivastava for assistance with his ConvNet
787
+ package as well as preparing the Oxford convolutional network and Relu Patrascu
788
+ for helping with numerous infrastructure related problems.
789
+
790
+
791
+
792
+
793
+ \small
794
+ \bibliography{capgen}
795
+ \bibliographystyle{icml2015}
796
+ \newpage
797
+ \appendix
798
+ \normalsize
799
+ \onecolumn
800
+ \section{Appendix}
801
+ \label{section:appendix}
802
+ Visualizations from our ``hard'' (a) and ``soft'' (b) attention
803
+ model. \textit{White} indicates the regions where the model roughly attends to (see section \ref{section:viz}).
804
+ \vskip -0.14in
805
+ \begin{figure*}[!ht]
806
+ \begin{center}
807
+ \centerline{\includegraphics[width=6.75in]{61_hard}}
808
+ \vskip -0.3in
809
+ (a) A man and a woman playing frisbee in a field.
810
+ \centerline{\includegraphics[width=6.75in]{61}}
811
+ \vskip -0.2in
812
+ (b) A woman is throwing a frisbee in a park.
813
+ \caption{}
814
+ \label{figure:im61}
815
+ \vskip -3in
816
+ \end{center}
817
+ \end{figure*}
818
+
819
+ \begin{figure*}[ht]
820
+ \vskip 0.2in
821
+ \begin{center}
822
+ \centerline{\includegraphics[width=7.3in]{1038_hard}}
823
+ \vskip -0.2in
824
+ (a) A giraffe standing in the field with trees.
825
+ \centerline{\includegraphics[width=7.3in]{1038}}
826
+ (b) A large white bird standing in a forest.
827
+ \caption{}
828
+ \label{figure:im1038}
829
+ \end{center}
830
+ \vskip -0.4in
831
+ \end{figure*}
832
+
833
+ \begin{figure*}[ht]
834
+ \begin{center}
835
+ \centerline{\includegraphics[width=7in]{570_hard}}
836
+ \vskip -0.2in
837
+ (a) A dog is laying on a bed with a book.
838
+ \centerline{\includegraphics[width=7in]{570}}
839
+ (b) A dog is standing on a hardwood floor.
840
+ \caption{}
841
+ \label{figure:im570}
842
+ \end{center}
843
+ \vskip -0.4in
844
+ \end{figure*}
845
+
846
+ \begin{figure*}[ht]
847
+ \begin{center}
848
+ \centerline{\includegraphics[width=7in]{352_hard}}
849
+ \vskip -0.3in
850
+ (a) A woman is holding a donut in his hand.
851
+ \centerline{\includegraphics[width=7in]{352}}
852
+ \vskip -0.4in
853
+ (b) A woman holding a clock in her hand.
854
+ \caption{}
855
+ \label{figure:im352}
856
+ \end{center}
857
+ \vskip -0.4in
858
+ \end{figure*}
859
+
860
+ \begin{figure*}[ht]
861
+ \begin{center}
862
+ \vskip -0.1in
863
+ \centerline{\includegraphics[width=8in]{1304_hard}}
864
+ \vskip -0.5in
865
+ (a) A stop sign with a stop sign on it.
866
+ \centerline{\includegraphics[width=9in]{1304}}
867
+ \vskip -0.2in
868
+ (b) A stop sign is on a road with a mountain in the background.
869
+ \caption{}
870
+ \label{figure:im1304}
871
+ \end{center}
872
+ \vskip -0.4in
873
+ \end{figure*}
874
+
875
+ \begin{figure*}[ht]
876
+ \begin{center}
877
+ \centerline{\includegraphics[width=7.5in]{1066_hard}}
878
+ \vskip -0.5in
879
+ (a) A man in a suit and a hat holding a remote control.
880
+ \centerline{\includegraphics[width=7.5in]{1066}}
881
+ \vskip -0.2in
882
+ (b) A man wearing a hat and a hat on a skateboard.
883
+ \caption{}
884
+ \label{figure:im1066}
885
+ \end{center}
886
+ \vskip -0.7in
887
+ \end{figure*}
888
+
889
+ \begin{figure*}[ht]
890
+ \vskip 0.2in
891
+ \begin{center}
892
+ \centerline{\includegraphics[width=7.5in]{120_hard}}
893
+ \vskip -0.4in
894
+ (a) A little girl sitting on a couch with a teddy bear.
895
+ \centerline{\includegraphics[width=7.5in]{120}}
896
+ \vskip -0.2in
897
+ (b) A little girl sitting on a bed with a teddy bear.
898
+ \label{figure:im120}
899
+ \end{center}
900
+ \vskip -0.4in
901
+ \end{figure*}
902
+
903
+ \begin{figure*}[ht]
904
+ \begin{center}
905
+ \centerline{\includegraphics[width=7in]{3884_hard}}
906
+ \vskip -0.4in
907
+ (a) A man is standing on a beach with a surfboard.
908
+ \centerline{\includegraphics[width=7in]{861_3884}}
909
+ \vskip -0.2in
910
+ (b) A person is standing on a beach with a surfboard.
911
+ \label{figure:im861_3884}
912
+ \end{center}
913
+ \vskip -0.8in
914
+ \end{figure*}
915
+
916
+ \begin{figure*}[ht]
917
+ \begin{center}
918
+ \centerline{\includegraphics[width=7.5in]{1322_hard}}
919
+ \vskip -0.4in
920
+ (a) A man and a woman riding a boat in the water.
921
+ \centerline{\includegraphics[width=7.5in]{1322_1463_3861_4468}}
922
+ \vskip -0.4in
923
+ (b) A group of people sitting on a boat in the water.
924
+ \caption{}
925
+ \label{figure:im1322_1463_3861_4468}
926
+ \end{center}
927
+ \vskip -0.4in
928
+ \end{figure*}
929
+
930
+ \begin{figure*}[ht]
931
+ \begin{center}
932
+ \centerline{\includegraphics[width=8in]{462_hard}}
933
+ \vskip -0.5in
934
+ (a) A man is standing in a market with a large amount of food.
935
+ \centerline{\includegraphics[width=8in]{462_1252}}
936
+ \vskip -0.4in
937
+ (b) A woman is sitting at a table with a large pizza.
938
+ \caption{}
939
+ \label{figure:im462_1252}
940
+ \end{center}
941
+ \vskip -0.4in
942
+ \end{figure*}
943
+
944
+ \begin{figure*}[ht]
945
+ \begin{center}
946
+ \centerline{\includegraphics[width=7.3in]{4837_hard}}
947
+ \vskip -0.3in
948
+ (a) A giraffe standing in a field with trees.
949
+ \centerline{\includegraphics[width=7.3in]{1300_4837}}
950
+ \vskip -0.4in
951
+ (b) A giraffe standing in a forest with trees in the background.
952
+ \caption{}
953
+ \label{figure:im1300_4837}
954
+ \end{center}
955
+ \vskip -0.4in
956
+ \end{figure*}
957
+
958
+ \begin{figure*}[ht]
959
+ \begin{center}
960
+ \centerline{\includegraphics[width=7.5in]{1228_hard}}
961
+ \vskip -0.2in
962
+ (a) A group of people standing next to each other.
963
+ \centerline{\includegraphics[width=7.5in]{1228}}
964
+ \vskip -0.4in
965
+ (b) A man is talking on his cell phone while another man watches.
966
+ \caption{}
967
+ \label{figure:im1228}
968
+ \end{center}
969
+ \vskip -0.4in
970
+ \end{figure*}
971
+
972
+ \end{document}
papers/1502/1502.04681.tex ADDED
@@ -0,0 +1,1271 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass{article}
2
+ \pdfoutput=1
3
+ \usepackage{times}
4
+ \usepackage{graphicx} \usepackage{subfigure}
5
+ \usepackage{natbib}
6
+ \usepackage{algorithm}
7
+ \usepackage{algorithmic}
8
+ \usepackage{hyperref}
9
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
10
+
11
+ \usepackage[accepted]{icml2015_copy}
12
+ \usepackage{array}
13
+ \usepackage{booktabs}
14
+ \usepackage{tikz}
15
+ \newcommand{\ii}{{\bf i}}
16
+ \newcommand{\cc}{{\bf c}}
17
+ \newcommand{\oo}{{\bf o}}
18
+ \newcommand{\ff}{{\bf f}}
19
+ \newcommand{\hh}{{\bf h}}
20
+ \newcommand{\xx}{{\bf x}}
21
+ \newcommand{\ww}{{\bf w}}
22
+ \newcommand{\WW}{{\bf W}}
23
+ \newcommand{\zz}{{\bf z}}
24
+ \newcommand{\vv}{{\bf v}}
25
+ \newcommand{\bb}{{\bf b}}
26
+ \newcommand{\phivec}{{\boldsymbol \phi}}
27
+ \newcommand{\deriv}{\mathrm{d}}
28
+ \newcommand{\Figref}[1]{Fig.~\ref{#1}}
29
+ \newcommand{\Tabref}[1]{Table~\ref{#1}}
30
+ \newcommand{\Secref}[1]{Sec.~\ref{#1}}
31
+ \newcommand{\Eqref}[1]{Eq.~\ref{#1}} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
32
+ \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
33
+ \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
34
+
35
+ \icmltitlerunning{Unsupervised Learning with LSTMs}
36
+
37
+ \begin{document}
38
+ \twocolumn[
39
+ \icmltitle{Unsupervised Learning of Video Representations using LSTMs}
40
+
41
+ \icmlauthor{Nitish Srivastava}{nitish@cs.toronto.edu}
42
+ \icmlauthor{Elman Mansimov}{emansim@cs.toronto.edu}
43
+ \icmlauthor{Ruslan Salakhutdinov}{rsalakhu@cs.toronto.edu}
44
+ \icmladdress{University of Toronto,
45
+ 6 Kings College Road, Toronto, ON M5S 3G4 CANADA}
46
+ \icmlkeywords{unsupervised learning, deep learning, sequence learning, video, action recognition.}
47
+ \vskip 0.3in
48
+ ]
49
+
50
+ \begin{abstract}
51
+
52
+ We use multilayer Long Short Term Memory (LSTM) networks to learn
53
+ representations of video sequences. Our model uses an encoder LSTM to map an
54
+ input sequence into a fixed length representation. This representation is
55
+ decoded using single or multiple decoder LSTMs to perform different tasks, such
56
+ as reconstructing the input sequence, or predicting the future sequence. We
57
+ experiment with two kinds of input sequences -- patches of image pixels and
58
+ high-level representations (``percepts'') of video frames extracted using a
59
+ pretrained convolutional net. We explore different design choices such as
60
+ whether the decoder LSTMs should condition on the generated output. We analyze the
61
+ outputs of the model qualitatively to see how well the model can extrapolate the
62
+ learned video representation into the future and into the past. We try to
63
+ visualize and interpret the learned features. We stress test the model by
64
+ running it on longer time scales and on out-of-domain data. We further evaluate
65
+ the representations by finetuning them for a supervised learning problem --
66
+ human action recognition on the UCF-101 and HMDB-51 datasets. We show that the
67
+ representations help improve classification accuracy, especially when there are
68
+ only a few training examples. Even models pretrained on unrelated datasets (300
69
+ hours of YouTube videos) can help action recognition performance.
70
+ \end{abstract}
71
+
72
+ \section{Introduction}
73
+ \label{submission}
74
+
75
+ Understanding temporal sequences is important for solving many problems in the
76
+ AI domain. Recently, recurrent neural networks using the Long Short Term Memory
77
+ (LSTM) architecture \citep{Hochreiter} have been used successfully to perform various supervised
78
+ sequence learning tasks, such as speech recognition \citep{graves_lstm}, machine
79
+ translation \citep{IlyaMT, ChoMT}, and caption generation for images
80
+ \citep{OriolCaption}. They have also been applied on videos for recognizing
81
+ actions and generating natural language descriptions \citep{BerkeleyVideo}. A
82
+ general sequence to sequence learning framework was described by \citet{IlyaMT}
83
+ in which a recurrent network is used to encode a sequence into a fixed length
84
+ representation, and then another recurrent network is used to decode a sequence
85
+ out of that representation. In this work, we apply and extend this framework to
86
+ learn representations of sequences of images. We choose to work in the
87
+ \emph{unsupervised} setting where we only have access to a dataset of unlabelled
88
+ videos.
89
+
90
+ Videos are an abundant and rich source of visual information and can be seen as
91
+ a window into the physics of the world we live in, showing us examples of what
92
+ constitutes objects, how objects move against backgrounds, what happens when
93
+ cameras move and how things get occluded. Being able to learn a representation
94
+ that disentangles these factors would help in making intelligent machines that
95
+ can understand and act in their environment. Additionally, learning good video
96
+ representations is essential for a number of useful tasks, such as recognizing
97
+ actions and gestures.
98
+
99
+ \subsection{Why Unsupervised Learning?}
100
+ Supervised learning has been extremely successful in learning good visual
101
+ representations that not only produce good results at the task they are trained
102
+ for, but also transfer well to other tasks and datasets. Therefore, it is
103
+ natural to extend the same approach to learning video representations. This has
104
+ led to research in 3D convolutional nets \citep{conv3d, C3D}, different temporal
105
+ fusion strategies \citep{KarpathyCVPR14} and exploring different ways of
106
+ presenting visual information to convolutional nets \citep{Simonyan14b}.
107
+ However, videos are much higher dimensional entities compared to single images.
108
+ Therefore, it becomes increasingly difficult to do credit assignment and learn long
109
+ range structure, unless we collect much more labelled data or do a lot of
110
+ feature engineering (for example computing the right kinds of flow features) to
111
+ keep the dimensionality low. The costly work of collecting more labelled data
112
+ and the tedious work of doing more clever engineering can go a long way in
113
+ solving particular problems, but this is ultimately unsatisfying as a machine
114
+ learning solution. This highlights the need for using unsupervised learning to
115
+ find and represent structure in videos. Moreover, videos have a lot of
116
+ structure in them (spatial and temporal regularities) which makes them
117
+ particularly well suited as a domain for building unsupervised learning models.
118
+
119
+ \subsection{Our Approach}
120
+ When designing any unsupervised learning model, it is crucial to have the right
121
+ inductive biases and choose the right objective function so that the learning
122
+ signal points the model towards learning useful features. In
123
+ this paper, we use the LSTM Encoder-Decoder framework to learn video
124
+ representations. The key inductive bias here is that the same operation must be
125
+ applied at each time step to propagate information to the next step. This
126
+ enforces the fact that the physics of the world remains the same, irrespective of
127
+ input. The same physics acting on any state, at any time, must produce the next
128
+ state. Our model works as follows.
129
+ The Encoder LSTM runs through a sequence of frames to come up
130
+ with a representation. This representation is then decoded through another LSTM
131
+ to produce a target sequence. We consider different choices of the target
132
+ sequence. One choice is to predict the same sequence as the input. The
133
+ motivation is similar to that of autoencoders -- we wish to capture all that is
134
+ needed to reproduce the input but at the same time go through the inductive
135
+ biases imposed by the model. Another option is to predict the future frames.
136
+ Here the motivation is to learn a representation that extracts all that is
137
+ needed to extrapolate the motion and appearance beyond what has been observed. These two
138
+ natural choices can also be combined. In this case, there are two decoder LSTMs
139
+ -- one that decodes the representation into the input sequence and another that
140
+ decodes the same representation to predict the future.
141
+
142
+ The inputs to the model can, in principle, be any representation of individual
143
+ video frames. However, for the purposes of this work, we limit our attention to
144
+ two kinds of inputs. The first is image patches. For this we use natural image
145
+ patches as well as a dataset of moving MNIST digits. The second is
146
+ high-level ``percepts" extracted by applying a convolutional net trained on
147
+ ImageNet. These percepts are the states of last (and/or second-to-last) layers of
148
+ rectified linear hidden states from a convolutional neural net model.
149
+
150
+ In order to evaluate the learned representations we qualitatively analyze the
151
+ reconstructions and predictions made by the model. For a more quantitative
152
+ evaluation, we use these LSTMs as initializations for the supervised task of
153
+ action recognition. If the unsupervised learning model comes up with useful
154
+ representations then the classifier should be able to perform better, especially
155
+ when there are only a few labelled examples. We find that this is indeed the
156
+ case.
157
+
158
+ \subsection{Related Work}
159
+
160
+ The first approaches to learning representations of videos in an unsupervised
161
+ way were based on ICA \citep{Hateren, HurriH03}. \citet{QuocISA} approached this
162
+ problem using multiple layers of Independent Subspace Analysis modules. Generative
163
+ models for understanding transformations between pairs of consecutive images are
164
+ also well studied \citep{memisevic_relational_pami, factoredBoltzmann, morphBM}.
165
+ This work was extended recently by \citet{NIPS2014_5549} to model longer
166
+ sequences.
167
+
168
+ Recently, \citet{RanzatoVideo} proposed a generative model for videos. The model
169
+ uses a recurrent neural network to predict the next frame or interpolate between
170
+ frames. In this work, the authors highlight the importance of choosing the right
171
+ loss function. It is argued that squared loss in input space is not the right
172
+ objective because it does not respond well to small distortions in input space.
173
+ The proposed solution is to quantize image patches into a large dictionary and
174
+ train the model to predict the identity of the target patch. This does solve
175
+ some of the problems of squared loss but it introduces an arbitrary dictionary
176
+ size into the picture and altogether removes the idea of patches being similar
177
+ or dissimilar to one another.
178
+ Designing an appropriate loss
179
+ function that respects our notion of visual similarity is a very hard problem
180
+ (in a sense, almost as hard as the modeling problem we want to solve in the
181
+ first place). Therefore, in this paper, we use the simple squared loss
182
+ objective function as a starting point and focus on designing an encoder-decoder
183
+ RNN architecture that can be used with any loss function.
184
+
185
+ \section{Model Description}
186
+
187
+ In this section, we describe several variants of our LSTM Encoder-Decoder model.
188
+ The basic unit of our network is the LSTM cell block.
189
+ Our implementation of LSTMs follows closely the one discussed by \citet{Graves13}.
190
+
191
+ \subsection{Long Short Term Memory}
192
+ In this section we briefly describe the LSTM unit which is the basic building block of
193
+ our model. The unit is shown in \Figref{fig:lstm} (reproduced from \citet{Graves13}).
194
+
195
+ \begin{figure}[ht]
196
+ \centering
197
+ \includegraphics[clip=true, trim=150 460 160 100,width=0.9\linewidth]{lstm_figure.pdf}
198
+ \caption{LSTM unit}
199
+ \label{fig:lstm}
200
+ \vspace{-0.2in}
201
+ \end{figure}
202
+
203
+ Each LSTM unit has a cell which has a state $c_t$ at time $t$. This cell can be
204
+ thought of as a memory unit. Access to this memory unit for reading or modifying
205
+ it is controlled through sigmoidal gates -- input gate $i_t$, forget gate $f_t$
206
+ and output gate $o_t$. The LSTM unit operates as follows. At each time step it
207
+ receives inputs from two external sources at each of the four terminals (the
208
+ three gates and the input). The first source is the current frame ${\xx_t}$.
209
+ The second source is the previous hidden states of all LSTM units in the same
210
+ layer $\hh_{t-1}$. Additionally, each gate has an internal source, the cell
211
+ state $c_{t-1}$ of its cell block. The links between a cell and its own gates
212
+ are called \emph{peephole} connections. The inputs coming from different sources
213
+ get added up, along with a bias. The gates are activated by passing their total
214
+ input through the logistic function. The total input at the input terminal is
215
+ passed through the tanh non-linearity. The resulting activation is multiplied by
216
+ the activation of the input gate. This is then added to the cell state after
217
+ multiplying the cell state by the forget gate's activation $f_t$. The final output
218
+ from the LSTM unit $h_t$ is computed by multiplying the output gate's activation
219
+ $o_t$ with the updated cell state passed through a tanh non-linearity. These
220
+ updates are summarized for a layer of LSTM units as follows
221
+ \vspace{-0.1in}
222
+ \begin{eqnarray*}
223
+ \ii_t &=& \sigma\left(W_{xi}\xx_t + W_{hi}\hh_{t-1} + W_{ci}\cc_{t-1} + \bb_i\right),\\
224
+ \ff_t &=& \sigma\left(W_{xf}\xx_t + W_{hf}\hh_{t-1} + W_{cf}\cc_{t-1} + \bb_f\right),\\
225
+ \cc_t &=& \ff_t \cc_{t-1} + \ii_t \tanh\left(W_{xc}\xx_t + W_{hc}\hh_{t-1} + \bb_c\right),\\
226
+ \oo_t &=& \sigma\left(W_{xo}\xx_t + W_{ho}\hh_{t-1} + W_{co}\cc_{t} + \bb_o \right),\\
227
+ \hh_t &=& \oo_t\tanh(\cc_t).
228
+ \end{eqnarray*}
229
+ \vspace{-0.3in}
230
+
231
+ Note that all $W_{c\bullet}$ matrices are diagonal, whereas the rest are dense.
232
+ The key advantage of using an LSTM unit over a traditional neuron in an RNN is
233
+ that the cell state in an LSTM unit \emph{sums} activities over time. Since derivatives
234
+ distribute over sums, the error derivatives don't vanish quickly as they get sent back
235
+ into time. This makes it easy to do credit assignment over long sequences and
236
+ discover long-range features.
237
+
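+ For illustration, a NumPy sketch of this layer update with elementwise
+ (diagonal) peephole weights; the sizes and names are toy values chosen for
+ exposition, not our implementation:
+ \begin{verbatim}
+ import numpy as np
+ 
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+ 
+ def lstm_step(x, h_prev, c_prev, p):
+     """One step for a layer of LSTM units with peephole connections.
+     Dense weights W_x*, W_h* and diagonal peepholes w_c* (stored as vectors)."""
+     i = sigmoid(p['Wxi'].dot(x) + p['Whi'].dot(h_prev) + p['wci']*c_prev + p['bi'])
+     f = sigmoid(p['Wxf'].dot(x) + p['Whf'].dot(h_prev) + p['wcf']*c_prev + p['bf'])
+     c = f*c_prev + i*np.tanh(p['Wxc'].dot(x) + p['Whc'].dot(h_prev) + p['bc'])
+     o = sigmoid(p['Wxo'].dot(x) + p['Who'].dot(h_prev) + p['wco']*c + p['bo'])
+     return o*np.tanh(c), c
+ 
+ nin, nh = 6, 8
+ rng = np.random.RandomState(0)
+ p = {k: 0.1*rng.randn(nh, nin) for k in ('Wxi', 'Wxf', 'Wxc', 'Wxo')}
+ p.update({k: 0.1*rng.randn(nh, nh) for k in ('Whi', 'Whf', 'Whc', 'Who')})
+ p.update({k: 0.1*rng.randn(nh) for k in ('wci', 'wcf', 'wco')})
+ p.update({k: np.zeros(nh) for k in ('bi', 'bf', 'bc', 'bo')})
+ h, c = lstm_step(rng.randn(nin), np.zeros(nh), np.zeros(nh), p)
+ \end{verbatim}
+ 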
238
+ \subsection{LSTM Autoencoder Model}
239
+
240
+ In this section, we describe a model that uses Recurrent Neural Nets (RNNs) made
241
+ of LSTM units to do unsupervised learning. The model consists of two RNNs --
242
+ the encoder LSTM and the decoder LSTM as shown in \Figref{fig:autoencoder}. The
243
+ input to the model is a sequence of vectors (image patches or features). The
244
+ encoder LSTM reads in this sequence. After the last input has been read, the
245
+ decoder LSTM takes over and outputs a prediction for the target sequence. The
246
+ target sequence is same as the input sequence, but in reverse order. Reversing
247
+ the target sequence makes the optimization easier because the model can get off
248
+ the ground by looking at low range correlations. This is also inspired by how
249
+ lists are represented in LISP. The encoder can be seen as creating a list by
250
+ applying the {\tt cons} function on the previously constructed list and the new
251
+ input. The decoder essentially unrolls this list, with the hidden to output
252
+ weights extracting the element at the top of the list ({\tt car} function) and
253
+ the hidden to hidden weights extracting the rest of the list ({\tt cdr}
254
+ function). Therefore, the first element out is the last element in.
255
+
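+ A toy sketch of how an input sequence and its reversed reconstruction target
+ could be paired (purely illustrative; not our data pipeline):
+ \begin{verbatim}
+ import numpy as np
+ 
+ def reconstruction_pair(frames):
+     """Encoder input and decoder target for the LSTM Autoencoder:
+     the target is the input sequence in reverse order (last in, first out)."""
+     enc_in = list(frames)          # v_1, v_2, ..., v_T
+     dec_target = enc_in[::-1]      # v_T, ..., v_2, v_1
+     return enc_in, dec_target
+ 
+ frames = [np.full(4, t, dtype=float) for t in range(1, 4)]   # v_1, v_2, v_3
+ enc_in, target = reconstruction_pair(frames)
+ \end{verbatim}
+ 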
256
+ \begin{figure}[t]
257
+ \resizebox{0.9\linewidth}{!}{\makeatletter
258
+ \ifx\du\undefined
259
+ \newlength{\du}
260
+ \fi
261
+ \setlength{\du}{\unitlength}
262
+ \ifx\spacing\undefined
263
+ \newlength{\spacing}
264
+ \fi
265
+ \setlength{\spacing}{60\unitlength}
266
+ \begin{tikzpicture}
267
+ \pgfsetlinewidth{0.5\du}
268
+ \pgfsetmiterjoin
269
+ \pgfsetbuttcap
270
+
271
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v1) at (0\spacing, 0) {$v_1$};
272
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v2) at (\spacing, 0) {$v_2$};
273
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v3) at (2\spacing, 0) {$v_3$};
274
+
275
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v5) at (4\spacing, 0) {$v_3$};
276
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v6) at (5\spacing, 0) {$v_2$};
277
+
278
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v5f) at (4\spacing, 0) {$v_3$};
279
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v6f) at (5\spacing, 0) {$v_2$};
280
+
281
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h1) at (0\spacing, \spacing) {};
282
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h2) at (\spacing, \spacing) {};
283
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h3) at (2\spacing, \spacing) {};
284
+
285
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h4) at (3\spacing, \spacing) {};
286
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h5) at (4\spacing, \spacing) {};
287
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h6) at (5\spacing, \spacing) {};
288
+
289
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v4r) at (3\spacing, 2\spacing) {$\hat{v_3}$};
290
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v5r) at (4\spacing, 2\spacing) {$\hat{v_2}$};
291
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v6r) at (5\spacing, 2\spacing) {$\hat{v_1}$};
292
+
293
+ \node[anchor=center] at (1.6\spacing, 2\spacing) {Learned};
294
+ \node[anchor=center] (p1) at (1.6\spacing, 1.8\spacing) {Representation};
295
+ \node[anchor=center] (p2) at (2\spacing, \spacing) {};
296
+
297
+
298
+ \draw[->] (v1) -- (h1);
299
+ \draw[->] (v2) -- (h2);
300
+ \draw[->] (v3) -- (h3);
301
+ \draw[->] (v5) -- (h5);
302
+ \draw[->] (v6) -- (h6);
303
+
304
+ \draw[->] (h1) -- node[above] {$W_1$} (h2);
305
+ \draw[->] (h2) -- node[above] {$W_1$} (h3);
306
+ \draw[->] (h3) -- node[above] {copy} (h4);
307
+ \draw[->] (h4) -- node[above] {$W_2$} (h5);
308
+ \draw[->] (h5) -- node[above] {$W_2$} (h6);
309
+
310
+ \draw[->] (h4) -- (v4r);
311
+ \draw[->] (h5) -- (v5r);
312
+ \draw[->] (h6) -- (v6r);
313
+
314
+ \draw[-, ultra thick] (p1) -- (p2);
315
+ \end{tikzpicture}
316
+ }
317
+ \caption{\small LSTM Autoencoder Model}
318
+ \label{fig:autoencoder}
319
+ \vspace{-0.1in}
320
+ \end{figure}
321
+
322
+ The decoder can be of two kinds -- conditional or unconditioned.
323
+ A conditional decoder receives the last generated output frame as
324
+ input, i.e., the dotted input in \Figref{fig:autoencoder} is present.
325
+ An unconditioned decoder does not receive that input. This is discussed in more
326
+ detail in \Secref{sec:condvsuncond}. \Figref{fig:autoencoder} shows a single
327
+ layer LSTM Autoencoder. The architecture can be extend to multiple layers by
328
+ stacking LSTMs on top of each other.
329
+
330
+ \emph{Why should this learn good features?}\\
331
+ The state of the encoder LSTM after the last input has been read is the
332
+ representation of the input video. The decoder LSTM is being asked to
333
+ reconstruct back the input sequence from this representation. In order to do so,
334
+ the representation must retain information about the appearance of the objects
335
+ and the background as well as the motion contained in the video.
336
+ However, an important question for any autoencoder-style model is what prevents
337
+ it from learning an identity mapping and effectively copying the input to the
338
+ output. In that case all the information about the input would still be present
339
+ but the representation would be no better than the input. There are two factors
340
+ that control this behaviour. First, the fact that there are only a fixed number
341
+ of hidden units makes it unlikely that the model can learn trivial mappings for
342
+ arbitrary length input sequences. Second, the same LSTM operation is used to
343
+ decode the representation recursively. This means that the same dynamics must be
344
+ applied on the representation at any stage of decoding. This further prevents
345
+ the model from learning an identity mapping.
346
+
347
+ \subsection{LSTM Future Predictor Model}
348
+
349
+ Another natural unsupervised learning task for sequences is predicting the
350
+ future. This is the approach used in language models for modeling sequences of
351
+ words. The design of the Future Predictor Model is the same as that of the
352
+ Autoencoder Model, except that the decoder LSTM in this case predicts frames of
353
+ the video that come after the input sequence (\Figref{fig:futurepredictor}).
354
+ \citet{RanzatoVideo} use a similar model but predict only the next frame at each
355
+ time step. This model, on the other hand, predicts a long sequence into the
356
+ future. Here again we can consider two variants of the decoder -- conditional
357
+ and unconditioned.
358
+
359
+ \emph{Why should this learn good features?}\\
360
+ In order to predict the next few frames correctly, the model needs information
361
+ about which objects and background are present and how they are moving so that
362
+ the motion can be extrapolated. The hidden state coming out from the
363
+ encoder will try to capture this information. Therefore, this state can be seen as
364
+ a representation of the input sequence.
365
+
366
+ \begin{figure}[t]
367
+ \resizebox{0.9\linewidth}{!}{\makeatletter
368
+ \ifx\du\undefined
369
+ \newlength{\du}
370
+ \fi
371
+ \setlength{\du}{\unitlength}
372
+ \ifx\spacing\undefined
373
+ \newlength{\spacing}
374
+ \fi
375
+ \setlength{\spacing}{60\unitlength}
376
+ \begin{tikzpicture}
377
+ \pgfsetlinewidth{0.5\du}
378
+ \pgfsetmiterjoin
379
+ \pgfsetbuttcap
380
+
381
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v1) at (0\spacing, 0) {$v_1$};
382
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v2) at (\spacing, 0) {$v_2$};
383
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v3) at (2\spacing, 0) {$v_3$};
384
+
385
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v5) at (4\spacing, 0) {$v_4$};
386
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v6) at (5\spacing, 0) {$v_5$};
387
+
388
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h1) at (0\spacing, \spacing) {};
389
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h2) at (\spacing, \spacing) {};
390
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h3) at (2\spacing, \spacing) {};
391
+
392
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h4) at (3\spacing, \spacing) {};
393
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h5) at (4\spacing, \spacing) {};
394
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h6) at (5\spacing, \spacing) {};
395
+
396
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v4r) at (3\spacing, 2\spacing) {$\hat{v_4}$};
397
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v5r) at (4\spacing, 2\spacing) {$\hat{v_5}$};
398
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v6r) at (5\spacing, 2\spacing) {$\hat{v_6}$};
399
+
400
+ \node[anchor=center] at (1.6\spacing, 2\spacing) {Learned};
401
+ \node[anchor=center] (p1) at (1.6\spacing, 1.8\spacing) {Representation};
402
+ \node[anchor=center] (p2) at (2\spacing, \spacing) {};
403
+
404
+
405
+ \draw[->] (v1) -- (h1);
406
+ \draw[->] (v2) -- (h2);
407
+ \draw[->] (v3) -- (h3);
408
+
409
+ \draw[->] (v5) -- (h5);
410
+ \draw[->] (v6) -- (h6);
411
+
412
+ \draw[->] (h1) -- node[above] {$W_1$} (h2);
413
+ \draw[->] (h2) -- node[above] {$W_1$} (h3);
414
+ \draw[->] (h3) -- node[above] {copy} (h4);
415
+ \draw[->] (h4) -- node[above] {$W_2$} (h5);
416
+ \draw[->] (h5) -- node[above] {$W_2$} (h6);
417
+
418
+ \draw[->] (h4) -- (v4r);
419
+ \draw[->] (h5) -- (v5r);
420
+ \draw[->] (h6) -- (v6r);
421
+ \draw[-, ultra thick] (p1) -- (p2);
422
+ \end{tikzpicture}
423
+ }
424
+ \caption{\small LSTM Future Predictor Model}
425
+ \label{fig:futurepredictor}
426
+ \vspace{-0.1in}
427
+ \end{figure}
428
+
429
+ \subsection{Conditional Decoder}
430
+ \label{sec:condvsuncond}
431
+
432
+ For each of these two models, we can consider two possibilities -- one in which
433
+ the decoder LSTM is conditioned on the last generated frame and the other in
434
+ which it is not. In the experimental section, we explore these choices
435
+ quantitatively. Here we briefly discuss arguments for and against a conditional
436
+ decoder. A strong argument in favour of using a conditional decoder is that it
437
+ allows the decoder to model multiple modes in the target sequence distribution.
438
+ Without that, we would end up averaging the multiple modes in the low-level
439
+ input space. However, this is an issue only if we expect multiple modes in the
440
+ target sequence distribution. For the LSTM Autoencoder, there is only one
441
+ correct target and hence a unimodal target distribution. But for the LSTM Future
442
+ Predictor there is a possibility of multiple targets given an input because even
443
+ if we assume a deterministic universe, everything needed to predict the future
444
+ will not necessarily be observed in the input.
445
+
446
+ There is also an argument against using a conditional decoder from the
447
+ optimization point-of-view. There are strong short-range correlations in
448
+ video data; for example, most of the content of a frame is the same as that of the previous
449
+ one. If the decoder was given access to the last few frames while generating a
450
+ particular frame at training time, it would find it easy to pick up on these
451
+ correlations. There would only be a very small gradient that tries to fix up the
452
+ extremely subtle errors that require long term knowledge about the input
453
+ sequence. In an unconditioned decoder, this input is removed and the model is
454
+ forced to look for information deep inside the encoder.
455
+
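+ To make the distinction concrete, the following sketch (illustrative NumPy code
+ with assumed shapes and a stand-in $\tanh$ cell, not the model implementation)
+ isolates the single line that changes between the two decoder variants:
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def decode(h, x0, n_steps, dec, readout, conditional):
+     # Roll out a decoder from the encoder state h. With conditional=True
+     # each generated frame is fed back as the next input; with
+     # conditional=False the decoder input is held at zero throughout.
+     Wx, Wh, b = dec
+     W_out, b_out = readout
+     x, preds = x0, []
+     for _ in range(n_steps):
+         h = np.tanh(x @ Wx + h @ Wh + b)   # stand-in for the LSTM step
+         y = h @ W_out + b_out
+         preds.append(y)
+         x = y if conditional else np.zeros_like(x0)
+     return np.array(preds)
+ \end{verbatim}}
+ The conditional variant makes it easy for the decoder to exploit the short-range
+ correlations discussed above, which is precisely the optimization concern raised
+ against it.
+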
456
+ \subsection{A Composite Model}
457
+ \begin{figure}[t]
458
+ \resizebox{0.9\linewidth}{!}{\makeatletter
459
+ \ifx\du\undefined
460
+ \newlength{\du}
461
+ \fi
462
+ \setlength{\du}{\unitlength}
463
+ \ifx\spacing\undefined
464
+ \newlength{\spacing}
465
+ \fi
466
+ \setlength{\spacing}{60\unitlength}
467
+ \begin{tikzpicture}
468
+ \pgfsetlinewidth{0.5\du}
469
+ \pgfsetmiterjoin
470
+ \pgfsetbuttcap
471
+
472
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v1) at (0\spacing, 1.5\spacing) {$v_1$};
473
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v2) at (\spacing, 1.5\spacing) {$v_2$};
474
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v3) at (2\spacing, 1.5\spacing) {$v_3$};
475
+
476
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v5) at (4\spacing, 3\spacing) {$v_3$};
477
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v6) at (5\spacing, 3\spacing) {$v_2$};
478
+
479
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v5f) at (4\spacing, 0) {$v_4$};
480
+ \node[rectangle, dotted, draw=black, minimum width=10\du, minimum height=50\du] (v6f) at (5\spacing, 0) {$v_5$};
481
+
482
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h1) at (0\spacing, 2.5\spacing) {};
483
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h2) at (\spacing, 2.5\spacing) {};
484
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h3) at (2\spacing, 2.5\spacing) {};
485
+
486
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h4) at (3\spacing, 4\spacing) {};
487
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h5) at (4\spacing, 4\spacing) {};
488
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h6) at (5\spacing, 4\spacing) {};
489
+
490
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h4f) at (3\spacing, \spacing) {};
491
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h5f) at (4\spacing, \spacing) {};
492
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h6f) at (5\spacing, \spacing) {};
493
+
494
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v4r) at (3\spacing, 5\spacing) {$\hat{v_3}$};
495
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v5r) at (4\spacing, 5\spacing) {$\hat{v_2}$};
496
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v6r) at (5\spacing, 5\spacing) {$\hat{v_1}$};
497
+
498
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v4rf) at (3\spacing, 2\spacing) {$\hat{v_4}$};
499
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v5rf) at (4\spacing, 2\spacing) {$\hat{v_5}$};
500
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v6rf) at (5\spacing, 2\spacing) {$\hat{v_6}$};
501
+
502
+
503
+ \node[anchor=center] at (\spacing, 0.8\spacing) {Sequence of Input Frames};
504
+ \node[anchor=center] at (2.5\spacing, 0\spacing) {Future Prediction};
505
+ \node[anchor=center] at (1.6\spacing, 5\spacing) {Input Reconstruction};
506
+ \node[anchor=center] at (1.6\spacing, 3.5\spacing) {Learned};
507
+ \node[anchor=center] (p1) at (1.6\spacing, 3.3\spacing) {Representation};
508
+ \node[anchor=center] (p2) at (2\spacing, 2.5\spacing) {};
509
+
510
+ \draw[-, ultra thick] (p1) -- (p2);
511
+
512
+
513
+ \draw[->] (v1) -- (h1);
514
+ \draw[->] (v2) -- (h2);
515
+ \draw[->] (v3) -- (h3);
516
+ \draw[->] (v5) -- (h5);
517
+ \draw[->] (v6) -- (h6);
518
+ \draw[->] (v5f) -- (h5f);
519
+ \draw[->] (v6f) -- (h6f);
520
+
521
+ \draw[->] (h1) -- node[above] {$W_1$} (h2);
522
+ \draw[->] (h2) -- node[above] {$W_1$} (h3);
523
+ \draw[->] (h3) -- node[above] {copy} (h4);
524
+ \draw[->] (h3) -- node[above] {copy} (h4f);
525
+ \draw[->] (h4) -- node[above] {$W_2$} (h5);
526
+ \draw[->] (h5) -- node[above] {$W_2$} (h6);
527
+ \draw[->] (h4f) -- node[above] {$W_3$} (h5f);
528
+ \draw[->] (h5f) -- node[above] {$W_3$} (h6f);
529
+
530
+ \draw[->] (h4) -- (v4r);
531
+ \draw[->] (h5) -- (v5r);
532
+ \draw[->] (h6) -- (v6r);
533
+ \draw[->] (h4f) -- (v4rf);
534
+ \draw[->] (h5f) -- (v5rf);
535
+ \draw[->] (h6f) -- (v6rf);
536
+ \end{tikzpicture}
537
+ }
538
+ \caption{\small The Composite Model: The LSTM predicts the future as well as the input sequence.}
539
+ \label{fig:modelcombo}
540
+ \vspace{-0.2in}
541
+ \end{figure}
542
+
543
+
544
+ The two tasks -- reconstructing the input and predicting the future -- can be
545
+ combined to create a composite model as shown in \Figref{fig:modelcombo}. Here
546
+ the encoder LSTM is asked to come up with a state from which we can \emph{both} predict
547
+ the next few frames as well as reconstruct the input.
548
+
549
+ This composite model tries to overcome the shortcomings that each model suffers
550
+ on its own. A high-capacity autoencoder would suffer from the tendency to learn
551
+ trivial representations that just memorize the inputs. However, this
552
+ memorization is not useful at all for predicting the future. Therefore, the
553
+ composite model cannot just memorize information. On the other hand, the future
554
+ predictor suffers form the tendency to store information only about the last few
555
+ frames since those are most important for predicting the future, i.e., in order
556
+ to predict $v_{t}$, the frames $\{v_{t-1}, \ldots, v_{t-k}\}$ are much more
557
+ important than $v_0$, for some small value of $k$. Therefore the representation
558
+ at the end of the encoder will have forgotten about a large part of the input.
559
+ But if we ask the model to also predict \emph {all} of the input sequence, then
560
+ it cannot just pay attention to the last few frames.
561
+
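+ A sketch of the composite objective (illustrative code only: a $\tanh$ cell
+ replaces the LSTM, squared error stands in for the loss, which is cross entropy
+ in the MNIST experiments below, and all shapes are assumed):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def composite_loss(frames, future, enc, dec_rec, dec_pred, readouts):
+     # One encoder state drives both input reconstruction and future
+     # prediction; the training loss is the sum of the two terms.
+     def step(x, h, Wx, Wh, b):
+         return np.tanh(x @ Wx + h @ Wh + b)  # stand-in for the LSTM step
+
+     Wx_e, Wh_e, b_e = enc
+     h = np.zeros(Wh_e.shape[0])
+     for x in frames:                         # encode v_1 .. v_T
+         h = step(x, h, Wx_e, Wh_e, b_e)
+
+     def rollout(h, n, dec, readout):
+         Wx, Wh, b = dec
+         W_out, b_out = readout
+         x, outs = np.zeros(frames.shape[1]), []
+         for _ in range(n):
+             h = step(x, h, Wx, Wh, b)
+             outs.append(h @ W_out + b_out)
+         return np.array(outs)
+
+     recon = rollout(h, len(frames), dec_rec, readouts[0])
+     pred = rollout(h, len(future), dec_pred, readouts[1])
+     # Reconstruct the input in reverse order, as in the figure.
+     return ((recon - frames[::-1]) ** 2).sum() + ((pred - future) ** 2).sum()
+ \end{verbatim}}
+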
562
+ \section{Experiments}
563
+ \begin{figure*}[ht]
564
+ \centering
565
+
566
+ \includegraphics[width=\textwidth]{mnist_input.pdf}
567
+
568
+ \vspace{-0.18in}
569
+ \includegraphics[width=\textwidth]{mnist_input_2.pdf}
570
+
571
+ \vspace{0.1in}
572
+ \includegraphics[width=\textwidth]{mnist_one_layer.pdf}
573
+
574
+ \vspace{-0.18in}
575
+ \includegraphics[width=\textwidth]{mnist_one_layer_2.pdf}
576
+
577
+ \vspace{0.1in}
578
+ \includegraphics[width=\textwidth]{mnist_two_layer.pdf}
579
+
580
+ \vspace{-0.18in}
581
+ \includegraphics[width=\textwidth]{mnist_two_layer_2.pdf}
582
+
583
+ \vspace{0.1in}
584
+ \includegraphics[width=\textwidth]{mnist_two_layer_cond.pdf}
585
+
586
+ \vspace{-0.18in}
587
+ \includegraphics[width=\textwidth]{mnist_two_layer_cond_2.pdf}
588
+ \small
589
+ \begin{picture}(0, 0)(-245, -200)
590
+ \put(-142, 62){Ground Truth Future}
591
+ \put(-50, 64) {\vector(1, 0){40}}
592
+ \put(-150, 64) {\vector(-1, 0){90}}
593
+ \put(-397, 62){Input Sequence}
594
+ \put(-400, 64) {\vector(-1, 0){80}}
595
+ \put(-335, 64) {\vector(1, 0){90}}
596
+
597
+ \put(-142, -2){Future Prediction}
598
+ \put(-50, -1) {\vector(1, 0){40}}
599
+ \put(-150, -1) {\vector(-1, 0){90}}
600
+ \put(-397, -2){Input Reconstruction}
601
+ \put(-400, -1) {\vector(-1, 0){80}}
602
+ \put(-305, -1) {\vector(1, 0){60}}
603
+
604
+ \put(-290, -60){One Layer Composite Model}
605
+ \put(-290, -125){Two Layer Composite Model}
606
+ \put(-350, -190){Two Layer Composite Model with a Conditional Future Predictor}
607
+
608
+ \end{picture}
609
+ \caption{\small Reconstruction and future prediction obtained from the Composite Model
610
+ on a dataset of moving MNIST digits.}
611
+ \label{fig:bouncing_mnist}
612
+ \vspace{-0.2in}
613
+ \end{figure*}
614
+
615
+
616
+ We design experiments to accomplish the following objectives:
617
+ \begin{itemize}
618
+ \vspace{-0.1in}
619
+ \item Get a qualitative understanding of what the LSTM learns to do.
620
+ \item Measure the benefit of initializing networks for supervised learning tasks
621
+ with the weights found by unsupervised learning, especially with very few training examples.
622
+ \item Compare the different proposed models -- Autoencoder, Future Predictor and
623
+ Composite models and their conditional variants.
624
+ \item Compare with state-of-the-art action recognition benchmarks.
625
+ \end{itemize}
626
+
627
+
628
+ \subsection{Datasets}
629
+ We use the UCF-101 and HMDB-51 datasets for supervised tasks.
630
+ The UCF-101 dataset \citep{UCF101} contains 13,320 videos with an average length of
631
+ 6.2 seconds belonging to 101 different action categories. The dataset has 3
632
+ standard train/test splits with the training set containing around 9,500 videos
633
+ in each split (the rest are test).
634
+ The HMDB-51 dataset \citep{HMDB} contains 5100 videos belonging to 51 different
635
+ action categories. The mean length of the videos is 3.2 seconds. This dataset also has 3
636
+ train/test splits, with 3,570 videos in the training set and the rest in the test set.
637
+
638
+ To train the unsupervised models, we used a subset of the Sports-1M dataset
639
+ \citep{KarpathyCVPR14}, which contains 1~million YouTube clips.
640
+ Even though this dataset is labelled for actions, we did
641
+ not do any supervised experiments on it because of logistical constraints with
642
+ working with such a huge dataset. We instead collected 300 hours of video by
643
+ randomly sampling 10-second clips from the dataset. It would be possible to collect
644
+ better samples by extracting, instead of choosing at random, videos in which a lot of
645
+ motion is happening and which contain no shot boundaries. However, we did not
646
+ do so, in the spirit of unsupervised learning and because we did not want to
647
+ introduce any unnatural bias in the samples. We also used the supervised
648
+ datasets (UCF-101 and HMDB-51) for unsupervised training. However, we found that
649
+ using them did not give any significant advantage over just using the YouTube
650
+ videos.
651
+
652
+ We extracted percepts using the convolutional neural net model of
653
+ \citet{Simonyan14c}. The videos have a resolution of 240 $\times$ 320 and were
654
+ sampled at almost 30 frames per second. We took the central 224 $\times$ 224
655
+ patch from each frame and ran it through the convnet. This gave us the RGB
656
+ percepts. Additionally, for UCF-101, we computed flow percepts by extracting flows using the Brox
657
+ method and training the temporal stream convolutional network as described by
658
+ \citet{Simonyan14b}. We found that the fc6 features worked better than fc7 for
659
+ single frame classification using both RGB and flow percepts. Therefore, we
660
+ used the 4096-dimensional fc6 layer as the input representation of our data.
661
+ Besides these percepts, we also trained the proposed models on 32 $\times$ 32
662
+ patches of pixels.
663
+
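+ The percepts themselves come from the pretrained networks cited above; only the
+ central-patch extraction is simple enough to sketch here (illustrative code,
+ assuming an H $\times$ W $\times$ C array layout):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def center_crop(frame, size=224):
+     # Central size x size patch of an H x W x C frame.
+     h, w = frame.shape[:2]
+     top, left = (h - size) // 2, (w - size) // 2
+     return frame[top:top + size, left:left + size]
+
+ patch = center_crop(np.zeros((240, 320, 3)))   # -> shape (224, 224, 3)
+ \end{verbatim}}
+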
664
+ All models were trained using backprop on a single NVIDIA Titan GPU. A two layer
665
+ 2048 unit Composite model that predicts 13 frames and reconstructs 16 frames
666
+ took 18--20 hours to converge on 300 hours of percepts. We initialized weights
667
+ by sampling from a uniform distribution whose scale was set to $1/\sqrt{\textrm{fan-in}}$.
668
+ Biases at all the gates were initialized to zero. Peep-hole connections were
669
+ initialized to zero. The supervised classifiers trained on 16 frames took 5--15
670
+ minutes to converge. The code can be found at \url{https://github.com/emansim/unsupervised-videos}.
671
+
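+ One way to implement the weight initialization described above (a sketch, under
+ the assumption that ``scale'' means a symmetric interval $[-s, s]$ with
+ $s = 1/\sqrt{\textrm{fan-in}}$):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def init_weights(fan_in, fan_out, rng=None):
+     rng = np.random.default_rng() if rng is None else rng
+     s = 1.0 / np.sqrt(fan_in)       # scale = 1/sqrt(fan-in)
+     W = rng.uniform(-s, s, size=(fan_in, fan_out))
+     b = np.zeros(fan_out)           # gate biases (and peephole weights) start at zero
+     return W, b
+ \end{verbatim}}
+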
672
+ \subsection{Visualization and Qualitative Analysis}
673
+
674
+ \begin{figure*}[ht]
675
+ \centering
676
+
677
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_input.pdf}
678
+
679
+ \vspace{-0.18in}
680
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_input_2.pdf}
681
+
682
+ \vspace{0.1in}
683
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_2048.pdf}
684
+
685
+ \vspace{-0.18in}
686
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_2048_2.pdf}
687
+
688
+ \vspace{0.1in}
689
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_4096.pdf}
690
+
691
+ \vspace{-0.18in}
692
+ \includegraphics[clip=true, trim=252 0 0 0, width=\textwidth]{patch_4096_2.pdf}
693
+
694
+ \small
695
+ \begin{picture}(0, 0)(-245, -150)
696
+ \put(-142, 60){Ground Truth Future}
697
+ \put(-50, 62) {\vector(1, 0){40}}
698
+ \put(-150, 62) {\vector(-1, 0){125}}
699
+ \put(-397, 60){Input Sequence}
700
+ \put(-400, 62) {\vector(-1, 0){80}}
701
+ \put(-335, 62) {\vector(1, 0){58}}
702
+
703
+ \put(-142, -10){Future Prediction}
704
+ \put(-50, -9) {\vector(1, 0){40}}
705
+ \put(-150, -9) {\vector(-1, 0){125}}
706
+ \put(-397, -10){Input Reconstruction}
707
+ \put(-400, -9) {\vector(-1, 0){80}}
708
+ \put(-320, -9) {\vector(1, 0){43}}
709
+
710
+ \put(-340, -67){Two Layer Composite Model with 2048 LSTM units}
711
+ \put(-340, -135){Two Layer Composite Model with 4096 LSTM units}
712
+
713
+ \end{picture}
714
+ \vspace{-0.2in}
715
+ \caption{\small Reconstruction and future prediction obtained from the Composite Model
716
+ on a dataset of natural image patches.
717
+ The first two rows show ground truth sequences. The model takes 16 frames as inputs.
718
+ Only the last 10 frames of the input sequence are shown here. The next 13 frames are the ground truth future. In the rows that
719
+ follow, we show the reconstructed and predicted frames for two instances of
720
+ the model.
721
+ }
722
+ \label{fig:image_patches}
723
+ \vspace{-0.2in}
724
+ \end{figure*}
725
+
726
+ The aim of this set of experiments is to visualize the properties of the proposed
727
+ models.
728
+
729
+ \emph{Experiments on MNIST} \\
730
+ We first trained our models on a dataset of moving MNIST digits. In this
731
+ dataset, each
732
+ video was 20 frames long and consisted of two digits moving inside a 64 $\times$ 64
733
+ patch. The digits were chosen randomly from the training set and placed
734
+ initially at random locations inside the patch. Each digit was assigned a
735
+ velocity whose direction was chosen uniformly randomly on a unit circle and
736
+ whose magnitude was also chosen uniformly at random over a fixed range. The
737
+ digits bounced off the edges of the 64 $\times$ 64 frame and overlapped if they
738
+ were at the same location. The reason for working with this dataset is that it is
739
+ infinite in size and can be generated quickly on the fly. This makes it possible
740
+ to explore the model without expensive disk accesses or overfitting issues. It
741
+ also has interesting behaviours due to occlusions and the dynamics of bouncing
742
+ off the walls.
743
+
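+ A generator along these lines can be sketched as follows (illustrative code;
+ the digit crops are taken from the standard MNIST training images, and the
+ velocity range is an assumed placeholder):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def moving_mnist_clip(digits, n_frames=20, size=64, rng=None):
+     # digits: list of 28x28 arrays in [0, 1] from the MNIST training set.
+     rng = np.random.default_rng() if rng is None else rng
+     h, w = digits[0].shape
+     lim = np.array([size - h, size - w])     # max top-left corner per axis
+     pos = rng.uniform(0, lim, size=(len(digits), 2))
+     angle = rng.uniform(0, 2 * np.pi, size=len(digits))
+     speed = rng.uniform(2, 5, size=len(digits))
+     vel = np.stack([np.sin(angle), np.cos(angle)], axis=1) * speed[:, None]
+     clip = np.zeros((n_frames, size, size))
+     for t in range(n_frames):
+         for d, digit in enumerate(digits):
+             r, c = pos[d].astype(int)
+             clip[t, r:r + h, c:c + w] = np.maximum(
+                 clip[t, r:r + h, c:c + w], digit)   # overlapping digits
+         pos += vel
+         for axis in (0, 1):                  # bounce off the frame edges
+             hit = (pos[:, axis] < 0) | (pos[:, axis] > lim[axis])
+             vel[hit, axis] *= -1
+             pos[:, axis] = np.clip(pos[:, axis], 0, lim[axis])
+     return clip
+ \end{verbatim}}
+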
744
+ We first trained a single layer Composite Model. Each LSTM had 2048 units. The
745
+ encoder took 10 frames as input. The decoder tried to reconstruct these 10
746
+ frames and the future predictor attempted to predict the next 10 frames. We
747
+ used logistic output units with a cross entropy loss function.
748
+ \Figref{fig:bouncing_mnist} shows two examples of running this model. The true
749
+ sequences are shown in the first two rows. The next two rows show the
750
+ reconstruction and future prediction from the one layer Composite Model. It is
751
+ interesting to note that the model figures out how to separate superimposed
752
+ digits and can model them even as they pass through each other. This shows some
753
+ evidence of \emph{disentangling} the two independent factors of variation in
754
+ this sequence. The model can also correctly predict the motion after bouncing
755
+ off the walls. In order to see if adding depth helps, we trained a two layer
756
+ Composite Model, with each layer having 2048 units. We can see that adding
757
+ depth helps the model make better predictions. Next, we changed the future
758
+ predictor by making it conditional. We can see that this model makes sharper
759
+ predictions.
760
+
761
+ \emph{Experiments on Natural Image Patches} \\
762
+ Next, we tried to see if our models can also work with natural image patches.
763
+ For this, we trained the models on sequences of 32 $\times$ 32 natural image
764
+ patches extracted from the UCF-101 dataset. In this case, we used linear output
765
+ units and the squared error loss function. The input was 16 frames and the model
766
+ was asked to reconstruct the 16 frames and predict the future 13 frames.
767
+ \Figref{fig:image_patches} shows the results obtained from a two layer Composite
768
+ model with 2048 units. We found that the reconstructions and the predictions are
769
+ both very blurry. We then trained a bigger model with 4096 units. The outputs
770
+ from this model are also shown in \Figref{fig:image_patches}. We can see that
771
+ the reconstructions get much sharper.
772
+
773
+
774
+ \begin{figure*}[ht]
775
+ \centering
776
+ \subfigure[Trained Future Predictor]{
777
+ \label{fig:gatepatterntrained}
778
+ \includegraphics[clip=true, trim=150 50 120 50,width=0.9\textwidth]{gate_pattern.pdf}}
779
+ \subfigure[Randomly Initialized Future Predictor]{
780
+ \label{fig:gatepatternrandom}
781
+ \includegraphics[clip=true, trim=150 50 120 50,width=0.9\textwidth]{gate_pattern_random_init.pdf}}
782
+ \vspace{-0.1in}
783
+ \caption{Pattern of activity in 200 randomly chosen LSTM units in the Future
784
+ Predictor of a 1 layer (unconditioned) Composite Model trained on moving MNIST digits.
785
+ The vertical axis corresponds to different LSTM units. The horizontal axis is
786
+ time. The model was only trained to predict the next 10 frames, but here we let it run to predict
787
+ the next 100 frames.
788
+ {\bf Top}: The dynamics has a periodic quality which does not die
789
+ out. {\bf Bottom}: The pattern of activity when the trained weights
790
+ in the future predictor are replaced by random weights. The dynamics quickly
791
+ dies out.
792
+ }
793
+ \label{fig:gatepattern}
794
+ \vspace{-0.2in}
795
+ \end{figure*}
796
+
797
+ \begin{figure*}
798
+ \centering
799
+
800
+ \includegraphics[width=0.9\textwidth]{mnist_one_digit_truth.pdf}
801
+
802
+ \vspace{-0.18in}
803
+ \includegraphics[width=0.9\textwidth]{mnist_three_digit_truth.pdf}
804
+
805
+ \vspace{0.1in}
806
+ \includegraphics[width=0.9\textwidth]{mnist_one_digit_recon.pdf}
807
+
808
+ \vspace{-0.18in}
809
+ \includegraphics[width=0.9\textwidth]{mnist_three_digit_recon.pdf}
810
+ \vspace{-0.2in}
811
+
812
+ \small
813
+ \begin{picture}(0, 0)(-245, -45)
814
+ \put(-172, 62){Ground Truth Future}
815
+ \put(-90, 64) {\vector(1, 0){60}}
816
+ \put(-180, 64) {\vector(-1, 0){60}}
817
+ \put(-397, 62){Input Sequence}
818
+ \put(-400, 64) {\vector(-1, 0){55}}
819
+ \put(-335, 64) {\vector(1, 0){90}}
820
+
821
+ \put(-150, 0){Future Prediction}
822
+ \put(-80, 1) {\vector(1, 0){50}}
823
+ \put(-155, 1) {\vector(-1, 0){85}}
824
+ \put(-390, 0){Input Reconstruction}
825
+ \put(-395, 1) {\vector(-1, 0){60}}
826
+ \put(-305, 1) {\vector(1, 0){60}}
827
+ \end{picture}
828
+ \caption{\small Out-of-domain runs. Reconstruction and Future prediction for
829
+ test sequences of one and three moving digits. The model was trained on
830
+ sequences of two moving digits.}
831
+ \label{fig:out_of_domain}
832
+ \vspace{-0.18in}
833
+ \end{figure*}
834
+
835
+ \emph{Generalization over time scales} \\
836
+ In the next experiment, we test if the model can work at time scales that are
837
+ different than what it was trained on. We take a one hidden layer unconditioned
838
+ Composite Model trained on moving MNIST digits. The model has 2048 LSTM units
839
+ and looks at a 64 $\times$ 64 input. It was trained on input sequences of 10
840
+ frames to reconstruct those 10 frames as well as predict 10 frames into the
841
+ future. In order to test if the future predictor is able to generalize beyond 10
842
+ frames, we let the model run for 100 steps into the future.
843
+ \Figref{fig:gatepatterntrained} shows the pattern of activity in the LSTM units of the
844
+ future predictor
845
+ pathway for a randomly chosen test input. It shows the activity at each of the
846
+ three sigmoidal gates (input, forget, output), the input (after the tanh
847
+ non-linearity, before being multiplied by the input gate), the cell state and
848
+ the final output (after being multiplied by the output gate). Even though the
849
+ units are ordered randomly along the vertical axis, we can see that the dynamics
850
+ has a periodic quality to it. The model is able to generate persistent motion
851
+ for long periods of time. In terms of reconstruction, the model only outputs
852
+ blobs after the first 15 frames, but the motion is relatively well preserved.
853
+ More results, including long-range future predictions over hundreds of time steps, can be seen at
854
+ \url{http://www.cs.toronto.edu/~nitish/unsupervised_video}.
855
+ To show that setting up a periodic behaviour is not trivial,
856
+ \Figref{fig:gatepatternrandom} shows the activity from a randomly initialized future
857
+ predictor. Here, the LSTM state quickly converges and the outputs blur completely.
858
+
859
+
860
+
861
+ \emph{Out-of-domain Inputs} \\
862
+ Next, we test this model's ability to deal with out-of-domain inputs. For this,
863
+ we test the model on sequences of one and three moving digits. The model was
864
+ trained on sequences of two moving digits, so it has never seen inputs with just
865
+ one digit or three digits. \Figref{fig:out_of_domain} shows the reconstruction
866
+ and future prediction results. For one moving digit, we can see that the model
867
+ can do a good job but it really tries to hallucinate a second digit overlapping
868
+ with the first one. The second digit shows up towards the end of the future
869
+ reconstruction. For three digits, the model merges digits into blobs. However,
870
+ it does well at getting the overall motion right. This highlights a key drawback
871
+ of modeling entire frames of input in a single pass. In order to model videos
872
+ with a variable number of objects, we perhaps need models that not only have an attention
873
+ mechanism in place, but can also learn to execute themselves a variable number
874
+ of times and do variable amounts of computation.
875
+
876
+
877
+
878
+ \emph{Visualizing Features} \\
879
+ Next, we visualize the features learned by this model.
880
+ \Figref{fig:input_features} shows the weights that connect each input frame to
881
+ the encoder LSTM. There are four sets of weights. One set of weights connects
882
+ the frame to the input units. There are three other sets, one corresponding to
883
+ each of the three gates (input, forget and output). Each weight has a size of 64
884
+ $\times$ 64. A lot of features look like thin strips. Others look like higher
885
+ frequency strips. It is conceivable that the high frequency features help in
886
+ encoding the direction and velocity of motion.
887
+
888
+ \Figref{fig:output_features} shows the output features from the two LSTM
889
+ decoders of a Composite Model. These correspond to the weights connecting the
890
+ LSTM output units to the output layer. They appear to be somewhat qualitatively
891
+ different from the input features shown in \Figref{fig:input_features}. There
892
+ are many more output features that are local blobs, whereas those are rare in
893
+ the input features. In the output features, the ones that do look like strips
894
+ are much shorter than those in the input features. One way to interpret this is
895
+ the following. The model needs to know about motion (which direction and how
896
+ fast things are moving) from the input. This requires \emph{precise} information
897
+ about location (thin strips) and velocity (high frequency strips). But when it
898
+ is generating the output, the model wants to hedge its bets so that it
899
+ does not
900
+ suffer a huge loss for predicting things sharply at the wrong place. This could
901
+ explain why the output features have somewhat bigger blobs. The relative
902
+ shortness of the strips in the output features can be explained by the fact that
903
+ in the inputs, it does not hurt to have a longer feature than what is needed to
904
+ detect a location because information is coarse-coded through multiple features.
905
+ But in the output, the model may not want to put down a feature that is bigger
906
+ than any digit because other units will have to conspire to correct for it.
907
+
908
+
909
+
910
+ \begin{figure*}
911
+ \centering
912
+ \small
913
+ \subfigure[\small Inputs]{
914
+ \includegraphics[clip=true, trim=0 0 0 100,width=0.48\textwidth]{mnist_act.pdf}}
915
+ \subfigure[\small Input Gates]{
916
+ \includegraphics[clip=true, trim=0 0 0 100,width=0.48\textwidth]{mnist_input_gates.pdf}}
917
+ \subfigure[\small Forget Gates]{
918
+ \includegraphics[clip=true, trim=0 0 0 100,width=0.48\textwidth]{mnist_forget_gates.pdf}}
919
+ \subfigure[\small Output Gates]{
920
+ \includegraphics[clip=true, trim=0 0 0 100,width=0.48\textwidth]{mnist_output_gates.pdf}}
921
+ \caption{\small Input features from a Composite Model trained on moving MNIST
922
+ digits. In an LSTM, each input frame is connected to four sets of units - the input, the input
923
+ gate, forget gate and output gate. These figures show the top-200 features ordered by $L_2$ norm of the
924
+ input features. The features in corresponding locations belong to the same LSTM unit.}
925
+ \label{fig:input_features}
926
+ \end{figure*}
927
+
928
+ \begin{figure*}
929
+ \centering
930
+ \small
931
+ \subfigure[\small Input Reconstruction]{
932
+ \includegraphics[width=0.48\textwidth]{output_weights_decoder.pdf}}
933
+ \subfigure[\small Future Prediction]{
934
+ \includegraphics[width=0.48\textwidth]{output_weights_future.pdf}}
935
+ \caption{\small Output features from the two decoder LSTMs of a Composite Model trained on moving MNIST
936
+ digits. These figures show the top-200 features ordered by $L_2$ norm.}
937
+ \label{fig:output_features}
938
+ \end{figure*}
939
+
940
+ \subsection{Action Recognition on UCF-101/HMDB-51}
941
+
942
+
943
+ The aim of this set of experiments is to see if the features learned by
944
+ unsupervised learning can help improve performance on supervised tasks.
945
+
946
+ We trained a two layer Composite Model with 2048 hidden units with no conditioning on
947
+ either decoders. The model was trained on percepts extracted from 300 hours of
948
+ YouTube data. The model was trained to autoencode 16 frames and predict the
949
+ next 13 frames. We initialize an LSTM classifier with the
950
+ weights learned by the encoder LSTM from this model. The classifier is shown in
951
+ \Figref{fig:lstm_classifier}. The output from each LSTM in the second layer goes into a softmax
952
+ classifier that makes a prediction about the action being performed at each time
953
+ step. Since only one action is being performed in each video in the datasets we
954
+ consider, the target is the same at each time step. At test time, the
955
+ predictions made at each time step are averaged. To get a prediction for the
956
+ entire video, we average the predictions from all 16 frame blocks in the video
957
+ with a stride of 8 frames. Using a smaller stride did not improve results.
958
+
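+ The test-time averaging can be sketched as follows (illustrative code;
+ \texttt{classify\_block} is a hypothetical stand-in for the trained LSTM
+ classifier applied to one 16-frame block of percepts):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def video_prediction(percepts, classify_block, block_len=16, stride=8):
+     # classify_block: maps a (block_len, feat_dim) array of percepts to a
+     # (block_len, n_classes) array of per-time-step class probabilities.
+     scores = []
+     for start in range(0, len(percepts) - block_len + 1, stride):
+         probs = classify_block(percepts[start:start + block_len])
+         scores.append(probs.mean(axis=0))    # average over time steps
+     return np.mean(scores, axis=0)           # average over blocks
+ \end{verbatim}}
+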
959
+
960
+ The baseline for comparing these models is an identical LSTM classifier but with
961
+ randomly initialized weights. All classifiers used dropout regularization,
962
+ where we dropped activations as they were communicated across layers but not
963
+ through time within the same LSTM as proposed in \citet{dropoutLSTM}. We
964
+ emphasize that this is a very strong baseline and does significantly better than
965
+ just using single frames. Using dropout was crucial in order to train good
966
+ baseline models especially with very few training examples.
967
+
968
+ \begin{figure}[ht]
969
+ \centering
970
+ \resizebox{0.7\linewidth}{!}{\makeatletter
971
+ \ifx\du\undefined
972
+ \newlength{\du}
973
+ \fi
974
+ \setlength{\du}{\unitlength}
975
+ \ifx\spacing\undefined
976
+ \newlength{\spacing}
977
+ \fi
978
+ \setlength{\spacing}{60\unitlength}
979
+ \begin{tikzpicture}
980
+ \pgfsetlinewidth{0.5\du}
981
+ \pgfsetmiterjoin
982
+ \pgfsetbuttcap
983
+
984
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v1) at (0\spacing, 0) {$v_1$};
985
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v2) at (\spacing, 0) {$v_2$};
986
+ \node[rectangle, draw=white, minimum width=10\du, minimum height=50\du] (v3) at (2\spacing, 0) {$\ldots$};
987
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (v4) at (3\spacing, 0) {$v_T$};
988
+
989
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h1) at (0\spacing, \spacing) {};
990
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h2) at (\spacing, \spacing) {};
991
+ \node[rectangle, draw=white, minimum width=10\du, minimum height=50\du] (h3) at (2\spacing, \spacing) {$\ldots$};
992
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (h4) at (3\spacing, \spacing) {};
993
+
994
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (g1) at (0\spacing, 2\spacing) {};
995
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (g2) at (\spacing, 2\spacing) {};
996
+ \node[rectangle, draw=white, minimum width=10\du, minimum height=50\du] (g3) at (2\spacing, 2\spacing) {$\ldots$};
997
+ \node[rectangle, draw=black, minimum width=10\du, minimum height=50\du] (g4) at (3\spacing, 2\spacing) {};
998
+
999
+
1000
+ \node[circle, draw=black, minimum size=10\du] (y1) at (0\spacing, 3\spacing) {$y_1$};
1001
+ \node[circle, draw=black, minimum size=10\du] (y2) at (\spacing, 3\spacing){$y_2$};
1002
+ \node[circle, draw=white, minimum size=10\du] (y3) at (2\spacing, 3\spacing){$\ldots$};
1003
+ \node[circle, draw=black, minimum size=10\du] (y4) at (3\spacing, 3\spacing){$y_T$};
1004
+
1005
+
1006
+ \draw[->] (v1) -- (h1);
1007
+ \draw[->] (v2) -- (h2);
1008
+ \draw[->] (v4) -- (h4);
1009
+
1010
+ \draw[->] (h1) -- (g1);
1011
+ \draw[->] (h2) -- (g2);
1012
+ \draw[->] (h4) -- (g4);
1013
+
1014
+ \draw[->] (g1) -- (y1);
1015
+ \draw[->] (g2) -- (y2);
1016
+ \draw[->] (g4) -- (y4);
1017
+
1018
+ \draw[->] (h1) -- node[above] {$W^{(1)}$} (h2);
1019
+ \draw[->] (h2) -- node[above] {$W^{(1)}$} (h3);
1020
+ \draw[->] (h3) -- node[above] {$W^{(1)}$} (h4);
1021
+
1022
+ \draw[->] (g1) -- node[above] {$W^{(2)}$} (g2);
1023
+ \draw[->] (g2) -- node[above] {$W^{(2)}$} (g3);
1024
+ \draw[->] (g3) -- node[above] {$W^{(2)}$} (g4);
1025
+
1026
+ \end{tikzpicture}
1027
+ }
1028
+ \caption{\small LSTM Classifier.}
1029
+ \label{fig:lstm_classifier}
1030
+ \vspace{-0.2in}
1031
+ \end{figure}
1032
+
1033
+
1034
+
1035
+ \Figref{fig:small_dataset} compares three models -- the single frame classifier
1037
+ (logistic regression), the baseline LSTM classifier, and the LSTM classifier
1037
+ initialized with weights from the Composite Model as the number of labelled
1038
+ videos per class is varied. Note that having one labelled video means having
1039
+ many labelled 16 frame blocks. We can see that for the case of very few
1040
+ training examples, unsupervised learning gives a substantial improvement. For
1041
+ example, for UCF-101, the performance improves from 29.6\% to 34.3\% when
1042
+ training on only one labelled video. As the size of the labelled dataset grows,
1043
+ the improvement becomes smaller. Even for the full UCF-101 dataset we still get a
1044
+ considerable improvement from 74.5\% to 75.8\%. On HMDB-51, the improvement is
1045
+ from 42.8\% to 44.0\% for the full dataset (70 videos per class) and 14.4\% to
1046
+ 19.1\% for one video per class. Although the improvement in classification from
1047
+ unsupervised learning was not as large as we expected, we still obtained an
1048
+ additional improvement over a strong baseline. We discuss some avenues
1049
+ for improvements later.
1050
+
1051
+ \begin{figure*}[ht]
1052
+ \centering
1053
+ \subfigure[UCF-101 RGB]{
1054
+ \includegraphics[width=0.38\linewidth]{ucf101_small_dataset.pdf}
1055
+ }
1056
+ \subfigure[HMDB-51 RGB]{
1057
+ \includegraphics[width=0.38\linewidth]{hmdb_small_dataset.pdf}
1058
+ }
1059
+ \vspace{-0.1in}
1060
+ \caption{\small Effect of pretraining on action recognition with change in the size of the labelled training set. The error bars are over 10 different samples of training sets.}
1061
+ \label{fig:small_dataset}
1062
+ \vspace{-0.1in}
1063
+ \end{figure*}
1064
+
1065
+ We further ran similar experiments on the optical flow percepts extracted from
1066
+ the UCF-101 dataset. A temporal stream convolutional net, similar to the one
1067
+ proposed by \citet{Simonyan14c}, was trained on single frame optical flows as
1068
+ well as on stacks of 10 optical flows. This gave an accuracy of 72.2\% and
1069
+ 77.5\% respectively. Here again, our models took 16 frames as input,
1070
+ reconstructed them and predicted 13 frames into the future. LSTMs with 128
1071
+ hidden units improved the accuracy by 2.1\% to 74.3\% for the single frame
1072
+ case. Bigger LSTMs did not improve results. By pretraining the LSTM, we were
1073
+ able to further improve the classification to 74.9\% ($\pm 0.1$). For stacks of
1074
+ 10 frames we improved very slightly to 77.7\%. These results are summarized in
1075
+ \Tabref{tab:action}.
1076
+
1077
+
1078
+
1079
+
1080
+
1081
+ \begin{table}
1082
+ \small
1083
+ \centering
1084
+ \tabcolsep=0.0in
1085
+ \begin{tabular}{L{0.35\linewidth}C{0.15\linewidth}C{0.25\linewidth}C{0.25\linewidth}}
1086
+ \toprule
1087
+ {\bf Model} & {\bf UCF-101 RGB} & {\bf UCF-101 1-~frame flow} & {\bf HMDB-51 RGB}\\
1088
+ \midrule
1089
+ Single Frame & 72.2 & 72.2 & 40.1 \\
1090
+ LSTM classifier & 74.5 & 74.3 & 42.8 \\
1091
+ Composite LSTM Model + Finetuning & \textbf{75.8} & \textbf{74.9} & \textbf{44.1} \\
1092
+ \bottomrule
1093
+ \end{tabular}
1094
+ \caption{\small Summary of Results on Action Recognition.}
1095
+ \label{tab:action}
1096
+ \vspace{-0.2in}
1097
+ \end{table}
1098
+
1099
+ \subsection{Comparison of Different Model Variants}
1100
+ \begin{table}[t!]
1101
+ \small
1102
+ \centering
1103
+ \tabcolsep=0.0in
1104
+ \begin{tabular}{L{0.5\linewidth}C{0.25\linewidth}C{0.25\linewidth}} \toprule
1105
+ {\bf Model} & {\bf Cross Entropy on MNIST} & {\bf Squared loss on image
1106
+ patches} \\ \midrule
1107
+ Future Predictor & 350.2 & 225.2\\
1108
+ Composite Model & 344.9 & 210.7 \\
1109
+ Conditional Future Predictor & 343.5 & 221.3 \\
1110
+ Composite Model with Conditional Future Predictor & 341.2 & 208.1 \\
1111
+ \bottomrule
1112
+ \end{tabular}
1113
+ \caption{\small Future prediction results on MNIST and image patches. All models use 2
1114
+ layers of LSTMs.}
1115
+ \label{tab:prediction}
1116
+ \vspace{-0.25in}
1117
+ \end{table}
1118
+
1119
+
1120
+ \begin{table*}[t]
1121
+ \small
1122
+ \centering
1123
+ \tabcolsep=0.0in
1124
+ \begin{tabular}{L{0.4\textwidth}C{0.15\textwidth}C{0.15\textwidth}C{0.15\textwidth}C{0.15\textwidth}}\toprule
1125
+ {\bf Method} & {\bf UCF-101 small} & {\bf UCF-101} & {\bf HMDB-51 small} & {\bf HMDB-51}\\\midrule
1126
+ Baseline LSTM & 63.7 & 74.5 & 25.3 & 42.8 \\
1127
+ Autoencoder & 66.2 & 75.1 & 28.6 & 44.0 \\
1128
+ Future Predictor & 64.9 & 74.9 & 27.3 & 43.1 \\
1129
+ Conditional Autoencoder & 65.8 & 74.8 & 27.9 & 43.1 \\
1130
+ Conditional Future Predictor & 65.1 & 74.9 & 27.4 & 43.4 \\
1131
+ Composite Model & 67.0 & {\bf 75.8} & 29.1 & {\bf 44.1}\\
1132
+ Composite Model with Conditional Future Predictor & {\bf 67.1} & {\bf 75.8} & {\bf 29.2} & 44.0 \\ \bottomrule
1133
+ \end{tabular}
1134
+ \caption{\small Comparison of different unsupervised pretraining methods. UCF-101 small
1135
+ is a subset containing 10 videos per
1136
+ class. HMDB-51 small contains 4 videos per class.}
1137
+ \label{tab:variants}
1138
+ \vspace{-0.2in}
1139
+ \end{table*}
1140
+
1141
+
1142
+ The aim of this set of experiments is to compare the different variants of the
1143
+ model proposed in this paper. Since it is always possible to get lower
1144
+ reconstruction error by copying the inputs, we cannot use input reconstruction
1145
+ error as a measure of how good a model is doing. However, we can use the error
1146
+ in predicting the future as a reasonable measure of how good the model is
1147
+ doing. In addition, we can use the performance on supervised tasks as a proxy for
1148
+ how good the unsupervised model is doing. In this section, we present results from
1149
+ these two analyses.
1150
+
1151
+ Future prediction results are summarized in \Tabref{tab:prediction}. For MNIST
1152
+ we compute the cross entropy of the predictions with respect to the ground
1153
+ truth, both of which are 64 $\times$ 64 patches. For natural image patches, we
1154
+ compute the squared loss. We see that the Composite Model always does a better
1155
+ job of predicting the future compared to the Future Predictor. This indicates
1156
+ that having the autoencoder along with the future predictor to force the model
1157
+ to remember more about the inputs actually helps predict the future better.
1158
+ Next, we can compare each model with its conditional variant. Here, we find that
1159
+ the conditional models perform better, as was also noted in \Figref{fig:bouncing_mnist}.
1160
+
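+ For reference, the two prediction losses can be computed as follows (a
+ straightforward sketch; summation over a whole sequence is an assumption about
+ the exact reduction):
+ {\small
+ \begin{verbatim}
+ import numpy as np
+
+ def cross_entropy(pred, target, eps=1e-12):
+     # Logistic cross entropy over a predicted 64x64 MNIST frame in [0, 1].
+     pred = np.clip(pred, eps, 1 - eps)
+     return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).sum()
+
+ def squared_loss(pred, target):
+     # Squared error used for the natural image patches.
+     return ((pred - target) ** 2).sum()
+ \end{verbatim}}
+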
1161
+ Next, we compare the models using performance on a supervised task.
1162
+ \Tabref{tab:variants} shows the performance on action
1163
+ recognition achieved by finetuning different unsupervised learning models.
1164
+ Besides running the experiments on the full UCF-101 and HMDB-51 datasets, we also ran the
1165
+ experiments on small subsets of these to better highlight the case where we have
1166
+ very few training examples. We find that all unsupervised models improve over the
1167
+ baseline LSTM which is itself well-regularized by using dropout. The Autoencoder
1168
+ model seems to perform consistently better than the Future Predictor. The
1169
+ Composite model which combines the two does better than either one alone.
1170
+ Conditioning on the generated inputs does not seem to give a clear
1171
+ advantage over not doing so. The Composite Model with a conditional future
1172
+ predictor works the best, although its performance is almost same as that of the
1173
+ Composite Model.
1174
+
1175
+ \subsection{Comparison with Other Action Recognition Benchmarks}
1176
+
1177
+ Finally, we compare our models to the state-of-the-art action recognition
1178
+ results. The performance is summarized in \Tabref{tab:sota}. The table is
1179
+ divided into three sets. The first set compares models that use only RGB data
1180
+ (single or multiple frames). The second set compares models that use explicitly
1181
+ computed flow features only. Models in the third set use both.
1182
+
1183
+ On RGB data, our model performs on par with the best deep models. It performs
1184
+ 3\% better than the LRCN model that also used LSTMs on top of convnet features\footnote{However,
1185
+ the improvement is only partially from unsupervised learning, since we
1186
+ used a better convnet model.}. Our model performs better than C3D features that
1187
+ use a 3D convolutional net. However, when the C3D features are concatenated
1188
+ with fc6 percepts, they do slightly better than our model.
1189
+
1190
+ The improvement for flow features over using a randomly initialized LSTM network
1191
+ is quite small. We believe this is at least partly due to the fact that the flow percepts
1192
+ already capture a lot of the motion information that the LSTM would otherwise
1193
+ discover.
1194
+
1195
+
1196
+ When we combine predictions from the RGB and flow models, we obtain 84.3\%
1197
+ accuracy on UCF-101. We believe further improvements can be made by running the
1198
+ model over different patch locations and mirroring the patches. Also, our model
1199
+ can be applied deeper inside the convnet instead of just at the top-level. That
1200
+ can potentially lead to further improvements.
1201
+ In this paper, we focus on showing that unsupervised training helps
1202
+ consistently across both datasets and across different sized training sets.
1203
+
1204
+
1205
+ \begin{table}[t]
1206
+ \small
1207
+ \centering
1208
+ \tabcolsep=0.0in
1209
+ \begin{tabular}{L{0.7\linewidth}C{0.15\linewidth}C{0.15\linewidth}}\toprule
1210
+ {\bf Method} & {\bf UCF-101} & {\bf HMDB-51}\\\midrule
1211
+ Spatial Convolutional Net \citep{Simonyan14b} & 73.0 & 40.5 \\
1212
+ C3D \citep{C3D} & 72.3 & - \\
1213
+ C3D + fc6 \citep{C3D} & {\bf 76.4} & - \\
1214
+ LRCN \citep{BerkeleyVideo} & 71.1 & - \\
1215
+ Composite LSTM Model & 75.8 & 44.0 \\
1216
+ \midrule
1217
+
1218
+ Temporal Convolutional Net \citep{Simonyan14b} & {\bf 83.7} & 54.6 \\
1219
+ LRCN \citep{BerkeleyVideo} & 77.0 & - \\
1220
+ Composite LSTM Model & 77.7 & - \\
1221
+ \midrule
1222
+
1223
+ LRCN \cite{BerkeleyVideo} & 82.9 & - \\
1224
+ Two-stream Convolutional Net \cite{Simonyan14b} & 88.0 & 59.4 \\
1225
+ Multi-skip feature stacking \cite{MFS} & {\bf 89.1} & {\bf 65.1} \\
1226
+ Composite LSTM Model & 84.3 & - \\
1227
+ \bottomrule
1228
+ \end{tabular}
1229
+ \caption{\small Comparison with state-of-the-art action recognition models.}
1230
+ \label{tab:sota}
1231
+ \vspace{-0.2in}
1232
+ \end{table}
1233
+
1234
+ \section{Conclusions} \vspace{-0.1in}
1235
+ We proposed models based on LSTMs that can learn good video representations. We
1236
+ compared them and analyzed their properties through visualizations. Moreover, we
1237
+ managed to get an improvement on supervised tasks. The best performing model was
1238
+ the Composite Model that combined an autoencoder and a future predictor.
1239
+ Conditioning on generated outputs did not have a significant impact on the
1240
+ performance on supervised tasks; however, it made the future predictions look
1241
+ slightly better. The model was able to persistently generate motion well beyond
1242
+ the time scales it was trained for. However, it lost the precise object features
1243
+ rapidly after the training time scale. The features at the input and output
1244
+ layers were found to have some interesting properties.
1245
+
1246
+ To obtain further improvements on supervised tasks, we believe that the model can
1247
+ be extended by applying it convolutionally across patches of the video and
1248
+ stacking multiple layers of such models. Applying this model in the lower layers
1249
+ of a convolutional net could help extract motion information that would
1250
+ otherwise be lost across max-pooling layers. In our future work, we plan to
1251
+ build models based on these autoencoders from the bottom up instead of applying
1252
+ them only to percepts.
1253
+
1254
+ \vspace{-0.12in}
1255
+ \section*{Acknowledgments}
1256
+ \vspace{-0.12in}
1257
+
1258
+ We acknowledge the support of Samsung, Raytheon BBN Technologies, and NVIDIA
1259
+ Corporation for the donation of a GPU used for this research. The authors
1260
+ would like to thank Geoffrey Hinton and Ilya Sutskever for helpful discussions
1261
+ and comments.
1262
+
1263
+ \vspace{-0.2in}
1264
+
1265
+
1266
+ {\small
1267
+ \bibliography{refs}
1268
+ \bibliographystyle{icml2015}
1269
+ }
1270
+
1271
+ \end{document}
papers/1503/1503.04069.tex ADDED
@@ -0,0 +1,688 @@
1
+ \documentclass[journal]{IEEEtran}
2
+
3
+ \usepackage[utf8]{inputenc}
4
+
5
+ \usepackage{times}
6
+ \usepackage{graphicx}
7
+ \usepackage{subfigure}
8
+ \usepackage{algorithm}
9
+ \usepackage{algorithmic}
10
+ \usepackage{amsmath}
11
+ \usepackage{amsfonts}
12
+ \usepackage{amssymb}
13
+ \allowdisplaybreaks[0]
14
+ \usepackage{enumitem}
15
+ \usepackage[super]{nth}
16
+ \usepackage{microtype}
17
+ \usepackage{todonotes}
18
+ \usepackage[numbers,sort&compress]{natbib}
19
+ \usepackage{flushend}
20
+
21
+
22
+ \PassOptionsToPackage{hyphens}{url}
23
+ \usepackage[colorlinks]{hyperref}
24
+ \usepackage{color}
25
+ \definecolor{mydarkblue}{rgb}{0,0.08,0.45}
26
+ \hypersetup{ pdftitle={},
27
+ pdfauthor={},
28
+ pdfsubject={},
29
+ pdfkeywords={},
30
+ pdfborder=0 0 0,
31
+ pdfpagemode=UseNone,
32
+ colorlinks=true,
33
+ linkcolor=mydarkblue,
34
+ citecolor=mydarkblue,
35
+ filecolor=mydarkblue,
36
+ urlcolor=mydarkblue,
37
+ pdfview=FitH}
38
+
39
+
40
+ \usepackage{cleveref}[2012/02/15]\crefformat{footnote}{#2\footnotemark[#1]#3}
41
+
42
+
43
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
44
+
45
+ \usepackage[english]{babel}
46
+ \addto\extrasenglish{\def\sectionautorefname{Section}\def\subsectionautorefname{Section}\def\subsubsectionautorefname{Section}}
47
+
48
+ \newcommand{\noskip}{\setlength\itemsep{3pt}\setlength\parsep{0pt}\setlength\partopsep{0pt}\setlength\parskip{0pt}\setlength\topskip{0pt}}
49
+
50
+ \renewcommand{\subsectionautorefname}{Section}
51
+ \renewcommand{\subsubsectionautorefname}{Section}
52
+ \newcommand{\subfigureautorefname}{\figureautorefname}
53
+
54
+
55
+ \begin{document}
56
+ \title{LSTM: A Search Space Odyssey}
57
+
58
+ \author{Klaus~Greff,
59
+ Rupesh~K.~Srivastava,
60
+ Jan~Koutn\'{i}k,
61
+ Bas~R.~Steunebrink,
62
+ J\"{u}rgen~Schmidhuber\thanks{\copyright 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be
63
+ obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted
64
+ component of this work in other works.
65
+ Manuscript received May 15, 2015; revised March 17, 2016; accepted June 9, 2016. Date of publication July 8, 2016; date of current version June 20, 2016.
66
+ DOI: \href{https://doi.org/10.1109/TNNLS.2016.2582924}{10.1109/TNNLS.2016.2582924}
67
+
68
+ This research was supported by the Swiss National Science Foundation grants
69
+ ``Theory and Practice of Reinforcement Learning 2'' (\#138219) and ``Advanced Reinforcement Learning'' (\#156682),
70
+ and by EU projects ``NASCENCE'' (FP7-ICT-317662), ``NeuralDynamics'' (FP7-ICT-270247) and WAY (FP7-ICT-288551).}\thanks{K. Greff, R. K. Srivastava, J. Kout\'{i}k, B. R. Steunebrink and J. Schmidhuber are with the Istituto Dalle Molle di studi sull'Intelligenza Artificiale (IDSIA), the Scuola universitaria professionale della Svizzera italiana (SUPSI), and the Università della Svizzera italiana (USI).}\thanks{Author e-mails addresses: \{klaus, rupesh, hkou, bas, juergen\}@idsia.ch}}
71
+
72
+
73
+ \markboth{Transactions on Neural Networks and Learning Systems}{Greff \MakeLowercase{\textit{et al.}}: LSTM: A Search Space Odyssey}
74
+
75
+ \maketitle
76
+
77
+ \begin{abstract}
78
+ Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995.
79
+ In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems.
80
+ This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants.
81
+ In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling.
82
+ The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful fANOVA framework.
83
+ In total, we summarize the results of 5400 experimental runs ($\approx 15$ years of CPU time), which makes our study the largest of its kind on LSTM networks.
84
+ Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components.
85
+ We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.
86
+ \end{abstract}
87
+
88
+ \begin{IEEEkeywords}
89
+ Recurrent neural networks, Long Short-Term Memory, LSTM, sequence learning, random search, fANOVA.
90
+ \end{IEEEkeywords}
91
+
92
+
93
+
94
+
95
+ \section{Introduction}
96
+ Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data.
97
+ Earlier methods for attacking these problems have either been tailored towards a specific problem or did not scale to long time dependencies.
98
+ LSTMs on the other hand are both general and effective at capturing long-term temporal dependencies.
99
+ They do not suffer from the optimization hurdles that plague simple recurrent networks (SRNs) \cite{Hochreiter1991, Hochreiter2001} and have been used to advance the state-of-the-art for many difficult problems.
100
+ This includes handwriting recognition \cite{Graves2009,Pham2013,Doetsch2014} and generation \cite{Graves2013d}, language modeling \cite{Zaremba2014} and translation \cite{Luong2014}, acoustic modeling of speech \cite{Sak2014}, speech synthesis \cite{Fan2014}, protein secondary structure prediction \cite{Sonderby2014}, analysis of audio \cite{Marchi2014}, and video data \cite{Donahue2014} among others.
101
+
102
+ The central idea behind the LSTM architecture is a memory cell which can maintain its state over time, and non-linear gating units which regulate the information flow into and out of the cell.
103
+ Most modern studies incorporate many improvements that have been made to the LSTM architecture since its original formulation \cite{Hochreiter1995,Hochreiter1997}.
104
+ However, LSTMs are now applied to many learning problems which differ significantly in scale and nature from the problems that these improvements were initially tested on.
105
+ A systematic study of the utility of various computational components which comprise LSTMs (see \autoref{fig:lstm}) was missing. This paper fills that gap and systematically addresses the open question of improving the LSTM architecture.
106
+
107
+ \begin{figure*}[t]
108
+ \centering
109
+ \includegraphics[width=0.9\textwidth]{figures/lstm2}
110
+ \caption{Detailed schematic of the Simple Recurrent Network (SRN) unit (left) and a Long Short-Term Memory block (right) as used in the hidden layers of a recurrent neural network.}
111
+ \label{fig:lstm}
112
+ \end{figure*}
113
+
114
+ We evaluate the most popular LSTM architecture (\emph{vanilla LSTM}; \autoref{sec:lstm}) and eight different variants thereof on three benchmark problems: acoustic modeling, handwriting recognition, and polyphonic music modeling.
115
+ Each variant differs from the vanilla LSTM by a single change.
116
+ This allows us to isolate the effect of each of these changes on the performance of the architecture.
117
+ Random search \citep{Anderson1953, Solis1981, Bergstra2012} is used to find the best-performing hyperparameters for each variant on each problem, enabling a reliable comparison of the performance of different variants.
118
+ We also provide insights gained about hyperparameters and their interaction using fANOVA \citep{Hutter2014}.
119
+
120
+
121
+
122
+
123
+
124
+ \section{Vanilla LSTM}
125
+ \label{sec:lstm}
126
+
127
+ The LSTM setup most commonly used in literature was originally described by \citet{Graves2005}. We refer to it as \emph{vanilla LSTM} and use it as a reference for comparison of all the variants.
128
+ The vanilla LSTM incorporates changes by \citet{Gers1999} and \citet{Gers2000} into the original LSTM~\cite{Hochreiter1997} and uses full gradient training. \autoref{sec:history} provides descriptions of these major LSTM changes.
129
+
130
+ A schematic of the vanilla LSTM block can be seen in \autoref{fig:lstm}.
131
+ It features three gates (input, forget, output), block input, a single cell (the Constant Error Carousel), an output activation function, and peephole connections\footnote{Some studies omit peephole connections, described in Section \ref{sec:peephole}.}.
132
+ The output of the block is recurrently connected back to the block input and all of the gates.
133
+
134
+ \pagebreak
135
+
136
+ \subsection{Forward Pass}
137
+ \label{lstm_forward}
138
+ Let $\mathbf{x}^t$ be the input vector at time $t$, $N$ be the number of LSTM blocks and $M$ the number of inputs. Then we get the following weights for an LSTM layer:
139
+ \begin{itemize}
140
+ \noskip
141
+ \item Input weights: $\mathbf{W}_z$, $\mathbf{W}_i$, $\mathbf{W}_f$, $\mathbf{W}_o \in \mathbb{R}^{N \times M}$
142
+ \item Recurrent weights: $\mathbf{R}_z$, $\mathbf{R}_i$, $\mathbf{R}_f$, $\mathbf{R}_o$ $\in \mathbb{R}^{N \times N}$
143
+ \item Peephole weights: $\mathbf{p}_i$, $\mathbf{p}_f$, $\mathbf{p}_o$ $\in \mathbb{R}^{N}$
144
+ \item Bias weights: $\mathbf{b}_z$, $\mathbf{b}_i$, $\mathbf{b}_f$, $\mathbf{b}_o$ $\in \mathbb{R}^{N}$
145
+ \end{itemize}
146
+
147
+
148
+ Then the vector formulas for a vanilla LSTM layer forward pass can be written as:
149
+ \begin{align*}
150
+ \bar{\mathbf{z}}^t &= \mathbf{W}_z \mathbf{x}^t + \mathbf{R}_z \mathbf{y}^{t-1} + \mathbf{b}_z\\
151
+ \mathbf{z}^t &= g(\bar{\mathbf{z}}^t) & \textit{block input}\\
152
+ \bar{\mathbf{i}}^t &= \mathbf{W}_i \mathbf{x}^t + \mathbf{R}_i \mathbf{y}^{t-1} + \mathbf{p}_i \odot \mathbf{c}^{t-1} + \mathbf{b}_i\\
153
+ \mathbf{i}^t &= \sigma(\bar{\mathbf{i}}^t) & \textit{input gate}\\
154
+ \bar{\mathbf{f}}^t &= \mathbf{W}_f \mathbf{x}^t + \mathbf{R}_f \mathbf{y}^{t-1} + \mathbf{p}_f \odot \mathbf{c}^{t-1} + \mathbf{b}_f\\
155
+ \mathbf{f}^t &= \sigma(\bar{\mathbf{f}}^t) & \textit{forget gate}\\
156
+ \mathbf{c}^t &= \mathbf{z}^t \odot \mathbf{i}^t + \mathbf{c}^{t-1} \odot \mathbf{f}^t & \textit{cell}\\
157
+ \bar{\mathbf{o}}^t &= \mathbf{W}_o \mathbf{x}^t + \mathbf{R}_o \mathbf{y}^{t-1} + \mathbf{p}_o \odot \mathbf{c}^{t} + \mathbf{b}_o\\
158
+ \mathbf{o}^t &= \sigma(\bar{\mathbf{o}}^t) & \textit{output gate}\\
159
+ \mathbf{y}^t &= h(\mathbf{c}^t) \odot \mathbf{o}^t & \textit{block output}
160
+ \end{align*}
161
+ Here $\sigma$, $g$, and $h$ are point-wise non-linear activation functions.
162
+ The {\it logistic sigmoid} ($\sigma(x) = \frac{1}{1+e^{-x}}$) is used as the gate activation function, and the {\it hyperbolic tangent} ($g(x) = h(x) = \tanh(x)$) is usually used as the block input and output activation function.
163
+ Point-wise multiplication of two vectors is denoted by $\odot$.
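+
+ To make these computations concrete, the following Python/NumPy sketch implements the forward pass above for a single layer; the array names mirror the symbols used in the formulas, and the code is an illustration rather than the implementation used in our experiments.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+
+ def lstm_forward(x_seq, W, R, p, b, N):
+     """Vanilla LSTM forward pass for one layer (sketch).
+     x_seq: (T, M) inputs; W, R, p, b: dicts with keys
+     'z', 'i', 'f', 'o' of shapes (N, M), (N, N), (N,), (N,)."""
+     T = x_seq.shape[0]
+     y = np.zeros(N)   # block output y^{t-1}
+     c = np.zeros(N)   # cell state c^{t-1}
+     outputs = np.zeros((T, N))
+     for t in range(T):
+         x = x_seq[t]
+         z = np.tanh(W['z'] @ x + R['z'] @ y + b['z'])               # block input
+         i = sigmoid(W['i'] @ x + R['i'] @ y + p['i'] * c + b['i'])  # input gate
+         f = sigmoid(W['f'] @ x + R['f'] @ y + p['f'] * c + b['f'])  # forget gate
+         c = z * i + c * f                                           # cell (CEC)
+         o = sigmoid(W['o'] @ x + R['o'] @ y + p['o'] * c + b['o'])  # output gate peeks at c^t
+         y = np.tanh(c) * o                                          # block output
+         outputs[t] = y
+     return outputs
+ \end{verbatim}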
164
+
165
+ \subsection{Backpropagation Through Time}
166
+ The deltas inside the LSTM block are then calculated as:
167
+ \begin{align*}
168
+ \mathbf{\delta y}^t &= \Delta^t + \mathbf{R}_z^T \mathbf{\delta z}^{t+1} +
169
+ \mathbf{R}_i^T \mathbf{\delta i}^{t+1} +
170
+ \mathbf{R}_f^T \mathbf{\delta f}^{t+1} +
171
+ \mathbf{R}_o^T \mathbf{\delta o}^{t+1} \\
172
+ \delta \bar{\mathbf{o}}^t &= \mathbf{\delta y}^t \odot h(\mathbf{c}^t) \odot \sigma'(\bar{\mathbf{o}}^t) \\
173
+ \mathbf{\delta c}^t &= \mathbf{\delta y}^t \odot \mathbf{o}^t \odot h'(\mathbf{c}^t) +
174
+ \mathbf{p}_o \odot \delta \bar{\mathbf{o}}^t +
175
+ \mathbf{p}_i \odot \delta \bar{\mathbf{i}}^{t+1}\\
176
+ &\quad+\mathbf{p}_f \odot \delta \bar{\mathbf{f}}^{t+1} +
177
+ \mathbf{\delta c}^{t+1} \odot \mathbf{f}^{t+1} \\
178
+ \delta \bar{\mathbf{f}}^t &= \mathbf{\delta c}^t \odot \mathbf{c}^{t-1} \odot \sigma'(\bar{\mathbf{f}}^t) \\
179
+ \delta \bar{\mathbf{i}}^t &= \mathbf{\delta c}^t \odot \mathbf{z}^{t} \odot \sigma'(\bar{\mathbf{i}}^t) \\
180
+ \delta \bar{\mathbf{z}}^t &= \mathbf{\delta c}^t \odot \mathbf{i}^{t} \odot g'(\bar{\mathbf{z}}^t) \\
181
+ \end{align*}
182
+
183
+ Here $\Delta^t$ is the vector of deltas passed down from the layer
184
+ above.
185
+ If $E$ is the loss function, $\Delta^t$ formally corresponds to $\frac{\partial E}{\partial\mathbf{y}^{t}}$, but without including the recurrent dependencies.
186
+ The deltas for the inputs are only needed if there is a layer below that needs training, and can be computed as follows:
187
+
188
+ \[
189
+ \mathbf{\delta x}^t = \mathbf{W}_z^T \delta \bar{\mathbf{z}}^t +
190
+ \mathbf{W}_i^T \delta \bar{\mathbf{i}}^t +
191
+ \mathbf{W}_f^T \delta \bar{\mathbf{f}}^t +
192
+ \mathbf{W}_o^T \delta \bar{\mathbf{o}}^t
193
+ \]
194
+
195
+
196
+ Finally, the gradients for the weights are calculated as follows, where
197
+ $\mathbf{\star}$ can be any of $\{\bar{\mathbf{z}}, \bar{\mathbf{i}}, \bar{\mathbf{f}}, \bar{\mathbf{o}}\}$, and $\langle \star_1, \star_2 \rangle$ denotes the outer product of two vectors:
198
+
199
+ \begin{align*}
200
+ \delta\mathbf{W}_\star &= \sum^T_{t=0} \langle \mathbf{\delta\star}^t, \mathbf{x}^t \rangle
201
+ & \delta\mathbf{p}_i &= \sum^{T-1}_{t=0} \mathbf{c}^t \odot \delta \bar{\mathbf{i}}^{t+1}
202
+ \\
203
+ \delta\mathbf{R}_\star &= \sum^{T-1}_{t=0} \langle \mathbf{\delta\star}^{t+1}, \mathbf{y}^t \rangle
204
+ & \delta\mathbf{p}_f &= \sum^{T-1}_{t=0} \mathbf{c}^t \odot \delta \bar{\mathbf{f}}^{t+1}
205
+ \\
206
+ \delta\mathbf{b}_\star &= \sum^{T}_{t=0} \mathbf{\delta\star}^{t}
207
+ & \delta\mathbf{p}_o &= \sum^{T}_{t=0} \mathbf{c}^t \odot \delta \bar{\mathbf{o}}^{t}
208
+ \\
209
+ \end{align*}
210
+
211
+
212
+
213
+
214
+
215
+ \section{History of LSTM}
216
+ \label{sec:history}
217
+ The initial version of the LSTM block \cite{Hochreiter1995,Hochreiter1997} included (possibly multiple) cells, input and output gates, but no forget gate and no peephole connections.
218
+ The output gate, unit biases, or input activation function were omitted for certain experiments.
219
+ Training was done using a mixture of Real Time Recurrent Learning (RTRL) \cite{Robinson1987, Williams1989} and Backpropagation Through Time (BPTT) \cite{Werbos1988, Williams1989}. Only the gradient of the cell was propagated back through time, and the gradient for the other recurrent connections was truncated.
220
+ Thus, that study did not use the exact gradient for training.
221
+ Another feature of that version was the use of \emph{full gate recurrence}, which means that all the gates received recurrent inputs from all gates at the previous time-step in addition to the recurrent inputs from the block outputs.
222
+ This feature did not appear in any of the later papers.
223
+
224
+ \subsection{Forget Gate}
225
+ The first paper to suggest a modification of the LSTM architecture introduced the forget gate \citep{Gers1999}, enabling the LSTM to reset its own state.
226
+ This allowed learning of continual tasks such as embedded Reber grammar.
227
+
228
+ \subsection{Peephole Connections}\label{sec:peephole}
229
+ \citet{Gers2000} argued that in order to learn precise timings, the cell needs to control the gates.
230
+ So far this was only possible through an open output gate. Peephole connections (connections from the cell to the gates, blue in \autoref{fig:lstm}) were added to the architecture in order to make precise timings easier to learn.
231
+ Additionally, the output activation function was omitted, as there was no evidence that it was essential for solving the problems that LSTM had been tested on so far.
232
+
233
+ \subsection{Full Gradient}
234
+ The final modification towards the vanilla LSTM was done by \citet{Graves2005}.
235
+ This study presented the full backpropagation through time (BPTT) training for LSTM networks with the architecture described in \autoref{sec:lstm}, and presented results on the TIMIT \cite{Garofolo1993} benchmark. Using full BPTT had the added advantage that LSTM gradients could be checked using finite differences, making practical implementations more reliable.
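+
+ A minimal sketch of such a finite-difference check is given below (illustrative only); it perturbs a few randomly chosen weights and compares the analytic gradient with a central-difference estimate. The functions \texttt{loss\_fn} and \texttt{grad\_fn} are assumed to be supplied by the implementation under test.
+ \begin{verbatim}
+ import numpy as np
+
+ def check_gradient(loss_fn, grad_fn, w, eps=1e-5, n_checks=10, seed=0):
+     """Compare an analytic gradient with central finite differences.
+     loss_fn(w) -> scalar loss; grad_fn(w) -> array shaped like w."""
+     rng = np.random.default_rng(seed)
+     analytic = grad_fn(w)
+     for _ in range(n_checks):
+         idx = tuple(rng.integers(0, s) for s in w.shape)
+         w_plus, w_minus = w.copy(), w.copy()
+         w_plus[idx] += eps
+         w_minus[idx] -= eps
+         numeric = (loss_fn(w_plus) - loss_fn(w_minus)) / (2 * eps)
+         denom = max(1e-12, abs(numeric) + abs(analytic[idx]))
+         print(idx, analytic[idx], numeric,
+               abs(numeric - analytic[idx]) / denom)  # relative error
+ \end{verbatim}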
236
+
237
+ \subsection{Other Variants}
238
+ Since its introduction the vanilla LSTM has been the most commonly used architecture, but other variants have been suggested too.
239
+ Before the introduction of full BPTT training, \citet{Gers2002} utilized a training method based on Extended Kalman Filtering which enabled the LSTM to be trained on some pathological cases at the cost of high computational complexity.
240
+ \citet{Schmidhuber2007} proposed using a hybrid evolution-based method instead of BPTT for training but retained the vanilla LSTM architecture.
241
+
242
+ \citet{Bayer2009} evolved different LSTM block architectures that maximize fitness on context-sensitive grammars.
243
+ A larger study of this kind was later done by \citet{Jozefowicz2015}.
244
+ \citet{Sak2014} introduced a linear projection layer that projects the output of the LSTM layer down before the recurrent and forward connections in order to reduce the number of parameters for LSTM networks with many blocks.
245
+ By introducing a trainable scaling parameter for the slope of the gate activation functions, \citet{Doetsch2014} were able to improve the performance of LSTM on an offline handwriting recognition dataset.
246
+ In what they call \emph{Dynamic Cortex Memory}, \citet{Otte2014} improved convergence speed of LSTM by adding recurrent connections between the gates of a single block (but not between the blocks).
247
+
248
+ \citet{Cho2014} proposed a simplified variant of the LSTM architecture called \emph{Gated Recurrent Unit} (GRU).
249
+ They used neither peephole connections nor output activation functions, and coupled the input and the forget gate into an \emph{update gate}.
250
+ Finally, their output gate (called \emph{reset gate}) only gates the recurrent connections to the block input (the $\mathbf{R}_z \mathbf{y}^{t-1}$ term). \citet{Chung2014} performed an initial comparison between GRU and vanilla LSTM and reported mixed results.
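+
+ For reference, a Python/NumPy sketch of one GRU step as described above is shown below (the naming is ours; \texttt{u} is the update gate and \texttt{r} the reset gate); it is an illustration of the published equations rather than a definitive implementation.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+
+ def gru_step(x, y_prev, W, R, b):
+     """One GRU step (sketch). W, R, b: dicts with keys
+     'u' (update gate), 'r' (reset gate), 'z' (candidate)."""
+     u = sigmoid(W['u'] @ x + R['u'] @ y_prev + b['u'])
+     r = sigmoid(W['r'] @ x + R['r'] @ y_prev + b['r'])
+     # reset gate only gates the recurrent path into the block input
+     z = np.tanh(W['z'] @ x + R['z'] @ (r * y_prev) + b['z'])
+     return u * y_prev + (1.0 - u) * z   # no output activation function
+ \end{verbatim}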
251
+
252
+
253
+
254
+
255
+ \section{Evaluation Setup}
256
+ The focus of our study is to empirically compare different LSTM variants, and not to achieve state-of-the-art results.
257
+ Therefore, our experiments are designed to keep the setup simple and the comparisons fair.
258
+ The vanilla LSTM is used as a baseline and evaluated together with eight of its variants.
259
+ Each variant adds, removes, or modifies the baseline in exactly one aspect, which allows us to isolate its effect.
260
+ They are evaluated on three different datasets from different domains to account for cross-domain variations.
261
+
262
+ For fair comparison, the setup needs to be similar for each variant.
263
+ Different variants might require different settings of hyperparameters to give good performance, and we are interested in the best performance that can be achieved with each variant.
264
+ For this reason we chose to tune hyperparameters such as the learning rate or the amount of input noise individually for each variant.
265
+ Since the hyperparameter space is too large to traverse exhaustively, random search \cite{Bergstra2012} was used to obtain well-performing hyperparameters for every combination of variant and dataset.
266
+ Random search was also chosen for the added benefit of providing enough data for analyzing the general effect of various hyperparameters on the performance of each LSTM variant (\autoref{sec:hyper-impact}).
267
+
268
+ \subsection{Datasets}
269
+ Each dataset is split into three parts: a training set, a validation set used for early stopping and for optimizing the hyperparameters, and a test set for the final evaluation.
270
+
271
+ \subsubsection*{TIMIT}
272
+ \label{sec:timit}
273
+ The TIMIT Speech corpus \cite{Garofolo1993} is large enough to be a reasonable acoustic modeling benchmark for speech recognition, yet it is small enough to keep a large study such as ours manageable.
274
+ Our experiments focus on the frame-wise classification task for this dataset, where the objective is to classify each audio-frame as one of 61 phones.\footnote{Note that in linguistics a \emph{phone} represents a distinct speech sound independent of the language.
275
+ In contrast, a \emph{phoneme} refers to a sound that distinguishes two words \emph{in a given language} \cite{Crystal2011}.
276
+ These terms are often confused in the machine learning literature.}
277
+ From the raw audio we extract 12 Mel Frequency Cepstral Coefficients (MFCCs) \citep{Mermelstein1976} +
278
+ energy over 25\,ms Hamming windows with a stride of 10\,ms and a pre-emphasis
279
+ coefficient of 0.97. This preprocessing is standard in speech recognition and was chosen in order to stay comparable with earlier LSTM-based results (e.g. \cite{Graves2005, Graves2008a}). The 13 coefficients along with their first and second derivatives comprise the 39 inputs to the network and were normalized to have zero mean and unit variance.
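+
+ A rough sketch of this feature extraction using the \texttt{librosa} library is given below; the original pipeline may have used different tooling, and the per-utterance normalization here is only a simplification (normalization statistics would typically be computed over the training set).
+ \begin{verbatim}
+ import numpy as np
+ import librosa  # assumed available; not necessarily the tool used originally
+
+ def timit_features(signal, sr=16000, pre_emph=0.97):
+     """13 MFCCs over 25 ms Hamming windows with 10 ms stride,
+     plus first and second derivatives -> 39 features per frame (sketch)."""
+     emphasized = np.append(signal[0], signal[1:] - pre_emph * signal[:-1])
+     mfcc = librosa.feature.mfcc(y=emphasized, sr=sr, n_mfcc=13,
+                                 n_fft=int(0.025 * sr),
+                                 hop_length=int(0.010 * sr),
+                                 window='hamming')
+     d1 = librosa.feature.delta(mfcc)
+     d2 = librosa.feature.delta(mfcc, order=2)
+     feats = np.vstack([mfcc, d1, d2]).T             # (frames, 39)
+     return (feats - feats.mean(0)) / feats.std(0)   # zero mean, unit variance
+ \end{verbatim}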
280
+
281
+ The performance is measured as the classification error percentage. Following \citet{Halberstadt1998}, the training, validation, and test sets contain $3696$, $400$, and $192$ sequences respectively, with an average length of $304$ frames.
282
+
283
+
284
+ We restrict our study to the \textit{core test set}, which is an established subset of the full TIMIT corpus, and use the splits into
285
+ training, testing, and validation sets as detailed by \citet{Halberstadt1998}.
286
+ In short, that means we only use the core test set and drop the SA samples\footnote{The dialect sentences (the SA samples) were meant to expose the dialectal variants of the speakers and were read by all 630 speakers. We follow \cite{Halberstadt1998} and remove them because they bias the distribution of phones.} from the training set.
287
+ The validation set is built from some of the discarded samples from the full test set.
288
+
289
+
290
+
291
+
292
+ \subsubsection*{IAM Online}
293
+ \begin{figure}
294
+ \centering
295
+ \subfigure[]{\includegraphics[width=0.5\textwidth]{figures/hwSample}\label{fig:hwsample:strokes}}
296
+ \hfill
297
+ \subfigure[]{
298
+ \raisebox{1.1cm}{
299
+ \begin{minipage}{0.52\textwidth}
300
+ {\scriptsize\tt
301
+ Ben Zoma said: "The days of 1thy\\
302
+ life means in the day-time; all the days\\
303
+ of 1thy life means even at night-time ."\\
304
+ (Berochoth .) And the Rabbis thought\\
305
+ it important that when we read the\\
306
+ }
307
+ \end{minipage}}
308
+ \label{fig:hwsample:labels}}
309
+ \caption{(a) Example board ({\tt a08-551z}, training set) from the IAM-OnDB dataset
310
+ and (b) its transcription into character label
311
+ sequences.}
312
+ \label{fig:hwsample}
313
+ \end{figure}
314
+
315
+
316
+ The IAM Online Handwriting Database \cite{Liwicki2005}\footnote{The IAM-OnDB was obtained from
317
+ \url{http://www.iam.unibe.ch/fki/databases/iam-on-line-handwriting-database}} consists of English sentences as time series of pen movements that have to be mapped to characters.
318
+ The IAM-OnDB dataset is split into one training set, two validation sets, and one test set, containing $775$,
319
+ $192$, $216$, and $544$ {\it boards}, respectively.
320
+ Each board (see \autoref{fig:hwsample:strokes}) contains multiple hand-written lines, which in turn consist of several strokes. We used one line per sequence and joined the two validation sets together, so the final training, validation, and test sets contain $5\,355$, $2\,956$, and $3\,859$ sequences, respectively.
321
+
322
+ Each handwriting line is accompanied with a target character sequence, see \autoref{fig:hwsample:labels}, assembled from the following $81$~ASCII characters:
323
+ \begin{verbatim}
324
+ abcdefghijklmnopqrstuvwxyz
325
+ ABCDEFGHIJKLMNOPQRSTUVWXYZ
326
+ 0123456789␣!"#&\'()*+,-./[]:;?
327
+ \end{verbatim}
328
+ The board labeled {\tt a08-551z} (in the training set) contains a sequence of eleven percent (\%) characters for which no strokes are present, and the percent character does not occur in any other board.
329
+ That board was therefore removed from the experiments.
330
+
331
+ We subsampled each sequence to half its length, which speeds up training without harming performance.
332
+ Each frame of the sequence is a 4-dimensional vector containing $\Delta x$ and $\Delta y$ (the change in pen position), $t$ (the time since the beginning of the stroke), and a fourth dimension that is one at the time step where the pen is lifted (the transition to the next stroke) and zero at all other time steps.
333
+ Possible starts and ends of characters within each stroke are not explicitly marked.
334
+ No additional preprocessing (like base-line straightening, cursive correction, etc.) was used.
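+
+ A sketch of how such frames can be assembled from raw stroke data is shown below; the stroke format and the handling of the first point of each stroke are our assumptions for illustration.
+ \begin{verbatim}
+ import numpy as np
+
+ def line_to_frames(strokes, subsample=2):
+     """strokes: list of (K, 3) arrays with columns (x, y, time).
+     Returns the 4-D frames (dx, dy, t, pen_up), subsampled by two (sketch)."""
+     frames = []
+     for stroke in strokes:
+         t0 = stroke[0, 2]
+         prev_x, prev_y = stroke[0, 0], stroke[0, 1]
+         for k in range(len(stroke)):
+             x, y, t = stroke[k]
+             pen_up = 1.0 if k == len(stroke) - 1 else 0.0  # pen lifted at stroke end
+             frames.append([x - prev_x, y - prev_y, t - t0, pen_up])
+             prev_x, prev_y = x, y
+     return np.asarray(frames)[::subsample]
+ \end{verbatim}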
335
+
336
+ The networks were trained using the Connectionist Temporal Classification (CTC) error function by \citet{Graves2006} with 82 outputs (81 characters plus the special empty label). We measure performance in terms of the Character Error Rate (CER) after decoding using best-path decoding \citep{Graves2006}.
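+
+ Best-path decoding itself is simple enough to sketch: take the most probable label at every frame, collapse consecutive repetitions, and drop the empty (blank) label. The blank index below is an assumption for illustration.
+ \begin{verbatim}
+ import numpy as np
+
+ def best_path_decode(probs, blank=81):
+     """probs: (T, 82) per-frame output probabilities; returns the label
+     sequence after collapsing repeats and removing blanks (sketch)."""
+     path = np.argmax(probs, axis=1)
+     decoded, prev = [], None
+     for label in path:
+         if label != prev and label != blank:
+             decoded.append(int(label))
+         prev = label
+     return decoded
+ \end{verbatim}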
337
+
338
+
339
+ \subsubsection*{JSB Chorales}
340
+ JSB Chorales is a collection of 382 four-part harmonized chorales by J. S. Bach \citep{Allan2005}, consisting of 202 chorales in major keys and 180 chorales in minor keys.
341
+ We used the preprocessed piano-rolls provided by \citet{Boulanger-Lewandowski2012}.\footnote{Available at \url{http://www-etud.iro.umontreal.ca/~boulanni/icml2012} at the time of writing.}
342
+ These piano-rolls were generated by transposing each MIDI sequence to C major or C minor and sampling frames every quarter note. The networks were trained to do next-step prediction by minimizing the negative log-likelihood. The complete dataset consists of $229$, $76$, and $77$ sequences (training, validation, and test sets respectively) with an average length of $61$.
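+
+ The training criterion can be sketched as the frame-wise negative log-likelihood of the binary piano-roll under independent sigmoid outputs; how the values are aggregated (summed or averaged) is an implementation choice not specified here.
+ \begin{verbatim}
+ import numpy as np
+
+ def piano_roll_nll(pred, target, eps=1e-8):
+     """pred, target: (T, n_notes) arrays, where pred[t] is the network's
+     sigmoid output predicting the notes active at step t+1 (sketch)."""
+     pred = np.clip(pred, eps, 1.0 - eps)
+     return -np.sum(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
+ \end{verbatim}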
343
+
344
+
345
+ \subsection{Network Architectures \& Training}\label{sec:arch-training}
346
+ A network with a single LSTM hidden layer and a sigmoid output layer was used for the JSB Chorales task.
347
+ Bidirectional LSTM~\cite{Graves2005} was used for TIMIT and IAM Online tasks, consisting of two hidden layers, one processing the input forwards and the other one backwards in time, both connected to a single softmax output layer.
348
+ As the loss function we employed the cross-entropy error for TIMIT and JSB Chorales, while for the IAM Online task the Connectionist Temporal Classification (CTC) loss of \citet{Graves2006} was used.
349
+ The initial weights for all networks were drawn from a normal distribution with standard deviation of $0.1$.
350
+ Training was done using Stochastic Gradient Descent with Nesterov-style momentum \cite{Sutskever2013} with updates after each sequence. The learning rate was rescaled by a factor of $(1-\text{momentum})$. Gradients were computed using full BPTT for LSTMs \cite{Graves2005}.
351
+ Training stopped after 150 epochs or once there was no improvement on the validation set for more than fifteen epochs.
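+
+ A sketch of one such update step is given below; it follows the Nesterov-style formulation of \citet{Sutskever2013} with the $(1-\text{momentum})$ rescaling mentioned above, and \texttt{grad\_fn} is a placeholder for the BPTT gradient computation.
+ \begin{verbatim}
+ def nesterov_sgd_step(w, v, grad_fn, lr, momentum):
+     """One Nesterov-momentum update with the learning rate rescaled
+     by (1 - momentum); a sketch, not the exact training code."""
+     eff_lr = lr * (1.0 - momentum)
+     v = momentum * v - eff_lr * grad_fn(w + momentum * v)  # look-ahead gradient
+     return w + v, v
+ \end{verbatim}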
352
+
353
+ \subsection{LSTM Variants}
354
+ \label{sec:lstm-variants}
355
+ The vanilla LSTM from \autoref{sec:lstm} is referred to as Vanilla~(V).
356
+ For the activation functions we follow the standard choice and use the logistic sigmoid for $\sigma$ and the hyperbolic tangent for both $g$ and $h$.
357
+ The eight variants derived from the V architecture are listed below. We only report their differences with respect to the forward pass formulas presented in \autoref{lstm_forward}:
358
+
359
+ \begin{description}[align=right, labelwidth=1.5cm]
360
+ \item [NIG:] No Input Gate: \( \mathbf{i}^t = \mathbf{1}\)
361
+ \item [NFG:] No Forget Gate: \( \mathbf{f}^t = \mathbf{1}\)
362
+ \item [NOG:] No Output Gate: \( \mathbf{o}^t = \mathbf{1}\)
363
+ \item [NIAF:] No Input Activation Function: \(g(\mathbf{x}) = \mathbf{x}\)
364
+ \item [NOAF:] No Output Activation Function: \(h(\mathbf{x}) = \mathbf{x}\)
365
+ \item [CIFG:] Coupled Input and Forget Gate:
366
+ \( \mathbf{f}^t = \mathbf{1} - \mathbf{i}^t\)
367
+
368
+ \item [NP:] No Peepholes:
369
+ \begin{align*}
370
+ \bar{\mathbf{i}}^{t} &=\mathbf{W}_{i}\mathbf{x}^{t}+\mathbf{R}_{i}\mathbf{y}^{t-1} + \mathbf{b}_i \\
371
+ \bar{\mathbf{f}}^{t} &=\mathbf{W}_{f}\mathbf{x}^{t}+\mathbf{R}_{f}\mathbf{y}^{t-1} + \mathbf{b}_f \\
372
+ \bar{\mathbf{o}}^{t} &=\mathbf{W}_{o}\mathbf{x}^{t}+\mathbf{R}_{o}\mathbf{y}^{t-1} + \mathbf{b}_o
373
+ \end{align*}
374
+
375
+ \item [FGR:] Full Gate Recurrence:
376
+ \begin{align*}
377
+ \bar{\mathbf{i}}^{t} &=\mathbf{W}_{i}\mathbf{x}^{t}+\mathbf{R}_{i}\mathbf{y}^{t-1}+\mathbf{p}_{i}\odot\mathbf{c}^{t-1} + \mathbf{b}_i\\
378
+ &\quad+ \mathbf{R}_{ii}\mathbf{i}^{t-1}+\mathbf{R}_{fi}\mathbf{f}^{t-1}+\mathbf{R}_{oi}\mathbf{o}^{t-1} \\
379
+ \bar{\mathbf{f}}^{t} &=\mathbf{W}_{f}\mathbf{x}^{t}+\mathbf{R}_{f}\mathbf{y}^{t-1}+\mathbf{p}_{f}\odot\mathbf{c}^{t-1} + \mathbf{b}_f \\
380
+ &\quad+ \mathbf{R}_{if}\mathbf{i}^{t-1}+\mathbf{R}_{ff}\mathbf{f}^{t-1}+\mathbf{R}_{of}\mathbf{o}^{t-1} \\
381
+ \bar{\mathbf{o}}^{t} &=\mathbf{W}_{o}\mathbf{x}^{t}+\mathbf{R}_{o}\mathbf{y}^{t-1}+\mathbf{p}_{o}\odot\mathbf{c}^{t-1} + \mathbf{b}_o \\
382
+ &\quad+ \mathbf{R}_{io}\mathbf{i}^{t-1}+\mathbf{R}_{fo}\mathbf{f}^{t-1}+\mathbf{R}_{oo}\mathbf{o}^{t-1}
383
+ \end{align*}
384
+ \end{description}
385
+
386
+ The first six variants are self-explanatory. The CIFG variant uses only one gate for gating both the input and the cell recurrent self-connection -- a coupling also used in Gated Recurrent Units (GRU) \cite{Cho2014}.
387
+ This is equivalent to setting ${\mathbf{f}^t = \mathbf{1} - \mathbf{i}^t}$ instead of learning the forget gate weights independently.
388
+ The FGR variant adds recurrent connections between all the gates as in the original formulation of the LSTM \cite{Hochreiter1997}.
389
+ It adds nine additional recurrent weight matrices, thus significantly increasing the number of parameters.
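+
+ To illustrate how lightly some of these variants touch the forward pass, the sketch below modifies the gate computations from \autoref{lstm_forward} with two optional switches for CIFG and NP; the flag names are ours and purely illustrative.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+
+ def lstm_step(x, y_prev, c_prev, W, R, p, b, cifg=False, no_peep=False):
+     """One LSTM step with optional CIFG and NP modifications (sketch)."""
+     peep = (lambda k, c: 0.0) if no_peep else (lambda k, c: p[k] * c)
+     z = np.tanh(W['z'] @ x + R['z'] @ y_prev + b['z'])
+     i = sigmoid(W['i'] @ x + R['i'] @ y_prev + peep('i', c_prev) + b['i'])
+     if cifg:
+         f = 1.0 - i                    # coupled input and forget gate
+     else:
+         f = sigmoid(W['f'] @ x + R['f'] @ y_prev + peep('f', c_prev) + b['f'])
+     c = z * i + c_prev * f
+     o = sigmoid(W['o'] @ x + R['o'] @ y_prev + peep('o', c) + b['o'])
+     return np.tanh(c) * o, c
+ \end{verbatim}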
390
+
391
+ \begin{figure*}
392
+ \centering
393
+ \includegraphics[width=0.95\textwidth]{figures/variants_full_all}
394
+ \includegraphics[width=0.95\textwidth]{figures/variants_top20_all_horiz}
395
+
396
+ \caption{\emph{Test set} performance for all 200 trials (top) and for the best 10\% (bottom) trials (according to the \emph{validation set}) for each dataset and variant. Boxes show the range between the \nth{25} and the \nth{75} percentile of the data, while the whiskers indicate the whole range. The red dot represents the mean and the red line the median of the data. The boxes of variants that differ significantly from the vanilla LSTM are shown in blue with thick lines. The grey histogram in the background presents the average number of parameters for the top 10\% performers of every variant.}
397
+ \label{fig:top20}
398
+ \end{figure*}
399
+
400
+ \subsection{Hyperparameter Search}
401
+ \label{sec:hyperparam_search}
402
+ While there are other methods to efficiently search for good hyperparameters (cf.~\cite{Snoek2012, Hutter2011}), random search has several advantages for our setting:
403
+ it is easy to implement, trivial to parallelize, and covers the search space more uniformly, thereby improving the follow-up analysis of hyperparameter importance.
404
+
405
+ We performed $27$ random searches (one for each combination of the nine variants and three datasets).
406
+ Each random search encompasses $200$ trials for a total of $5400$ trials of randomly sampling the following hyperparameters:\\[-1em]
407
+ \begin{itemize}[leftmargin=*]
408
+ \noskip
409
+ \item number of LSTM blocks per hidden layer:
410
+ log-uniform samples from $[20, 200]$;
411
+ \item learning rate:
412
+ log-uniform samples from $[10^{-6}, 10^{-2}]$;
413
+ \item momentum:
414
+ $1 - \text{log-uniform samples from $[0.01, 1.0]$}$;
415
+ \item standard deviation of Gaussian input noise:
416
+ uniform samples from $[0, 1]$.
417
+ \end{itemize}
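+
+ The sampling procedure just listed can be sketched in a few lines; the helper below is illustrative and simply draws one configuration per trial.
+ \begin{verbatim}
+ import numpy as np
+
+ def sample_configuration(rng):
+     """Draw one hyperparameter setting from the ranges listed above (sketch)."""
+     def log_uniform(low, high):
+         return float(np.exp(rng.uniform(np.log(low), np.log(high))))
+     return {
+         'n_blocks': int(round(log_uniform(20, 200))),
+         'learning_rate': log_uniform(1e-6, 1e-2),
+         'momentum': 1.0 - log_uniform(0.01, 1.0),
+         'input_noise_std': float(rng.uniform(0.0, 1.0)),
+     }
+
+ rng = np.random.default_rng(0)
+ trials = [sample_configuration(rng) for _ in range(200)]  # one random search
+ \end{verbatim}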
418
+
419
+ In the case of the TIMIT dataset, two additional (boolean) hyperparameters were considered (not tuned for the other two datasets).
420
+ The first one was the choice between traditional momentum and Nesterov-style momentum \citep{Sutskever2013}. Our analysis showed that this had no measurable effect on performance so the latter was arbitrarily chosen for all further experiments.
421
+ The second one was whether to clip the gradients to the range $[-1, 1]$. This turned out to hurt overall performance,\footnote{Although this may very well be the result of the range having been chosen too tightly.} so the gradients were never clipped for the other two datasets.
422
+
423
+ Note that, unlike an earlier small-scale study \citep{Chung2014}, the number of parameters was not kept fixed for all variants. Since different variants can utilize their parameters differently, fixing this number can bias comparisons.
424
+
425
+
426
+
427
+ \section{Results \& Discussion}
428
+ \label{sec:results}
429
+
430
+ Each of the $5400$ experiments was run on one of 128 AMD Opteron CPUs at 2.5\,GHz and took 24.3\,h on average to complete.
431
+ This sums up to a total single-CPU computation time of just below $15$ years.
432
+
433
+
434
+ For TIMIT, the test set performance of the best trial was \textbf{29.6\%} classification error (CIFG), which is close to the best previously reported LSTM result of 26.9\% \cite{Graves2005}.
435
+ Our best result of \textbf{-8.38} log-likelihood (NIAF) on the JSB Chorales dataset, on the other hand, falls short of the -5.56 reported by \citet{Boulanger-Lewandowski2012}.
437
+ For the IAM Online dataset our best result was a Character Error Rate of \textbf{9.26\%} (NP) on the test set. The best previously published result is 11.5\% CER by \citet{Graves2008} using a different and much more extensive preprocessing.\footnote{Note that these numbers differ from the best test set performances that can be found in \autoref{fig:top20}. This is the case because here we only report the single best performing trial as determined on the validation set. In \autoref{fig:top20}, on the other hand, we show the test set performance of the 20 best trials for each variant.}
438
+ Note, though, that the goal of this study is not to provide state-of-the-art results, but to provide a fair comparison of different LSTM variants.
439
+ These numbers are therefore only meant as a rough orientation for the reader.
440
+
441
+
442
+ \subsection{Comparison of the Variants}
443
+ \label{sec:variant-comparison}
444
+
445
+ A summary of the random search results is shown in \autoref{fig:top20}.
446
+ Welch's $t$-test at a significance level of $p=0.05$ was used\footnote{We applied the \emph{Bonferroni adjustment} to correct for performing eight different tests (one for each variant).} to determine whether the mean test set performance of each variant was significantly different from that of the baseline.
447
+ The box for a variant is highlighted in blue if its mean performance differs significantly from the mean performance of the vanilla LSTM.
448
+
449
+
450
+ The results in the top half of \autoref{fig:top20} represent the distribution of all 200 test set performances over the whole search space.
451
+ Any conclusions drawn from them are therefore specific to our choice of search ranges.
452
+ We have tried to choose reasonable ranges for the hyperparameters that include the best settings for each variant and are still small enough to allow for an effective search.
453
+ The means and variances tend to be rather similar for the different variants and datasets, but even here some significant differences can be found.
454
+
455
+ In order to draw some more interesting conclusions we restrict our further analysis to the top 10\% performing trials for each combination of dataset and variant (see bottom half of \autoref{fig:top20}).
456
+ This way our findings will be less dependent on the chosen search space and will be representative for the case of ``reasonable hyperparameter tuning efforts.''\footnote{How much effort is ``reasonable'' will still depend on the search space. If the ranges are chosen much larger, the search will take much longer to find good hyperparameters.}
457
+
458
+
459
+ The first important observation based on \autoref{fig:top20} is that removing the output activation function (NOAF) or the forget gate (NFG) significantly hurt performance on all three datasets.
460
+ Apart from the CEC, the ability to forget old information and the squashing of the cell state appear to be critical for the LSTM architecture.
461
+ Indeed, without the output activation function, the block output can in principle grow unbounded.
462
+ Coupling the input and the forget gate avoids this problem and might render the use of an output non-linearity less important, which could explain why GRU performs well without it.
463
+
464
+ Input and forget gate coupling (CIFG) did not significantly change mean performance on any of the datasets, although the best performance improved slightly on music modeling. Similarly, removing peephole connections (NP) also did not lead to significant changes, but the best performance improved slightly for handwriting recognition.
465
+ Both of these variants simplify LSTMs and reduce the computational complexity, so it might be worthwhile to incorporate these changes into the architecture.
466
+
467
+ Adding full gate recurrence (FGR) did not significantly change performance on TIMIT or IAM Online, but led to worse results on the JSB Chorales dataset.
468
+ Given that this variant greatly increases the number of parameters, we generally advise against using it.
469
+ Note that this feature was present in the original proposal of LSTM \cite{Hochreiter1995, Hochreiter1997}, but has been absent in all following studies.
470
+
471
+ Removing the input gate (NIG), the output gate (NOG), and the input activation function (NIAF) led to a significant reduction in performance on speech and handwriting recognition.
472
+ However, there was no significant effect on music modeling performance.
473
+ A small (but statistically insignificant) average performance improvement was observed for the NIG and NIAF architectures on music modeling.
474
+ We hypothesize that these behaviors will generalize to similar problems such as language modeling.
475
+ For supervised learning on continuous real-valued data (such as speech and handwriting recognition), the input gate, output gate, and input activation function are all crucial for obtaining good performance.
476
+
477
+ \begin{figure*}[t]
478
+ \centering
479
+ \includegraphics[width=0.97\textwidth]{figures/hyper_all}
480
+ \caption{Predicted marginal error (blue) and marginal training time (green) for different values of the \emph{learning rate}, \emph{hidden size}, and the \emph{input noise} (columns) for the test set of all three datasets (rows).
481
+ The shaded area indicates the standard deviation between the tree-predicted marginals and thus the reliability of the predicted mean performance.
482
+ Note that each plot is for the vanilla LSTM but curves for all variants that are not significantly worse look very similar.}
483
+ \label{fig:learning_rate}
484
+ \label{fig:hidden_size}
485
+ \label{fig:input_noise}
486
+ \end{figure*}
487
+
488
+ \subsection{Impact of Hyperparameters}
489
+ \label{sec:hyper-impact}
490
+ The fANOVA framework for assessing hyperparameter importance by \citet{Hutter2014} is based on the observation that marginalizing over dimensions can be done efficiently in regression trees.
491
+ This allows predicting the marginal error for one hyperparameter while averaging over all the others.
492
+ Traditionally this would require a full hyperparameter grid search, whereas here the hyperparameter space can be sampled at random.
493
+
494
+ Average performance for any slice of the hyperparameter space is obtained by first training a regression tree and then summing over its predictions along the corresponding subset of dimensions.
495
+ To be precise, a random regression \emph{forest} of $100$ trees is trained and their predictions are averaged.
496
+ This improves the generalization and allows for an estimation of uncertainty of those predictions.
497
+ The obtained marginals can then be used to decompose the variance into additive components using the functional ANalysis Of VAriance (fANOVA) method \cite{Hooker2007} which provides an insight into the overall importance of hyperparameters and their interactions.
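+
+ The marginalization step can be approximated with off-the-shelf tools; the sketch below fixes one hyperparameter to a grid of values and averages forest predictions over the observed values of the others. This Monte-Carlo version only illustrates the idea; fANOVA itself marginalizes analytically inside the trees.
+ \begin{verbatim}
+ import numpy as np
+ from sklearn.ensemble import RandomForestRegressor
+
+ def marginal_curve(X, y, dim, grid, n_trees=100):
+     """Approximate marginal predicted error of hyperparameter `dim`
+     (column index of X) over a grid of values (sketch)."""
+     forest = RandomForestRegressor(n_estimators=n_trees).fit(X, y)
+     means, stds = [], []
+     for value in grid:
+         X_fixed = X.copy()
+         X_fixed[:, dim] = value
+         per_tree = np.array([t.predict(X_fixed).mean()
+                              for t in forest.estimators_])
+         means.append(per_tree.mean())
+         stds.append(per_tree.std())   # spread over trees (shaded bands)
+     return np.array(means), np.array(stds)
+ \end{verbatim}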
498
+
499
+ \subsubsection*{Learning rate}
500
+ The learning rate is the most important hyperparameter; it is therefore crucial to understand how to set it correctly in order to achieve good performance.
501
+ \autoref{fig:learning_rate} shows (in blue) how setting the learning rate value affects the predicted average performance on the test set.
502
+ It is important to note that this is an average over all other hyperparameters and over all the trees in the regression forest.
503
+ The shaded area around the curve indicates the standard deviation over tree predictions (not over other hyperparameters), thus quantifying the reliability of the average.
504
+ The same is shown in green with the predicted average training time.
505
+
506
+ The plots in \autoref{fig:learning_rate} show that the optimal value for the learning rate is dependent on the dataset.
507
+ For each dataset, there is a large basin (up to two orders of magnitude) of good learning rates inside of which the performance does not vary much.
508
+ A related but unsurprising observation is that there is a sweet-spot for the learning rate at the high end of the basin.\footnote{Note that it is unfortunately outside the investigated range for IAM Online and JSB Chorales. This means that ideally we should have chosen the range of learning rates to include higher values as well.}
509
+ In this region, the performance is good and the training time is small.
510
+ So while searching for a good learning rate for the LSTM, it is sufficient to do a coarse search by starting with a high value (e.g. $1.0$) and dividing it by ten until performance stops improving.
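+
+ This coarse search amounts to a short loop; the sketch below uses a hypothetical \texttt{train\_and\_validate} helper that returns a validation error (lower is better).
+ \begin{verbatim}
+ def coarse_lr_search(train_and_validate, start=1.0, factor=10.0, max_steps=7):
+     """Start with a high learning rate and divide by ten until the
+     validation performance stops improving (sketch)."""
+     best_lr, best_err = None, float('inf')
+     lr = start
+     for _ in range(max_steps):
+         err = train_and_validate(lr)
+         if err >= best_err:
+             break                      # no further improvement
+         best_lr, best_err = lr, err
+         lr /= factor
+     return best_lr
+ \end{verbatim}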
511
+
512
+ \autoref{fig:variances} also shows that the fraction of variance caused by the learning rate is much bigger than the fraction due to the interaction between learning rate and hidden layer size (part of the ``higher order'' slice; see \textit{Interaction of Hyperparameters} below).
513
+ This suggests that the learning rate can be quickly tuned on a small network and then used to train a large one.
514
+
515
+
516
+ \subsubsection*{Hidden Layer Size}
517
+ Unsurprisingly, the hidden layer size is an important hyperparameter affecting LSTM network performance.
518
+ As expected, larger networks perform better, but with diminishing returns.
519
+ It can also be seen in \autoref{fig:hidden_size} (middle, green) that the required training time increases with the network size.
520
+ Note that the scale here is \emph{wall-time} and thus factors in both the increased computation time per epoch and the convergence speed.
521
+
522
+
523
+ \subsubsection*{Input Noise}
524
+ Additive Gaussian noise on the inputs, a traditional regularizer for neural networks, has been used for LSTM as well.
525
+ However, we find that not only does it almost always hurt performance, it also slightly increases training times.
526
+ The only exception is TIMIT, where a small dip in error for the range of $[0.2, 0.5]$ is observed.
527
+
528
+ \subsubsection*{Momentum}
529
+ One unexpected result of this study is that momentum affects neither performance nor training time in any significant way.
530
+ This follows from the observation that for none of the datasets, momentum accounted for more than 1\% of the variance of test set performance.
531
+ It should be noted that for TIMIT the interaction between learning rate and momentum accounts for 2.5\% of the total variance, but as with learning rate $\times$ hidden size (cf. \textit{Interaction of Hyperparameters} below) it does not reveal any interpretable structure.
532
+ This may be the result of our choice to scale learning rates dependent on momentum (\autoref{sec:arch-training}).
533
+ These observations suggest that momentum does not offer substantial benefits when training LSTMs with online stochastic gradient descent.
534
+
535
+
536
+ \subsubsection*{Analysis of Variance}
537
+ \begin{figure}[t]
538
+ \centering
539
+ \includegraphics[width=0.80\columnwidth]{figures/variances_all_box}
540
+ \caption{Pie charts showing which fraction of variance of the test set performance can be attributed to each of the hyperparameters. The percentage of variance that is due to interactions between multiple parameters is indicated as ``higher order.''}
541
+ \label{fig:variances}
542
+
543
+ \end{figure}
544
+
545
+ \autoref{fig:variances} shows what fraction of the test set performance variance can be attributed to different hyperparameters. It is obvious that the learning rate is by far the most important hyperparameter, always accounting for more than two thirds of the variance. The next most important hyperparameter is the hidden layer size, followed by the input noise, leaving the momentum with less than one percent of the variance. Higher order interactions play an important role in the case of TIMIT, but are much less important for the other two data sets.
546
+
547
+ \subsubsection*{Interaction of Hyperparameters}
548
+ Some hyperparameters interact with each other, resulting in performance that differs from what would be expected by looking at each of them individually.
549
+ As shown in \autoref{fig:variances} all these interactions together explain between 5\% and 20\% of the variance in test set performance.
550
+ Understanding these interactions might allow us to speed up the search for good combinations of hyperparameters.
551
+ To that end we visualize the interaction between all pairs of hyperparameters in \autoref{fig:hyper}.
552
+ Each heat map in the left part shows marginal performance for different values of the respective two hyperparameters.
553
+ This is the average performance predicted by the decision forest when marginalizing over all other hyperparameters.
554
+ Each one is thus the 2D version of the performance plots from \autoref{fig:learning_rate}.
555
+
556
+ The right side employs the idea of ANOVA to better illustrate the \emph{interaction} between the hyperparameters.
557
+ This means that variance of performance that can be explained by varying a single hyperparameter has been removed.
558
+ In case two hyperparameters do not interact at all (are perfectly independent), that residual would thus be all zero (grey).
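+
+ Computing this residual from a two-dimensional marginal grid is a small exercise in the additive ANOVA decomposition; the sketch below removes the grand mean and both one-dimensional (main) effects, leaving only the interaction term.
+ \begin{verbatim}
+ import numpy as np
+
+ def interaction_residual(marginal_2d):
+     """Subtract the grand mean and both main effects from a 2-D marginal
+     performance grid; an all-zero result means no interaction (sketch)."""
+     grand = marginal_2d.mean()
+     row = marginal_2d.mean(axis=1, keepdims=True) - grand
+     col = marginal_2d.mean(axis=0, keepdims=True) - grand
+     return marginal_2d - grand - row - col
+ \end{verbatim}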
559
+
560
+ For example, looking at the pair \emph{hidden size} and \emph{learning rate} on the left side for the TIMIT dataset, we can see that performance varies strongly along the $x$-axis (learning rate), first decreasing and then increasing again.
561
+ This is what we would expect knowing the valley-shape of the learning rate from \autoref{fig:learning_rate}.
562
+ Along the $y$-axis (hidden size) performance seems to decrease slightly from top to bottom.
563
+ Again this is roughly what we would expect from the hidden size plot in \autoref{fig:hidden_size}.
564
+
565
+ On the right side of \autoref{fig:hyper} we can see for the same pair of hyperparameters how their interaction differs from the case of them being completely independent. This heat map exhibits less structure, and it may in fact be the case that we would need more samples to properly analyze the interplay between them. However, given our observations so far this might not be worth the effort. In any case, it is clear from the plot on the left that varying the hidden size does not change the region of optimal learning rate.
566
+
567
+ One clear interaction pattern can be observed in the IAM~Online and JSB datasets between learning rate and input noise.
568
+ Here it can be seen that for high learning rates ($\gtrapprox10^{-4}$) lower input noise ($\lessapprox.5$) is better, as also observed in the marginals from \autoref{fig:input_noise}.
569
+ But this trend reverses for lower learning rates, where higher values of input noise are beneficial.
570
+ Though interesting, this is of little practical relevance because performance is generally poor in that region of low learning rates.
571
+ Apart from this, however, it is difficult to discern any regularities in the analyzed hyperparameter interactions.
572
+ We conclude that there is little practical value in attending to the interplay between hyperparameters.
573
+ So for practical purposes hyperparameters can be treated as approximately independent and thus optimized separately.
574
+
575
+
576
+
577
+
578
+
579
+ \begin{figure*}
580
+ \centering
581
+ \subfigure[]{
582
+ \includegraphics[width=0.38\textwidth]{figures/interaction_all_TIMIT}
583
+ \includegraphics[width=0.38\textwidth]{figures/interaction_diff_all_TIMIT}
584
+ }
585
+ \subfigure[]{
586
+ \includegraphics[width=0.38\textwidth]{figures/interaction_all_IAMOn}
587
+ \includegraphics[width=0.38\textwidth]{figures/interaction_diff_all_IAMOn}
588
+ }
589
+ \subfigure[]{
590
+ \includegraphics[width=0.38\textwidth]{figures/interaction_all_JSB}
591
+ \includegraphics[width=0.38\textwidth]{figures/interaction_diff_all_JSB}
592
+ }
593
+ \caption{Total marginal predicted performance for all pairs of hyperparameters (left) and the variation only due to their interaction (right).
594
+ The plot is divided vertically into three subplots, one for every dataset (TIMIT, IAM Online, and JSB Chorales).
595
+ Each subplot is in turn divided horizontally into two parts, each containing a lower triangular matrix of heat maps.
596
+ The rows and columns of these matrices represent the different hyperparameters (learning rate, momentum, hidden size, and input noise) and there is one heat map for every combination.
597
+ The color encodes the performance as measured by the \emph{Classification Error} for TIMIT,
598
+ \emph{Character Error Rate} for IAM Online and \emph{Negative Log-Likelihood} for the JSB Chorales Dataset.
599
+ For all datasets low (blue) is better than high (red).
600
+ }
601
+ \label{fig:hyper}
602
+ \end{figure*}
603
+
604
+
605
+
606
+
607
+
608
+
609
+ \section{Conclusion}
610
+ This paper reports the results of a large scale study on variants of the LSTM architecture. We conclude that the most commonly used LSTM architecture (vanilla LSTM) performs reasonably well on various datasets.
611
+ None of the eight investigated modifications significantly improves performance.
612
+ However, certain modifications such as coupling the input and forget gates (CIFG) or removing peephole connections (NP) simplified LSTMs in our experiments without significantly decreasing performance.
613
+ These two variants are also attractive because they reduce the number of parameters and the computational cost of the LSTM.
614
+
615
+ The forget gate and the output activation function are the most critical components of the LSTM block.
616
+ Removing any of them significantly impairs performance.
617
+ We hypothesize that the output activation function is needed to prevent the unbounded cell state from propagating through the network and destabilizing learning.
618
+ This would explain why the LSTM variant GRU can perform reasonably well without it: its cell state is bounded because of the coupling of input and forget gate.
619
+
620
+ As expected, the learning rate is the most crucial hyperparameter, followed by the network size.
621
+ Surprisingly though, the use of momentum was found to be unimportant in our setting of online gradient descent.
622
+ Gaussian noise on the inputs was found to be moderately helpful for TIMIT, but harmful for the other datasets.
623
+
624
+ The analysis of hyperparameter interactions revealed no apparent structure.
625
+ Furthermore, even the highest measured interaction (between learning rate and network size) is quite small.
626
+ This implies that for practical purposes the hyperparameters can be treated as approximately independent.
627
+ In particular, the learning rate can be tuned first using a fairly small network, thus saving a lot of experimentation time.
628
+
629
+
630
+ Neural networks can be tricky to use for many practitioners compared to other methods whose properties are already well understood.
631
+ This has remained a hurdle for newcomers to the field since a lot of practical choices are based on the intuitions of experts, as well as experiences gained over time. With this study, we have attempted to back some of these intuitions with experimental results.
632
+ We have also presented new insights, both on architecture selection and hyperparameter tuning for LSTM networks which have emerged as the method of choice for solving complex sequence learning problems. In future work, we plan to explore more complex modifications of the LSTM architecture.
633
+
634
+
635
+
636
+
637
+ \bibliography{lstm_study}
638
+ \bibliographystyle{unsrtnat}
639
+
640
+
641
+
642
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/klaus_greff}}]{Klaus Greff}
643
+ received his Diploma in Computer Science from the University
644
+ of Kaiserslautern, Germany in 2011. Currently he is pursuing his PhD at
645
+ IDSIA in Lugano, Switzerland, under the supervision of Prof. J\"urgen
646
+ Schmidhuber in the field of Machine Learning. His research interests
647
+ include Sequence Learning and Recurrent Neural Networks.
648
+ \end{IEEEbiography}
649
+
650
+
651
+
652
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/rupesh_srivastava}}]{Rupesh Srivastava}
653
+ is a PhD student at IDSIA \& USI in Switzerland, supervised by Prof. Jürgen Schmidhuber.
654
+ He currently works on understanding and improving neural network architectures.
655
+ In particular, he has worked on understanding the beneficial properties of local competition in neural networks, and new architectures which allow gradient-based training of extremely deep networks.
656
+ In the past, Rupesh worked on reliability based design optimization using evolutionary algorithms at the Kanpur Genetic Algorithms Laboratory, supervised by Prof. Kalyanmoy Deb for his Masters thesis.
657
+ \end{IEEEbiography}
658
+
659
+
660
+
661
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/jan_koutnik}}]{Jan Koutn\'ik}
662
+ received his Ph.D. in computer science from the Czech Technical University in Prague in 2008. He works as machine learning researcher at The Swiss AI Lab IDSIA. His research is mainly focused on artificial neural networks, recurrent neural networks, evolutionary algorithms and deep-learning applied to reinforcement learning, control problems, image classification, handwriting and speech recognition.
663
+ \end{IEEEbiography}
664
+
665
+
666
+
667
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1in,clip,keepaspectratio]{figures/bas_steunebrink}}]{Bas R. Steunebrink}
668
+ is a postdoctoral researcher at the Swiss AI lab IDSIA. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas's interests and expertise include Artificial General Intelligence (AGI), cognitive robotics, machine learning, resource-constrained control, and affective computing.
669
+ \end{IEEEbiography}
670
+
671
+
672
+
673
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/juergen_schmidhuber}}]{J\"urgen Schmidhuber}
674
+ is Professor of Artificial Intelligence (AI) at USI in Switzerland.
675
+ He has pioneered self-improving general problem solvers since 1987, and Deep Learning Neural Networks (NNs) since 1991.
676
+ The recurrent NNs (RNNs) developed by his research groups at the Swiss AI Lab IDSIA \& USI \& SUPSI \& TU Munich were the first RNNs to win official international contests.
677
+ They have helped to revolutionize connected handwriting recognition, speech recognition, machine translation, optical character recognition, image caption generation, and are now in use at Google, Apple, Microsoft, IBM, Baidu, and many other companies.
678
+ IDSIA's Deep Learners were also the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning \& pattern recognition (more than any other team).
679
+ They also were the first to learn control policies directly from high-dimensional sensory input using reinforcement learning.
680
+ His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers.
681
+ His formal theory of creativity \& curiosity \& fun explains art, science, music, and humor.
682
+ He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art.
683
+ Since 2009 he has been member of the European Academy of Sciences and Arts.
684
+ He has published 333 peer-reviewed papers, earned seven best paper/best video awards, and is recipient of the 2013 Helmholtz Award of the International Neural Networks Society and the 2016 IEEE Neural Networks Pioneer Award.
685
+ \end{IEEEbiography}
686
+
687
+
688
+ \end{document}
papers/1503/1503.08677.tex ADDED
@@ -0,0 +1,830 @@
1
+ \documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
2
+ \usepackage{xspace}
3
+ \usepackage{times}
4
+ \usepackage{epsfig}
5
+ \usepackage{graphicx}
6
+ \usepackage{amsmath}
7
+ \usepackage{amssymb}
8
+ \usepackage{dsfont}
9
+ \usepackage{multirow}
10
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
11
+ \usepackage{subfigure}
12
+ \usepackage{lscape}
13
+ \usepackage{multirow}
14
+ \usepackage{algpseudocode}
15
+ \usepackage{algorithm}
16
+
17
+ \newcommand{\mynote}[2]{{\bf {#1}:~{#2}}}
18
+ \newcommand{\mat}{\boldsymbol}
19
+ \renewcommand{\vec}{\boldsymbol}
20
+ \def\argmax{\mathop{\rm argmax}}
21
+ \def\argmin{\mathop{\rm argmin}}
22
+ \def\amax{\mathop{\rm max}}
23
+ \def\amin{\mathop{\rm min}}
24
+
25
+ \def\loss{\ell}
26
+ \def\a{\alpha}
27
+ \def\d{\delta}
28
+ \def\l{\lambda}
29
+ \def\D{\Delta}
30
+
31
+ \makeatletter
32
+ \DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
33
+ \def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
34
+
35
+ \def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot}
36
+ \def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot}
37
+ \def\cf{\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot}
38
+ \def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot}
39
+ \def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot}
40
+ \def\etal{\emph{et al}\onedot}
41
+ \makeatother
42
+
43
+ \usepackage{framed}
44
+ \usepackage{xcolor}
45
+ \definecolor{gainsboro}{RGB}{220,220,220}
46
+
47
+ \newlength{\imgwidth}
48
+ \newlength{\imgheight}
49
+
50
+ \newcommand{\w}{\mathbf{w}}
51
+ \newcommand{\x}{\mathbf{x}}
52
+ \newcommand{\W}{\mathbf{W}}
53
+ \newcommand{\X}{\mathcal{X}}
54
+ \newcommand{\Y}{\mathcal{Y}}
55
+
56
+ \def\a{\alpha}
57
+ \def\b{\beta}
58
+ \def\g{\gamma}
59
+ \def\d{\delta}
60
+ \def\lbd{\lambda}
61
+ \def\r{\rho}
62
+ \def\s{\sigma}
63
+ \def\S{\Sigma}
64
+ \def\t{\theta}
65
+ \def\o{\omega}
66
+ \def\p{\varphi}
67
+ \def\x2{\chi^2}
68
+ \def\e{\epsilon}
69
+ \def\D{\Delta}
70
+ \def\Re{\mathbb R}
71
+ \def\minimize{\operatornamewithlimits{Minimize}}
72
+ \def\de{d_E}
73
+ \def\A{{\cal A}}
74
+ \def\X{{\cal X}}
75
+ \def\Y{{\cal Y}}
76
+ \def\tX{\tilde{\X}}
77
+ \def\tY{\tilde{\Y}}
78
+ \def\1{\mathds{1}}
79
+ \newcommand{\Xset}{\mathcal{X}}
80
+ \newcommand{\Yset}{\mathcal{Y}}
81
+
82
+ \def\minimize{\operatornamewithlimits{Minimize}}
83
+
84
+ \newcommand{\note}[2]{{\bf \color{red}{#1}:~{#2}}}
85
+ \newcommand{\specialcell}[2][c]{\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
86
+
87
+ \begin{document}
88
+
89
+ \title{Label-Embedding for Image Classification}
90
+
91
+
92
+ \author{Zeynep~Akata,~\IEEEmembership{Member,~IEEE,}
93
+ Florent~Perronnin,~\IEEEmembership{Member,~IEEE,}
94
+ Zaid~Harchaoui,~\IEEEmembership{Member,~IEEE,}
95
+ and~Cordelia~Schmid,~\IEEEmembership{Fellow,~IEEE}\IEEEcompsocitemizethanks{\IEEEcompsocthanksitem Z. Akata is currently with the Computer Vision and Multimodal Computing group
96
+ of the Max-Planck Institute for Informatics, Saarbrucken, Germany.
97
+ The vast majority of this work was done while Z. Akata was jointly with the Computer Vision group of the Xerox Research Centre Europe and the LEAR group of INRIA Grenoble Rh\^one-Alpes.\protect\\
98
+ \IEEEcompsocthanksitem F. Perronnin is currently with Facebook AI Research. The vast majority of this work was done while F. Perronnin was with the Computer Vision group of the Xerox Research Centre Europe, Meylan, France.\protect\\
99
+ \IEEEcompsocthanksitem Z. Harchaoui and C. Schmid are with the LEAR group of INRIA Grenoble Rh\^one-Alpes, Montbonnot, France.}}
100
+
101
+
102
+
103
+ \IEEEcompsoctitleabstractindextext{\begin{abstract}
104
+ Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as \eg class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.
105
+ \end{abstract}
106
+ \begin{IEEEkeywords}
107
+ Image Classification, Label Embedding, Zero-Shot Learning, Attributes.
108
+ \end{IEEEkeywords}}
109
+
110
+
111
+ \maketitle
112
+
113
+ \section{Introduction}
114
+ We consider the image classification problem where the task is to annotate a given image with one (or multiple) class label(s) describing its visual content. Image classification is a prediction task: the goal is to learn from a labeled training set a function
115
+ $f: \X \rightarrow \Y$ which maps an input $x$ in the space of images $\X$ to an output $y$ in the space of class labels $\Y$. In this work, we are especially interested in the case where classes are related (\eg they all correspond to animals), but where we do not have {\em any (positive) labeled sample} for some of the classes. This problem is generally referred to as zero-shot learning \cite{FEHF09,LNH09,LEB08,PPH09}. Given the impossibility to collect labeled training samples in an exhaustive manner for all possible visual concepts, zero-shot learning is a problem of high practical value.
116
+
117
+ An elegant solution to zero-shot learning, called attribute-based learning, has recently gained popularity in computer vision. Attribute-based learning consists in introducing an intermediate space ${\cal A}$ referred to as {\em attribute} layer \cite{FEHF09,LNH09}.
118
+ Attributes correspond to high-level properties of the objects which are {\em shared} across multiple classes, which can be detected by machines and which can be understood by humans. Each class can be represented as a vector of class-attribute associations according to the presence or absence of each attribute for that class. Such class-attribute associations are often binary. As an example, if the classes correspond to animals, possible attributes include ``has paws'', ``has stripes'' or ``is black''.
119
+ For the class ``zebra'', the ``has paws'' entry of the attribute vector is zero whereas the ``has stripes'' would be one. The most popular attribute-based prediction algorithm requires learning one classifier per attribute. To classify a new image, its attributes are predicted using the learned classifiers and the attribute scores are combined into class-level scores. This two-step strategy is referred to as Direct Attribute Prediction (DAP) in \cite{LNH09}.
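+
+ The combination step can be sketched as follows; this simplified scoring rule is only a stand-in for illustration, and the exact DAP formulation of \cite{LNH09} differs in details such as the normalization by attribute priors.
+ \begin{verbatim}
+ import numpy as np
+
+ def dap_scores(attr_probs, class_attr):
+     """Combine per-attribute probabilities into class-level scores (sketch).
+     attr_probs: (A,) predicted probability that each attribute is present;
+     class_attr: (C, A) binary class-attribute association matrix."""
+     p = np.clip(attr_probs, 1e-8, 1 - 1e-8)
+     log_like = class_attr * np.log(p) + (1 - class_attr) * np.log(1 - p)
+     return log_like.sum(axis=1)   # higher score = more compatible class
+ \end{verbatim}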
120
+
121
+ \begin{figure}[t]
122
+ \centering
123
+ \includegraphics[width=\linewidth, trim=0 3cm 0 3.2cm]{methodology}
124
+ \caption{Much work in computer vision has been devoted to image embedding (left): how to extract suitable features from an image. We focus on {\em label embedding} (right):
125
+ how to embed class labels in a Euclidean space. We use side information such as attributes for the label embedding and measure the ``compatibility'' between the embedded inputs and outputs with a function $F$.
126
+ }
127
+ \label{fig:ale}
128
+ \vspace{-7mm}
129
+ \end{figure}
130
+
131
+ DAP suffers from several shortcomings. First, DAP proceeds in a two-step fashion, learning attribute-specific classifiers in a first step and combining attribute scores into class-level scores in a second step. Since attribute classifiers are learned independently of the end-task the overall strategy of DAP might be optimal at predicting attributes but not necessarily at predicting classes. Second, we would like an approach that can perform zero-shot prediction if no labeled samples are available for some classes, but that can also leverage new labeled samples for these classes as they become available. While DAP is straightforward to implement for zero-shot learning problems, it is not straightforward to extend to such an incremental learning scenario. Third, while attributes can be a useful source of prior information, they are expensive to obtain and the human labeling is not always reliable. Therefore, it is advantageous to seek complementary or alternative sources of side information such as class hierarchies or textual descriptions (see section \ref{sec:bey}). It is not straightforward to design an efficient way to incorporate these additional sources of information into DAP. Various solutions have been proposed to address each of these problems separately (see section \ref{sec:rel}). However, we do not know of any existing solution that addresses all of them in a principled manner.
132
+
133
+ Our primary contribution is therefore to propose such a solution by making use of the {\em label embedding} framework. We underline that, while there is an abundant literature in the computer vision community on image embedding (how to describe an image), comparatively little work has been devoted to label embedding in the $\Y$ space (how to describe a class). We embed each class $y \in \Y$ in the space of attribute vectors and thus refer to our approach as \emph{Attribute Label Embedding} (ALE). We use a structured output learning formalism and introduce a function which measures the compatibility between an image $x$ and a label $y$ (see Figure~\ref{fig:ale}). The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct class(es) rank higher than the incorrect ones. Given a test image, recognition consists in searching for the class with the highest compatibility.
134
+
135
+ Another important contribution of this work is to show that our approach extends far beyond the setting of attribute-based recognition: it can be readily used for any side information that can be encoded as vectors in order to be leveraged by the label embedding framework.
136
+
137
+ Label embedding addresses in a principled fashion the three limitations of DAP that were mentioned previously. First, we optimize directly a class ranking objective, whereas DAP proceeds in two steps by solving intermediate problems. We show experimentally that ALE outperforms DAP in the zero-shot setting. Second, if available, labeled samples can be used to learn the embedding. Third, other sources of side information can be combined with attributes or used as an alternative to attributes.
138
+
139
+ The paper is organized as follows. In Sec.~\ref{sec:rel} and Sec.~\ref{sec:le}, we review related work and introduce ALE. In Sec.~\ref{sec:bey}, we study extensions of label embedding beyond attributes. In Sec.~\ref{sec:exp}, we present experimental results on Animals with Attributes (AWA) \cite{LNH09} and Caltech-UCSD-Birds (CUB) \cite{WBPB11}. In particular, we compare ALE with competing alternatives, using the same side information, \ie attribute-class association matrices.
140
+
141
+ A preliminary version of this article appeared in \cite{APHS13}. This version adds (1)~an expanded related work section; (2)~a detailed description of the learning procedure for ALE; (3)~additional comparisons with random embeddings~\cite{DB95} and embeddings derived automatically from textual corpora~\cite{MSCCD13,FCS13}; (4)~additional zero-shot learning experiments, which show the advantage of using continuous embeddings; and
142
+ (5)~additional few-shots learning experiments.
143
+
144
+ \section{Related work}\label{sec:rel}
145
+
146
+ We now review related work on attributes, zero-shot learning and label embedding, three research areas which strongly overlap.
147
+
148
+ \subsection{Attributes}
149
+ Attributes have been used for image description \cite{FZ07,FEHF09,CGG12}, caption generation \cite{KPD11,OKB11}, face recognition \cite{KBBN09,SKBB12,CGG13},
150
+ image retrieval \cite{KBN08,SFD11,DRS11}, action recognition \cite{LKS11,YJKLGL11},
151
+ novelty detection \cite{WB13} and object classification \cite{LNH09,FEHF09,WF09,WM10,MSN11,SQL12,MVC12}. Since our task is object classification in images, we focus on the corresponding references.
152
+
153
+ The most popular approach to attribute-based recognition is the Direct Attribute Prediction (DAP) model of Lampert \etal, which consists in predicting the presence of attributes in an image and combining the attribute prediction probabilities into class prediction probabilities \cite{LNH09}. A significant limitation of DAP is the fact that it assumes that attributes are independent of each other, an assumption which is generally incorrect (see our experiments on attribute correlation in section \ref{sec:zero}).
154
+ Consequently, DAP has been improved to take into account the correlation between attributes or between attributes and classes \cite{WF09,WM10,YA10,MSN11}. However, all these models have limitations of their own. Wang and Forsyth \cite{WF09} assume that {\em images} are labeled with both classes and attributes. In our work we only assume that {\em classes} are labeled with attributes, which requires significantly less hand-labeling of the data. Mahajan \etal \cite{MSN11} use transductive learning and, therefore, assume that the test data is available as a batch, a strong assumption we do not make. Yu and Aloimonos's topic model \cite{YA10} is only applicable to bag-of-visual-word image representations and, therefore, cannot leverage recent state-of-the-art image features such as the Fisher vector \cite{SPM13}. We will use such features in our experiments.
155
+ Finally, the latent SVM framework of Wang and Mori \cite{WM10} is not applicable to zero-shot learning, the focus of this work.
156
+ Several works have also considered the problem of discovering a vocabulary of attributes \cite{BBS10,DPCG12,MP13}. \cite{BBS10} leverages text and images sampled from the Internet and uses the mutual information principle to measure the information of a group of attributes. \cite{DPCG12} discovers local attributes and integrates humans in the loop for recommending the selection of attributes that are semantically meaningful. \cite{MP13} discovers attributes from images, textual comments and ratings for the purpose of aesthetic image description. In our work, we assume that the class-attribute association matrix is provided. In this sense, our work is complementary to those previously mentioned.
157
+
158
+ \subsection{Zero-shot learning}
159
+ Zero-shot learning requires the ability to transfer knowledge from classes for which we have training data to classes for which we do not. There are two crucial choices when performing zero-shot learning: the choice of the prior information and the choice of the recognition model.
160
+
161
+ Possible sources of prior information include attributes \cite{LNH09,FEHF09,PPH09,RSS10,RSS11}, semantic class taxonomies \cite{RSS11,MVP12}, class-to-class similarities \cite{RSS10,YCFSC13}, text features \cite{PPH09,RSS10,RSS11,SGSBMN13,FCS13} or class co-occurrence statistics \cite{MGS14}.
162
+ Rohrbach \etal \cite{RSS11} compare different sources of information for learning with zero or few samples. However, since different models are used for the different sources of prior information, it is unclear whether the observed differences are due to the prior information itself or the model. In our work, we compare attributes, class hierarchies and textual information obtained from the internet using the exact same learning framework and we can, therefore, fairly compare different sources of prior information. Other sources of prior information have been proposed for special purpose problems. For instance, Larochelle \etal \cite{LEB08} encode characters with $7 \times 5$ pixel representations.
163
+ However, it is difficult to extend such an embedding to the case of generic visual categories -- our focus in this work. For a recent survey of different output embeddings optimized for zero-shot learning on fine-grained datasets, the reader may refer to~\cite{ARWLS15}.
164
+
165
+ As for the recognition model, there are several alternatives. As mentioned earlier, DAP uses a probabilistic model which assumes attribute independence \cite{LNH09}. Closest to the proposed ALE are those works where zero-shot recognition is performed by assigning an image to its closest class embedding (see next section). The distance between an image and a class embedding is generally measured with the Euclidean metric, and a transformation is learned to map the input image features to the class embeddings \cite{PPH09,SGSBMN13}. The main difference between these works and ours is that we learn the input-to-output mapping to directly optimize an image classification criterion: we learn to rank the correct label higher than incorrect ones. We will see in section \ref{sec:zero} that this leads to improved results compared to those works which optimize a regression criterion such as \cite{PPH09,SGSBMN13}.
166
+
167
+ Few works have considered the problem of transitioning from zero-shot learning to learning with few shots \cite{YA10,SQL12,YCFSC13}. As mentioned earlier, \cite{YA10} is only applicable to bag-of-words type of models. \cite{SQL12} proposes to augment the attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large margin principle. While this extends DAP to learning with labeled data, this approach does not improve DAP for zero-shot recognition. In contrast, we show that the proposed ALE can transition from zero-shot to few-shots learning {\em and} improves on DAP in the zero-shot regime. \cite{YCFSC13} learns separately the class embeddings and the input-to-output mapping which is suboptimal. In this paper, we learn {\em jointly} the class embeddings (using attributes as prior) and the input-to-output mapping to optimize classification accuracy.
168
+
169
+ \subsection{Label embedding}
170
+ In computer vision, a vast amount of work has been devoted to input embedding, \ie how to represent an image. This includes work on patch encoding (see \cite{CLV11} for a recent comparison), on kernel-based methods \cite{Shawe:Cristianini:2004} with a recent focus on explicit embeddings~\cite{MB09,VZ10}, on dimensionality reduction~\cite{Shawe:Cristianini:2004} and on compression~\cite{JDS11,SP11,VZ12}. Comparatively, much less work has been devoted to label embedding.
171
+
172
+ Provided that the embedding function $\p$ is chosen correctly -- \ie ``similar'' classes are close according to the Euclidean metric in the embedded space -- label embedding can be an effective way to share parameters between classes. Consequently, the main applications have been multiclass classification with many classes~\cite{AFSU07,WC08,WBU10,BWG10} and zero-shot learning \cite{LEB08,PPH09}. We now provide a taxonomy of embeddings. While this taxonomy is valid for both input embeddings $\t$ and output embeddings $\p$, we focus here on output embeddings. They can be (i) fixed and data-independent, (ii) learned from data, or (iii) computed from side information.
173
+
174
+ \vspace{2mm}\noindent
175
+ {\bf Data-Independent Embeddings.}
176
+ Kernel dependency estimation~\cite{WestonCESV02} is an example of a strategy where $\p$ is data-independent and defined implicitly through a kernel in the $\Y$ space. The compressed sensing approach of Hsu \etal~\cite{HKL09} is another example of data-independent embeddings where $\p$ corresponds to random projections. The Error Correcting Output Codes (ECOC) framework encompasses a large family of embeddings that are built using information-theoretic arguments~\cite{H1950}. ECOC approaches make it possible, in particular, to tackle multi-class learning problems, as described by Dietterich and Bakiri in \cite{DB95}.
177
+ The reader can refer to \cite{EPR10} for a summary of ECOC methods and latest developments in the ternary output coding methods. Other data-independent embeddings are based on pairwise coupling and variants thereof such as generalized Bradley-Terry models~\cite{Hastie:Tibshirani:Friedman:2008}.
178
+
179
+ \vspace{2mm}\noindent
180
+ {\bf Learned Embeddings.}
181
+ A strategy consists in learning jointly $\t$ and $\p$ to embed the inputs and outputs in a common intermediate space ${\cal Z}$. The most popular example is Canonical Correlation Analysis (CCA)~\cite{Hastie:Tibshirani:Friedman:2008}, which maximizes the correlation between inputs and outputs. Other strategies have been investigated which maximize directly classification accuracy, including the nuclear norm regularized learning of Amit \etal~\cite{AFSU07} or the WSABIE algorithm of Weston \etal~\cite{WBU10}.
182
+
183
+ \vspace{2mm}\noindent
184
+ {\bf Embeddings Derived From Side Information.}
185
+ There are situations where side information is available. This setting is particularly relevant when little training data is available, as side information and the derived embeddings can compensate for the lack of data. Side information can be obtained at an image level \cite{FEHF09} or at a class level \cite{LNH09}. We focus on the latter setting which is more practical as collecting side information at an image level is more costly. Side information may include ``hand-drawn'' descriptions \cite{LEB08}, text descriptions \cite{FEHF09,LNH09,PPH09,FCS13} or class taxonomies \cite{WC08,BWG10}. The closest work to ours is certainly that of Frome \etal~\cite{FCS13}\footnote{Note that the work of Frome \etal~\cite{FCS13} is posterior to our conference submission~\cite{APHS13}.}, which involves embedding classes using textual corpora and then learning a mapping between the input and output embeddings using a ranking objective function. We also use a ranking objective function and compare different sources of side information to perform embedding: attributes, class taxonomies and textual corpora.
186
+
187
+ \vspace{2mm}
188
+
189
+ While our focus is on embeddings derived from side information for zero-shot recognition,
190
+ we also consider data-independent embeddings and learned embeddings (using side information as a prior) for few-shots recognition.
191
+
192
+
193
+ \section{Label embedding with attributes}
194
+ \label{sec:le}
195
+
196
+ Given a training set ${\cal S}=\{(x_n,y_n), n =1 \ldots N\}$ of input/output pairs with $x_n \in \X$ and $y_n \in \Y$, our goal is to learn a function $f: \X \rightarrow \Y$ by minimizing an empirical risk of the form
197
+ \begin{equation}
198
+ \min_{f \in \mathcal{F}}\quad \frac{1}{N} \sum_{n=1}^N \D(y_n,f(x_n))
199
+ \end{equation}
200
+ where $\D: \Y \times \Y \rightarrow \Re$ measures the loss incurred from predicting $f(x)$ when the true label is $y$, and where the function $f$ belongs to the function class $\mathcal{F}$. We shall use the 0/1 loss as a target loss: $\D(y,z) = 0$ if $y=z$, 1 otherwise, to measure the test error, while we consider several surrogate losses commonly used for structured prediction at learning time (see Sec.~\ref{sec:obj} for details on the surrogate losses used in this paper).
201
+
202
+ An elegant framework, initially proposed in~\cite{WestonCESV02}, makes it possible to concisely describe learning problems where both input and output spaces are jointly or independently mapped into lower-dimensional spaces. The framework relies on so-called \emph{embedding functions} $\theta: \X \rightarrow \tX$ and $\p: \Y \rightarrow \tY$ for the inputs and outputs respectively. Thanks to these embedding functions, the learning problem is cast into a regular learning problem with transformed input/output pairs.
203
+ In what follows, we first describe our function class $\mathcal{F}$ (section~\ref{sec:frm}). We then explain how to leverage side information in the form of attributes to compute label embeddings (section \ref{sec:ale}). We also discuss how to learn the model parameters (section \ref{sec:obj}). While, for the sake of simplicity, we focus on attributes in this section, the approach readily generalizes to any side information that can be encoded in matrix form (see section \ref{sec:bey}).
204
+
205
+ \subsection{Framework}
206
+ \label{sec:frm}
207
+ Figure~\ref{fig:ale} illustrates the proposed model. Inspired by the structured prediction formulation \cite{TJH05}, we introduce a compatibility function $F: \X \times \Y \rightarrow \Re$ and define $f$ as follows:
208
+ \begin{equation}
209
+ f(x;w) = \arg \max_{y \in \Y} F(x,y; w)
210
+ \label{eq:annot}
211
+ \end{equation}
212
+ where $w$ denotes the model parameter vector of $F$ and $F(x,y;w)$ measures how compatible the pair $(x,y)$ is given $w$. It is generally assumed that $F$ is linear in some combined feature embedding of inputs/outputs $\psi(x,y)$:
213
+ \begin{equation}
214
+ F(x,y; w) = w'\psi(x,y)
215
+ \end{equation}
216
+ and that the joint embedding $\psi$ can be written as the tensor product between the image embedding $\theta: \X \rightarrow \tX = \Re^D$ and the label embedding $\p: \Y \rightarrow \tY = \Re^E$:
217
+ \begin{equation}
218
+ \psi(x,y) = \theta(x) \otimes \p(y)
219
+ \end{equation}
220
+ so that $\psi(x,y) \in \Re^{D E}$. In this case $w$ is a $DE$-dimensional vector which can be reshaped into a $D \times E$ matrix $W$. Consequently, we can rewrite $F(x,y;w)$ as a bilinear form:
221
+ \begin{equation}
222
+ F(x,y;W) = \theta(x)' W \p(y) .
223
+ \label{eqn:form}
224
+ \end{equation}
225
+ Other compatibility functions could have been considered. For example, the function:
226
+ \begin{equation}
227
+ F(x,y;W) = - \Vert \theta(x)'W - \p(y) \Vert^2
228
+ \label{eq:reg}
229
+ \end{equation}
230
+ is typically used in regression problems.
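+
+ As a concrete illustration, the following is a minimal NumPy sketch (with hypothetical array names) of prediction with the bilinear compatibility function (\ref{eqn:form}): given an image embedding $\theta(x)$, the parameter matrix $W$ and the stacked class embeddings, the predicted class is the one with the highest compatibility.
+ \begin{verbatim}
+ import numpy as np
+
+ def predict(theta_x, W, Phi):
+     """Bilinear compatibility prediction: argmax_y theta(x)' W phi(y).
+
+     theta_x : (D,)   image embedding theta(x)
+     W       : (D, E) parameter matrix
+     Phi     : (E, C) matrix stacking the class embeddings phi(y)
+     """
+     scores = theta_x @ W @ Phi      # compatibility F(x, y; W) for every class y
+     return int(np.argmax(scores))   # index of the highest-compatibility class
+ \end{verbatim}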
231
+
232
+ Also, if $D$ and $E$ are large, it might be valuable to consider a low-rank decomposition $W=U'V$ to reduce the effective number of parameters. In such a case, we have:
233
+ \begin{equation}
234
+ F(x,y;U,V) = \left( U \theta(x)\right)' \left( V \p(y) \right).
235
+ \label{eqn:lowrank}
236
+ \end{equation}
237
+ CCA~\cite{Hastie:Tibshirani:Friedman:2008}, or more recently WSABIE~\cite{WBU10} rely, for example, on such a decomposition.
238
+
239
+ \subsection{Embedding classes with attributes}
240
+ \label{sec:ale}
241
+ We now consider the problem of defining the label embedding function $\p^{\cal A}$ from attribute side information. In this case, we refer to our approach as \textbf{Attribute Label Embedding (ALE)}.
242
+
243
+ We assume that we have $C$ classes, \ie $\Y = \{1, \ldots, C\}$ and that we have a set of $E$ attributes ${\cal A} = \{a_i, i=1 \ldots E\}$ to describe the classes. We also assume that we are provided with an association measure $\rho_{y,i}$ between each attribute $a_i$
244
+ and each class $y$. These associations may be binary or real-valued if we have information
245
+ about the association strength (\eg if the association value is obtained by averaging votes). We embed class $y$ in the $E$-dim attribute space as follows:
246
+ \begin{equation}
247
+ \p^{\cal A}(y) = [\rho_{y,1}, \ldots, \rho_{y,E}]
248
+ \end{equation}
249
+ and denote by $\Phi^{\cal A}$ the $E \times C$ matrix of attribute embeddings which stacks the individual $\p^{\cal A}(y)$'s.
250
+
251
+ We note that in equation (\ref{eqn:form}) the image and label embeddings play symmetric roles. In the same way it makes sense to normalize samples when they are used as input to large-margin classifiers, it can make sense to normalize the output vectors $\p^{\cal A}(y)$. In section \ref{sec:zero} we compare (i) continuous embeddings, (ii) binary embeddings using $\{0,1\}$ for the encoding and (iii) binary embeddings using $\{-1,+1\}$ for the encoding. We also explore two normalization strategies: (i) mean-centering (\ie compute the mean over all learning classes and subtract it) and (ii) $\ell_2$-normalization. We underline that such encoding and normalization choices are not arbitrary but relate to prior assumptions we might have on the problem. For instance, underlying the $\{0,1\}$ embedding is the assumption that the presence of the same attribute in two classes should contribute to their similarity, but not its absence. Here we assume a dot-product similarity between attribute embeddings which is consistent with our linear compatibility function (\ref{eqn:form}). Underlying the $\{-1,1\}$ embedding is the assumption that the presence or the absence of the same attribute in two classes should contribute equally to their similarity. As for mean-centered attributes, they take into account the fact that some attributes are more frequent than others. For instance, if an attribute appears in almost all classes, then in the mean-centered embedding, its absence will contribute more to the similarity than its presence. This is similar to an IDF effect in TF-IDF encoding. As for the $\ell_2$-normalization, it enforces that each class is closest to itself according to the dot-product similarity.
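+
+ To make these encoding and normalization choices concrete, here is a minimal NumPy sketch (assuming a hypothetical $C \times E$ class-attribute association matrix \texttt{assoc}, and thresholding at the overall mean as one possible binarization) that builds $\Phi^{\cal A}$:
+ \begin{verbatim}
+ import numpy as np
+
+ def attribute_label_embedding(assoc, encoding="cont",
+                               mean_center=False, l2_normalize=True):
+     """Build the E x C matrix Phi^A from class-attribute associations.
+
+     assoc : (C, E) array, assoc[y, i] = rho_{y,i} (e.g. averaged votes).
+     """
+     Phi = assoc.astype(float)
+     if encoding == "binary01":             # {0,1}: threshold at the overall mean
+         Phi = (Phi > Phi.mean()).astype(float)
+     elif encoding == "pm1":                # {-1,+1} encoding
+         Phi = np.where(Phi > Phi.mean(), 1.0, -1.0)
+     if mean_center:                        # subtract the mean over the classes
+         Phi = Phi - Phi.mean(axis=0, keepdims=True)
+     if l2_normalize:                       # each class embedding has unit l2 norm
+         Phi /= np.maximum(np.linalg.norm(Phi, axis=1, keepdims=True), 1e-12)
+     return Phi.T                           # E x C, columns are phi^A(y)
+ \end{verbatim}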
252
+
253
+ In the case where attributes are redundant, it might be advantageous to de-correlate them. In such a case, we make use of the compatibility function (\ref{eqn:lowrank}). The matrix $V$ may be learned from labeled data jointly with $U$. As a simpler alternative, it is possible to first learn the decorrelation, \eg by performing a Singular Value Decomposition (SVD) on the $\Phi^{\cal A}$ matrix, and then to learn $U$. We will study the effect of attribute de-correlation in our experiments.
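+
+ As an illustration of the simpler alternative, the following sketch decorrelates the attribute embeddings by keeping the $k$ leading left singular vectors of $\Phi^{\cal A}$ (the truncation level \texttt{k} is a free parameter):
+ \begin{verbatim}
+ import numpy as np
+
+ def decorrelate_attributes(Phi_A, k):
+     """Project the E x C matrix Phi^A onto its k leading left singular vectors."""
+     U, S, Vt = np.linalg.svd(Phi_A, full_matrices=False)
+     V = U[:, :k]                  # E x k decorrelating projection (output side)
+     return V.T @ Phi_A, V         # k x C reduced class embeddings, and the projection
+ \end{verbatim}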
254
+
255
+ \subsection{Learning algorithm}
256
+ \label{sec:obj}
257
+
258
+ We now turn to the estimation of the model parameters $W$ from a labeled training set ${\cal S}$. The simplest learning strategy is to maximize directly the compatibility between the input and output embeddings:
259
+ \begin{equation}
260
+ \frac{1}{N} \sum_{n=1}^N F(x_n,y_n;W)
261
+ \end{equation}
262
+ with potentially some constraints and regularizations on $W$. This is exactly the strategy adopted in regression \cite{PPH09,SGSBMN13}. However, such an objective function does not optimize directly our end-goal which is image classification. Therefore, we draw inspiration from the WSABIE algorithm \cite{WBU10} that learns jointly image and label embeddings from data to optimize classification accuracy.
263
+ The {\em crucial difference between WSABIE and ALE is the fact that the latter uses attributes as side information}. Note that {\em the proposed ALE is not tied to WSABIE} and that we report results in section~\ref{sec:zero} with other objective functions including regression and structured SVM (SSVM). We chose to focus on the WSABIE objective function with ALE because it yields good results and is scalable.
264
+
265
+ In what follows, we briefly review the WSABIE objective function~\cite{WBU10}. Then, we present ALE, which enables (i) zero-shot learning with side information and (ii) learning with few (or more) examples with side information. We, then, detail the proposed learning procedures for ALE. Throughout, $\Phi$ denotes the matrix which stacks the embeddings $\p(y)$.
266
+
267
+ \vspace{2mm}\noindent
268
+ {\bf WSABIE.}
269
+ Let $\1(u) = 1$ if $u$ is true and 0 otherwise. Let:
270
+ \begin{equation}
271
+ \ell(x_n,y_n,y) = \D(y_n,y) + \theta(x_n)'W[\p(y) -\p(y_n)]
272
+ \end{equation}
273
+ Let $r(x_n,y_n)$ be the rank of label $y_n$ for image $x_n$. Finally, let $\a_1, \a_2, \ldots, \a_C$ be a sequence of $C$ non-negative coefficients and let $\b_k = \sum_{j=1}^k \a_j$. Usunier \etal \cite{UBG09} propose to use the following ranking loss for ${\cal S}$:
274
+ \begin{equation} \label{eqn:owa}
275
+ \frac{1}{N} \sum_{n=1}^N \b_{r(x_n,y_n)} \; ,
276
+ \end{equation}
277
+ where $\b_{r(x_n,y_n)} := \sum_{j=1}^{r(x_n,y_n)} \a_j$. Since the $\b_k$'s are increasing with $k$, minimizing $\b_{r(x_n,y_n)}$ encourages minimizing the $r(x_n,y_n)$'s, \ie it encourages correct labels to rank higher than incorrect ones. $\a_k$ quantifies the penalty incurred by going from rank $k$ to $k+1$. Hence, a decreasing sequence $\a_1 \geq \a_2 \geq \ldots \geq \a_C \geq 0$ implies that a one-position ranking mistake incurs a higher additional loss when the correct label is near the top of the list than when it is lower in the list -- a desirable property. Following Usunier \etal, we choose $\a_k = 1/k$.
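+
+ As a small numeric illustration, with $\a_k=1/k$ the $\b_k$'s are the harmonic numbers: pushing the correct label from rank 1 to rank 2 adds $\a_2=0.5$ to the loss, whereas pushing it from rank 9 to rank 10 only adds $\a_{10}=0.1$. A minimal NumPy check:
+ \begin{verbatim}
+ import numpy as np
+
+ C = 50
+ alpha = 1.0 / np.arange(1, C + 1)    # alpha_k = 1/k
+ beta = np.cumsum(alpha)              # beta_k = sum_{j<=k} alpha_j
+ for r in (1, 2, 5, 10):
+     print(r, round(beta[r - 1], 3))  # loss when the correct label is at rank r
+ # -> 1: 1.0, 2: 1.5, 5: 2.283, 10: 2.929
+ \end{verbatim}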
278
+
279
+ Instead of optimizing an upper-bound on (\ref{eqn:owa}), Weston \etal propose to optimize the following approximation of objective (\ref{eqn:owa}):
280
+ \begin{equation}
281
+ R({\cal S};W,\Phi) = \frac{1}{N} \sum_{n=1}^N \frac{\b_{r_{\D}(x_n,y_n)}}{r_{\D}(x_n,y_n)} \sum_{y \in \Y} \max\{0,\ell(x_n,y_n,y)\}
282
+ \label{eqn:wsabie}
283
+ \end{equation}
284
+ where
285
+ \begin{equation}
286
+ r_{\D}(x_n,y_n) = \sum_{y \in \Y} \1(\ell(x_n,y_n,y)>0)
287
+ \end{equation}
288
+ is an upper-bound on the rank of label $y_n$ for image $x_n$.
289
+
290
+ The main advantage of the formulation (\ref{eqn:wsabie}) is that it can be optimized efficiently through Stochastic Gradient Descent (SGD), as described in Algorithm \ref{alg:sgd}. The label embedding space dimensionality is a parameter to set, for instance using cross-validation. Note that the previous objective function does not incorporate any regularization term. Regularization is achieved implicitly by early stopping, \ie the learning is terminated once the accuracy stops increasing on the validation set.
291
+
292
+ \vspace{2mm}
293
+ \noindent {\bf ALE: Zero-Shot Learning.}
294
+ We now describe the ALE objective for zero-shot learning. In such a case, we cannot learn $\Phi$ from labeled data, but rely on side information. This is in contrast to WSABIE. Therefore, the matrix $\Phi$ is fixed and set to $\Phi^{\cal A}$ (see section \ref{sec:ale} for details on $\Phi^{\cal A}$). We only optimize the objective (\ref{eqn:wsabie}) with respect to $W$.
295
+ We note that, when $\Phi$ is fixed and only $W$ is learned, the objective (\ref{eqn:wsabie}) is closely related to the (unregularized) structured
296
+ SVM (SSVM) objective \cite{TJH05}:
297
+ \begin{equation}
298
+ \frac{1}{N} \sum_{n=1}^N \max_{y \in \Y} \ell(x_n,y_n,y)
299
+ \label{eqn:ssvm}
300
+ \end{equation}
301
+ The main difference is the loss function, which is the multi-class loss function for SSVM. The multi-class loss function focuses on the score with the highest rank, while ALE considers all scores in a weighted fashion. Similar to WSABIE, a major advantage of ALE is its scalability to large datasets \cite{WBU10,PAH12}.
302
+
303
+ \vspace{2mm}
304
+ \noindent {\bf ALE: Few-Shots Learning.}
305
+ We now describe the ALE objective for the case where we have labeled data and side information. In such a case, we want to learn the class embeddings using $\Phi^{\cal A}$ as prior information. We, therefore, add to the objective (\ref{eqn:wsabie}) a regularizer:
306
+ \begin{equation}
307
+ R({\cal S};W,\Phi) +\frac{\mu}{2} ||\Phi - \Phi^{\cal A}||^2
308
+ \label{eqn:ale_fewshots}
309
+ \end{equation}
310
+ and optimize jointly with respect to $W$ and $\Phi$. Note that the previous equation is somewhat reminiscent of the ranking model adaptation of \cite{GYX12}.
311
+
312
+ \vspace{2mm}\noindent
313
+ {\bf Training.}
314
+ For the optimization of the zero-shot as well as the few-shots learning, we follow \cite{WBU10} and use Stochastic Gradient Descent (SGD). Training with SGD consists at each step $t$ in (i) choosing a sample $(x,y)$ at random, (ii) repeatedly sampling a negative class denoted $\bar{y}$ with $\bar{y} \neq y$ until a violating class is found, \ie until $\ell(x,y,\bar{y}) > 0$, and (iii) updating the projection matrix (and the class embeddings in case of few-shots learning) using a sample-wise estimate of the regularized risk. Following \cite{WBU10,PAH12}, we use a constant step size $\eta_t=\eta$.
315
+ The detailed algorithm is provided in Algorithm \ref{alg:sgd}.
316
+ \begin{algorithm}[t]
317
+ \small
318
+ \caption{ALE stochastic training}
319
+ Initialize $W^{(0)}$ randomly.
320
+ \begin{algorithmic}
321
+ \For {$t=1$ to $T$}
322
+ \State Draw ($x$,$y$) from ${\cal S}$.
323
+ \For {$k=1, 2, \ldots, C-1$}
324
+ \State Draw $\bar{y} \neq y$ from $\Y$
325
+ \If {$\ell(x,y,\bar{y}) > 0$}
326
+ \State {\bf // Update $W$}
327
+ \begin{equation} W^{(t)} = W^{(t-1)} + \eta_t \beta_{\lfloor \frac{C-1}{k} \rfloor} \theta(x) [\p(y) - \p(\bar{y})]' \end{equation}
328
+ \State {\bf // Update $\Phi$ (not applicable to zero-shot)}
329
+ \begin{eqnarray} \p^{(t)}(y) & = & (1 - \eta_t \mu) \p^{(t-1)}(y) + \eta_t \mu \p^{{\cal A}}(y) \nonumber \\
330
+ & + & \eta_t \beta_{\lfloor \frac{C-1}{k} \rfloor} W' \theta(x) \label{eqn:upd1} \end{eqnarray}
331
+ \begin{eqnarray} \p^{(t)}(\bar{y}) & = & (1 - \eta_t \mu) \p^{(t-1)}(\bar{y}) + \eta_t \mu \p^{{\cal A}}(\bar{y}) \nonumber \\
332
+ & - & \eta_t \beta_{\lfloor \frac{C-1}{k} \rfloor} W' \theta(x) \label{eqn:upd2} \end{eqnarray}
333
+ \EndIf
334
+ \EndFor
335
+ \EndFor
336
+ \end{algorithmic}
337
+ \label{alg:sgd}
338
+ \end{algorithm}
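+
+ For concreteness, the following is a minimal NumPy transcription of Algorithm \ref{alg:sgd} under simplifying assumptions: a constant step size, dense arrays, one update per drawn example (following the description of the training procedure above), and a hypothetical helper \texttt{sample\_pair} that returns a random pair $(\theta(x), y)$. Setting \texttt{mu=0} and ignoring the returned \texttt{Phi} recovers the zero-shot variant in which only $W$ is learned.
+ \begin{verbatim}
+ import numpy as np
+
+ def ale_sgd(sample_pair, Phi_A, D, T=100000, eta=0.1, mu=0.0, seed=0):
+     """Stochastic training of ALE (sketch). Phi_A is the E x C prior embedding."""
+     rng = np.random.default_rng(seed)
+     E, C = Phi_A.shape
+     W = rng.normal(scale=1e-3, size=(D, E))  # random initialization of W
+     Phi = Phi_A.copy()                        # learned class embeddings (prior = Phi_A)
+     beta = np.cumsum(1.0 / np.arange(1, C))   # beta_k for k = 1..C-1
+     for _ in range(T):
+         x, y = sample_pair()                  # theta(x) as a (D,) vector, y in {0..C-1}
+         s = x @ W                             # (E,) projection of the image
+         for k in range(1, C):                 # sample negatives until a violator is found
+             ybar = int(rng.integers(C))
+             while ybar == y:
+                 ybar = int(rng.integers(C))
+             if 1.0 + s @ (Phi[:, ybar] - Phi[:, y]) > 0:   # ell(x, y, ybar) > 0
+                 w_k = beta[(C - 1) // k - 1]  # beta_{floor((C-1)/k)}
+                 W += eta * w_k * np.outer(x, Phi[:, y] - Phi[:, ybar])
+                 if mu > 0:                    # few-shots: also update the embeddings
+                     g = eta * w_k * (W.T @ x)
+                     Phi[:, y] = (1 - eta * mu) * Phi[:, y] + eta * mu * Phi_A[:, y] + g
+                     Phi[:, ybar] = (1 - eta * mu) * Phi[:, ybar] \
+                                    + eta * mu * Phi_A[:, ybar] - g
+                 break
+     return W, Phi
+ \end{verbatim}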
339
+
340
+ \section{Label embedding beyond attributes}
341
+ \label{sec:bey}
342
+
343
+ A wealth of label embedding methods have been proposed over the years, in several communities and most often for different purposes. Previous works considered either fixed (data-independent) or learned-from-data embeddings. The data used for learning can either be \emph{restricted to the task at hand} or be complemented by \emph{side information} from other modalities. The purpose of this paper is to propose a general framework that encompasses all these approaches and to compare their empirical performance on image classification tasks. Label embedding methods can be organized according to two criteria: i) whether they are task-focused or use other sources of side information; ii) whether the embedding is fixed or data-dependent.
344
+
345
+ \subsection{Side information in label embedding}
346
+ A first criterion to discriminate among the different approaches for label embedding is whether the method is using only the training data for the task at hand, that is the examples (images) along with their class labels, or if it is using other sources of information. In the latter option, side information impacts the outputs, and can rely on several types of modalities. In our setting, these modalities can be i) attributes, ii) class taxonomies or iii) textual corpora. Attributes, \ie i), were the focus of the previous section (see especially section~\ref{sec:ale}). In what follows, we focus on ii) and iii).
347
+
348
+ \vspace{2mm}\noindent
349
+ {\bf Class hierarchical structures} explicitly use expert knowledge to group the image classes into a hierarchy, such as knowledge from ornithology for bird datasets. A hierarchical structure on the classes requires an ordering operation $\prec$ in $\Y$: $z \prec y$ means that $z$ is an ancestor of $y$ in the tree hierarchy. Given this tree structure, we can define $\xi_{y,z} = 1$ if $z \prec y$ or $z = y$, and $\xi_{y,z} = 0$ otherwise. The hierarchy embedding $\p^{\cal H}(y)$ can be defined as the $C$-dimensional vector:
350
+ \begin{equation}
351
+ \p^{\cal H}(y) = [\xi_{y,1}, \ldots, \xi_{y,C}] .
352
+ \end{equation}
353
+ Here, $\xi_{y,i}$ is the association measure of the $i^{th}$ node in the hierarchy with class $y$. See Figure~\ref{fig:hie} for an illustration. We refer to this embedding as \textbf{Hierarchy Label Embedding (HLE)}. Note that HLE was first proposed in the context of structured learning \cite{TJH05}. Note also that, if classes are not organized in a tree structure but form a graph, other types of embeddings can be used, for instance by performing a kernel PCA on the commute time kernel \cite{SFY04}.
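+
+ A minimal sketch of this construction, assuming a hypothetical dictionary \texttt{ancestors} that maps each class to the set of node indices containing the class itself and its ancestors (the $\xi_{y,z}=1$ entries), with one dimension per node of the hierarchy:
+ \begin{verbatim}
+ import numpy as np
+
+ def hierarchy_label_embedding(ancestors, n_nodes, l2_normalize=True):
+     """Build Phi^H: one column per class, one dimension per hierarchy node."""
+     classes = sorted(ancestors)
+     Phi = np.zeros((n_nodes, len(classes)))
+     for c, y in enumerate(classes):
+         Phi[sorted(ancestors[y]), c] = 1.0  # xi = 1 if node is y or an ancestor of y
+     if l2_normalize:                         # as done for HLE in the experiments
+         Phi /= np.maximum(np.linalg.norm(Phi, axis=0, keepdims=True), 1e-12)
+     return Phi
+
+ # With 7 nodes and ancestors[5] = {0, 2, 5} (0-indexed), the column of class 5 is
+ # [1, 0, 1, 0, 0, 1, 0] before normalization, as in the example of the figure.
+ \end{verbatim}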
354
+
355
+ \begin{figure}[t]
356
+ \centering
357
+ \includegraphics[width=0.42\linewidth]{hierarchy}
358
+ \caption{Illustration of Hierarchical Label Embedding (HLE).
359
+ In this example, given 7 classes (including a ``root'' class),
360
+ class $6$ is encoded in a binary 7-dimensional space as
361
+ $\p^{\cal H}(6) = [1, 0, 1, 0, 0, 1, 0]$.}
362
+ \label{fig:hie}
363
+ \vspace{-4mm}
364
+ \end{figure}
365
+
366
+ \vspace{2mm}\noindent
367
+ {\bf The co-occurrence of class names in textual corpora} can be automatically extracted using field guides or public resources such as Wikipedia\footnote{\url{http://en.wikipedia.org}}. Co-occurrences of class names can be leveraged to infer relationships between classes, leading to an embedding of the classes. Standard approaches to produce word embeddings from co-occurrences include Latent Semantic Analysis (LSA)~\cite{De88}, probabilistic Latent Semantic Analysis (pLSA)~\cite{Ho99} or Latent Dirichlet Allocation (LDA)~\cite{BNJ03}. In this work, we use the recent state-of-the-art approach of Mikolov \etal~\cite{MSCCD13}, also referred to as ``Word2Vec''. It uses a skip-gram model that enforces a word (or a phrase) to be a good predictor of its surrounding words, \ie it enforces neighboring words (or phrases) to be close to each other in the embedded space. Such an embedding, which we refer to as \textbf{Word2Vec Label Embedding (WLE)}, was recently used for zero-shot recognition~\cite{FCS13} on fine-grained datasets~\cite{ARWLS15}.
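+
+ As a sketch of how such an embedding can be assembled once word (or phrase) vectors have been trained, assuming a hypothetical dictionary \texttt{word\_vectors} mapping tokens to vectors (multi-word class names are simply averaged here):
+ \begin{verbatim}
+ import numpy as np
+
+ def word2vec_label_embedding(class_names, word_vectors, l2_normalize=True):
+     """Build Phi^W by looking up (or averaging) word vectors for the class names."""
+     dim = len(next(iter(word_vectors.values())))
+     cols = []
+     for name in class_names:
+         tokens = [t for t in name.lower().split() if t in word_vectors]
+         vec = np.mean([word_vectors[t] for t in tokens], axis=0) if tokens \
+               else np.zeros(dim)
+         cols.append(vec)
+     Phi = np.stack(cols, axis=1)            # E x C
+     if l2_normalize:
+         Phi /= np.maximum(np.linalg.norm(Phi, axis=0, keepdims=True), 1e-12)
+     return Phi
+ \end{verbatim}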
368
+
369
+ \vspace{2mm}
370
+ In section \ref{sec:exp}, we compare attributes, class hierarchies and textual information (\ie resp. ALE, HLE and WLE) as sources of side information for zero-shot recognition.
371
+
372
+
373
+ \subsection{Data-dependence of label embedding}
374
+ A second criterion is whether the label embedding used at prediction time was fit to training data at training time or not. Here, being \emph{data-dependent} refers to the \emph{training data}, putting aside all other possible sources of information. There are several types of approaches in this respect: i) fixed and data-independent label embeddings; ii) data-dependent, learnt solely from training data; iii) data-dependent, learnt jointly from training data and side information.
375
+
376
+ Fixed and data-independent embeddings correspond to fixed mappings of the original class labels to a lower-dimensional space. In our experiments, we explore three such embeddings: i) the trivial label embedding corresponding to the identity mapping, which boils down to plain one-versus-rest classification (\textbf{OVR}); ii) Gaussian Label Embedding (\textbf{GLE}), using Gaussian random projection matrices and relying on Johnson-Lindenstrauss properties; iii) Hadamard Label Embedding, similarly, using Hadamard matrices to build the random projection matrices. None of these three label embedding approaches uses the training data (nor any side information) to build the label embedding. It is worthwhile to note that the dimensionality of these label embeddings does rely on the training data, since it is usually cross-validated; we shall however ignore this fact here for simplicity of the exposition.
377
+
378
+ Data-dependent label embeddings use the training data to build the label embedding used at prediction time. Popular methods in this family are principal component analysis on the outputs, canonical correlation analysis and the plain \textbf{WSABIE} approach.
379
+
380
+ Note that it is possible to use both the available training data {\em and} side information to learn the embedding functions. The proposed family of approaches, Attribute Label Embedding (\textbf{ALE}),
381
+ belongs to this latter category.
382
+
383
+ \vspace{2mm}\noindent
384
+ {\bf Combining embeddings.} Different embeddings can be easily combined in the label embedding framework, \eg through simple concatenation of the different embeddings or through more complex operations such as a CCA of the embeddings. This is to be contrasted with DAP which cannot accommodate so easily other sources of prior information.
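+
+ A minimal sketch of the two simple combination schemes evaluated later (early fusion by concatenating the per-class embeddings, late fusion by averaging the scores of two separately trained models):
+ \begin{verbatim}
+ import numpy as np
+
+ def early_fusion(Phi_A, Phi_H):
+     """AHLE early: concatenate attribute and hierarchy embeddings of each class."""
+     return np.vstack([Phi_A, Phi_H])              # (E_A + E_H) x C
+
+ def late_fusion_predict(theta_x, W_A, Phi_A, W_H, Phi_H):
+     """AHLE late: average the compatibility scores of two independent models."""
+     scores = 0.5 * (theta_x @ W_A @ Phi_A + theta_x @ W_H @ Phi_H)
+     return int(np.argmax(scores))
+ \end{verbatim}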
385
+
386
+
387
+
388
+ \section{Experiments}
389
+ \label{sec:exp}
390
+
391
+ We now evaluate the proposed ALE framework on two public benchmarks: Animals with Attributes (AWA) and CUB-200-2011 (CUB). AWA \cite{LNH09} contains roughly 30,000 images of 50 animal classes. CUB \cite{WBPB11} contains roughly 11,800 images of 200 bird classes.
392
+
393
+ We first describe in sections~\ref{sec:in} and \ref{sec:out} respectively the input embeddings (\ie image features) and output embeddings that we have used in our experiments. In section~\ref{sec:zero}, we present zero-shot recognition experiments, where training and test classes are disjoint. In section~\ref{sec:few}, we go beyond zero-shot learning and consider the case where we have plenty of training data for some classes and little training data for others. Finally, in section~\ref{sec:full} we report results in the case where we have equal amounts of training data for all classes.
394
+
395
+ \subsection{Input embeddings}
396
+ \label{sec:in}
397
+ Images are resized to 100K pixels if larger while keeping the aspect ratio. We extract 128-dim SIFT descriptors \cite{Lo04} and 96-dim color descriptors \cite{CCP07} from regular grids at multiple scales. Both of them are reduced to 64-dim using PCA. These descriptors are, then, aggregated into an image-level representation using the Fisher Vector (FV) \cite{PSM10}, shown to be a state-of-the-art patch encoding technique in~\cite{CLV11}. Therefore, our input embedding function $\theta$ takes as input an image and outputs a FV representation. Using Gaussian Mixture Models with 16 or 256 Gaussians, we compute one SIFT FV and one color FV per image and concatenate them into either 4,096 (4K) or 65,536-dim (64K) FVs. As opposed to \cite{APHS13}, we do not apply PQ-compression which explains why we report better results in the current work (\eg on average 2\% better with the same output embeddings on CUB).
398
+
399
+ \subsection{Output Embeddings}
400
+ \label{sec:out}
401
+
402
+ In our experiments, we considered three embeddings derived from side information: attributes, class taxonomies and textual corpora. When considering attributes, we use the attributes (binary or continuous) as they are provided with the datasets, with no further side information.
403
+
404
+ \vspace{2mm}
405
+ \noindent {\bf Attribute Label Embedding (ALE).} In AWA, each class was annotated with 85 attributes by 10 students \cite{OSW91}. Continuous class-attribute associations were obtained by averaging the per-student votes and subsequently thresholded to obtain binary attributes. In CUB, 312 attributes were obtained from a bird field guide. Each image was annotated according to the presence/absence of these attributes. The per-image attributes were averaged to obtain continuous-valued class-attribute associations and thresholded with respect to the overall mean to obtain binary attributes. By default, we use continuous attribute embeddings in our experiments on both datasets.
406
+
407
+ \vspace{2mm}
408
+ \noindent {\bf Hierarchical Label Embedding (HLE).} We use the Wordnet hierarchy as a source of prior information to compute output embeddings. We collect the set of ancestors of the 50 AWA (resp. 200 CUB) classes from Wordnet and build a hierarchy with 150 (resp. 299) nodes\footnote{In some cases, some of the nodes have a single child. We did not clean the automatically obtained hierarchy.}. Hence, the output dimensionality is 150 (resp. 299) for AWA (resp. CUB). We compute the binary output codes following~\cite{TJH05}: for a given class, an output dimension is set to $\{0,1\}$ according to the absence/presence of the corresponding node among the ancestors. The class embeddings are subsequently $\ell_2$-normalized.
409
+
410
+ \vspace{2mm}
411
+ \noindent {\bf Word2Vec Label Embedding (WLE).}
412
+ We trained the skip-gram model on the 13 February 2014 version of the English-language Wikipedia which was tokenized to 1.5 million words and phrases that contain the names of our visual object classes.
413
+ Additionally, we use a hierarchical softmax layer\footnote{We obtain word2vec representations using the publicly available implementation from \texttt{https://code.google.com/p/word2vec/}.}. The dimensionality of the output embeddings was cross-validated on a per-dataset basis.
414
+
415
+ \vspace{2mm}
416
+ We also considered three data-independent embeddings:
417
+
418
+ \vspace{2mm}
419
+ \noindent {\bf One-Vs-Rest embedding (OVR).}
420
+ The embedding dimensionality is $C$ where $C$ is the number of classes and the matrix $\Phi$ is the $C \times C$ identity matrix. This is equivalent to training independently one classifier per class.
421
+
422
+ \vspace{2mm}
423
+ \noindent {\bf Gaussian Label Embedding (GLE).}
424
+ The class embeddings are drawn from a standard normal distribution, similar to random projections in compressed sensing~\cite{DeVore:2007}. Similarly to WSABIE, the label embedding dimensionality $E$ is a parameter of GLE which needs to be cross-validated. For GLE, since the embedding is randomly drawn, we repeat the experiments 10 times and report the average (as well as the standard deviation when relevant).
425
+
426
+ \vspace{2mm}
427
+ \noindent {\bf Hadamard Label Embedding.}
428
+ A Hadamard matrix is a square matrix whose rows/columns are mutually orthogonal and whose entries are $\{-1,1\}$~\cite{DeVore:2007}. Hadamard matrices can be computed iteratively with $H_1 = (1)$ and $H_{2^k} = \left( \begin{array}{cc} H_{2^{k-1}} & H_{2^{k-1}}\\
429
+ H_{2^{k-1}} & -H_{2^{k-1}} \end{array} \right)$. In our experiments Hadamard embedding yielded significantly worse results than GLE. Therefore, we only report GLE results in the following.
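+
+ A minimal sketch of this construction, taking the first $C$ columns of the smallest $H_{2^k}$ with $2^k \geq C$ as class embeddings:
+ \begin{verbatim}
+ import numpy as np
+
+ def hadamard_label_embedding(C):
+     """Return a (2^k x C) {-1,+1} embedding built by the recursive construction."""
+     H = np.array([[1.0]])
+     while H.shape[0] < C:
+         H = np.block([[H, H], [H, -H]])
+     return H[:, :C]      # one column per class
+ \end{verbatim}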
430
+
431
+ \vspace{2mm}
432
+ Finally, when labeled training data is available in sufficient quantity, the embeddings can be learned from the training data. In this work, we considered one data-driven approach to label embedding:
433
+
434
+ \vspace{2mm}
435
+ \noindent {\bf Web-Scale Annotation By Image Embedding (WSABIE).}
436
+ The objective function of WSABIE~\cite{WBU10} is provided in (\ref{eqn:wsabie}) and the corresponding optimization algorithm is similar to the one of ALE described in Algorithm 1. The difference is that WSABIE does not use any prior information and, therefore, the regularization value $\mu$ is set to 0 in equations (\ref{eqn:upd1}) and (\ref{eqn:upd2}). Another difference with ALE is that the embedding dimensionality $E$ is a parameter of WSABIE which is obtained through cross-validation. This is an advantage of WSABIE since it provides an additional free parameter compared to ALE. However, the cross-validation procedure is computationally intensive.
437
+
438
+ \vspace{2mm}
439
+ In summary, in the following we report results for six label embedding strategies: ALE, HLE, WLE, OVR, GLE and WSABIE. Note that OVR, GLE and WSABIE are not applicable to zero-shot learning since they do not rely on any source of prior information and consequently do not provide a meaningful way to embed a new class for which we do not have any training data.
440
+
441
+ \subsection{Zero-Shot Learning}
442
+ \label{sec:zero}
443
+
444
+ \vspace{2mm} \noindent
445
+ {\bf Set-up.} In this section, we evaluate the proposed ALE in the zero-shot setting. For AWA, we use the standard zero-shot setup which consists in learning parameters on 40 classes and evaluating accuracy on the 10 remaining ones. We use all the images in 40 learning classes ($\approx$ 24,700 images) to learn and cross-validate the model parameters. We then use all the images in 10 evaluation classes ($\approx$ 6,200 images) to measure accuracy. For CUB, we use 150 classes for learning ($\approx$ 8,900 images) and 50 for evaluation ($\approx$ 2,900 images).
446
+
447
+ \begin{table}[t]
448
+ \begin{center}
449
+ \small
450
+ \resizebox{\linewidth}{!}{
451
+ \begin{tabular}{|l|l||c|c|c|c|c|c|}
452
+ \hline
453
+ & & \multicolumn{6}{c|}{AWA} \\
454
+ \hline
455
+ & & \multicolumn{3}{c|}{FV=4K} & \multicolumn{3}{c|}{FV=64K}\\
456
+ \hline
457
+ $\mu$ & $\ell_2$ & cont & $\{0,1\}$ & $\{-1,+1\}$ & cont & $\{0,1\}$ & $\{-1,+1\}$ \\
458
+ \hline
459
+ no & no & 41.5 & 34.2 & 32.5 & 44.9 & 42.4 & 41.8 \\
460
+ yes & no & 42.2 & 33.8 & 33.8 & 44.9 & 42.4 & 42.4 \\
461
+ no & yes & {\bf 45.7} & 34.2 & 34.8 & {\bf 48.5} & 44.6 & 41.8 \\
462
+ yes & yes & 44.2 & 34.9 & 34.9 & 47.7 & 44.8 & 44.8 \\
463
+ \hline
464
+ \hline
465
+ & & \multicolumn{6}{c|}{CUB} \\
466
+ \hline
467
+ & & \multicolumn{3}{c|}{FV=4K} & \multicolumn{3}{c|}{FV=64K}\\
468
+ \hline
469
+ $\mu$ & $\ell_2$ & cont & $\{0,1\}$ & $\{-1,+1\}$ & cont & $\{0,1\}$ & $\{-1,+1\}$ \\
470
+ \hline
471
+ no & no & 17.2 & 10.4 & 12.8 & 22.7 & 20.5 & 19.6 \\
472
+ yes & no & 16.4 & 10.4 & 10.4 & 21.8 & 20.5 & 20.5 \\
473
+ no & yes & {\bf 20.7} & 15.4 & 15.2 & {\bf 26.9} & 22.3 & 19.6 \\
474
+ yes & yes & 20.0 & 15.6 & 15.6 & 26.3 & 22.8 & 22.8 \\
475
+ \hline
476
+ \end{tabular}
477
+ }
478
+ \end{center}
479
+
480
+ \caption{Comparison of the continuous embedding (cont), the binary $\{0,1\}$ embedding and the binary $\{+1,-1\}$ embedding. We also study the impact of mean-centering ($\mu$) and $\ell_2$-normalization.}
481
+ \label{tab:enc} \vspace{-7mm}
482
+ \end{table}
483
+
484
+ \vspace{2mm}\noindent
485
+ {\bf Comparison of output encodings for ALE.} We first compare three different output encodings: (i) continuous encoding, \ie we do not binarize the class-attribute associations, (ii) binary $\{0,1\}$ encoding and (iii) binary $\{-1,+1\}$ encoding. We also compare two normalizations: (i) mean-centering of the output embeddings and (ii) $\ell_2$-normalization. We use the same embedding and normalization strategies at training and test time.
486
+
487
+ Results are shown in Table \ref{tab:enc}. The conclusions are the following. Significantly better results are obtained with continuous embeddings than with thresholded binary embeddings. On AWA with 64K-dim FV, the accuracy is 48.5\% with continuous and 41.8\% with $\{-1,+1\}$ embeddings. Similarly on CUB with 64K-dim FV, we obtain 26.9\% with continuous and 19.6\% with $\{-1,+1\}$ embeddings. This is expected since continuous embeddings encode the strength of association between a class and an attribute and, therefore, carry more information. We believe that this is a major strength of the proposed approach as other algorithms such as DAP cannot accommodate such soft values in a straightforward manner. Mean-centering seems to have little impact, with a difference of 0.8\% (between 48.5\% and 47.7\%) on AWA and 0.6\% (between 26.9\% and 26.3\%) on CUB using 64K FV as input and continuous attributes as output embeddings. On the other hand, $\ell_2$-normalization makes a significant difference in all configurations except for the $\{-1,+1\}$ encoding (\eg only 2.4\% difference between 44.8\% and 42.4\% on AWA, 2.3\% difference between 22.8\% and 20.5\% on CUB). This is expected, since all class embeddings already have a constant norm for $\{-1,+1\}$ embeddings (the square-root of the number of output dimensions $E$). In what follows, we always use the continuous $\ell_2$-normalized embeddings without mean-centering.
488
+
489
+
490
+ \begin{table}[t]
491
+ \begin{center}
492
+ \small
493
+ \begin{tabular}{|r||c|c|c|}
494
+ \hline
495
+ & RR & SSVM & RNK \\
496
+ \hline
497
+ AWA & 44.5 & 47.9 & \bf{48.5} \\
498
+ \hline
499
+ CUB & 21.6 & {\bf 26.3} & \bf{26.3} \\
500
+ \hline
501
+ \end{tabular}
502
+ \end{center}
503
+ \caption{Comparison of different learning algorithms for ALE: ridge-regression (RR), multi-class SSVM (SSVM) and ranking based on WSABIE (RNK).}
504
+ \label{tab:learn} \vspace{-5mm}
505
+ \end{table}
506
+
507
+ \vspace{2mm}\noindent
508
+ {\bf Comparison of learning algorithms.} We now compare three objective functions to learn the mapping between inputs and outputs. The first one is Ridge Regression (RR) which was used in \cite{PPH09} to map input features to output attribute labels. In a nutshell, RR consists in optimizing a regularized quadratic loss for which there exists a closed form formula. The second one is the standard structured SVM (SSVM) multiclass objective function of \cite{TJH05}. The third one is the ranking objective (RNK) of WSABIE \cite{WBU10} which is described in detail in section \ref{sec:obj}. The results are provided in Table \ref{tab:learn}. On AWA, the highest result is 48.5\% obtained with RNK, followed by SSVM with 47.9\%, whereas RR performs worse with 44.5\%. On CUB, RNK and SSVM obtain 26.3\% accuracy whereas RR again performs somewhat worse with 21.6\%. Therefore, the conclusion is that the multiclass and ranking frameworks are on-par and outperform the simple ridge regression. This is not surprising since the two former objective functions are more closely related to our end goal which is classification. In what follows, we always use the ranking framework (RNK) to learn the parameters of our model, since it both performs well and was shown to be scalable \cite{WBU10,PAH12}.
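+
+ For reference, a minimal sketch of the RR baseline in our notation (a closed-form mapping from image embeddings to the class embeddings of their labels; \texttt{lam} is the regularization weight and the arrays are hypothetical):
+ \begin{verbatim}
+ import numpy as np
+
+ def ridge_regression(Theta, Y, Phi, lam=1.0):
+     """Closed-form ridge regression from image embeddings to class embeddings.
+
+     Theta : (N, D) image embeddings;  Y : (N,) integer labels;  Phi : (E, C).
+     Returns W of shape (D, E) such that theta(x)' W approximates phi(y).
+     """
+     T = Phi[:, Y].T                       # (N, E) regression targets phi(y_n)
+     D = Theta.shape[1]
+     W = np.linalg.solve(Theta.T @ Theta + lam * np.eye(D), Theta.T @ T)
+     return W
+ # Prediction then assigns x to the class y minimizing ||theta(x)' W - phi(y)||^2.
+ \end{verbatim}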
509
+
510
+ \begin{table}[t]
511
+ \begin{center}
512
+ \small
513
+ \begin{tabular}{|r|c|c||c|c|}
514
+ \hline
515
+ & \multicolumn{2}{|c||}{Obj. pred.} & \multicolumn{2}{|c|}{Att. pred.} \\
516
+ \hline
517
+ & DAP & ALE & DAP & ALE \\
518
+ \hline
519
+ \hline
520
+ AWA & 41.0 & \bf{48.5} & \bf{72.7} & \bf{72.7} \\
521
+ \hline
522
+ CUB & 12.3 & \bf{26.9} & \bf{64.8} & 59.4 \\
523
+ \hline
524
+ \end{tabular}
525
+ \end{center}
526
+ \caption{Comparison of DAP \cite{LNH09} with ALE. Left: object classification accuracy (top-1 \%) on the 10 AWA and 50 CUB evaluation classes. Right: attribute prediction accuracy (AUC \%) on the 85 AWA and 312 CUB attributes. We use 64K FVs.}
527
+ \label{tab:dap} \vspace{-8mm}
528
+ \end{table}
529
+
530
+ \begin{figure*}[t]
531
+ \centering
532
+ \subfigure[AWA (FV=4K)] {
533
+ \resizebox{0.23\linewidth}{!}{\includegraphics[trim=35 5 75 35, clip=true]{sampling_att_16_AWA}}
534
+ \label{fig:svd_AWA_4K}
535
+ }
536
+ \subfigure[CUB (FV=4K)] {
537
+ \resizebox{0.23\linewidth}{!}{\includegraphics[trim=35 5 75 20, clip=true]{sampling_att_16_CUB}}
538
+ \label{fig:svd_CUB_4K}
539
+ }
540
+ \subfigure[AWA (FV=64K)] {
541
+ \resizebox{0.23\linewidth}{!}{\includegraphics[trim=35 5 75 20, clip=true]{sampling_att_256_AWA}}
542
+ \label{fig:svd_AWA_64K}
543
+ }
544
+ \subfigure[CUB (FV=64K)] {
545
+ \resizebox{0.23\linewidth}{!}{\includegraphics[trim=35 5 75 20, clip=true]{sampling_att_256_CUB}}
546
+ \label{fig:svd_CUB_64K}
547
+ }
548
+ \caption{Classification accuracy on AWA and CUB as a function of the label embedding dimensionality. We compare the baseline which uses all attributes, with an SVD dimensionality reduction and a sampling of attributes (we report the mean and standard deviation over 10 samplings).}
549
+ \label{fig:svd} \vspace{-4mm}
550
+ \end{figure*}
551
+
552
+ \vspace{2mm}\noindent
553
+ {\bf Comparison with DAP.} In this section we compare our approach to direct attribute prediction (DAP)~\cite{LNH09}. We start by giving a short description of DAP and, then, present the results of the
554
+ comparison.
555
+
556
+ In DAP, an image $x$ is assigned to the class $y$, which has the highest posterior probability:
557
+ \begin{equation}
558
+ p(y|x) \propto \prod_{e=1}^E p(a_e = \rho_{y,e}|x) .
559
+ \end{equation}
560
+ $\rho_{y,e}$ is the binary association measure between attribute $a_e$ and class $y$.
561
+ $p(a_e=1|x)$ is the probability that image $x$ contains attribute
562
+ $e$.
563
+ We train for each attribute one linear classifier on the FVs. We use a (regularized) logistic loss which provides an attribute classification accuracy similar to SVM but with the added benefit that its output is already a probability.
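+
+ A minimal sketch of this decision rule, assuming the per-attribute probabilities have already been estimated by the logistic classifiers (the attribute prior term of \cite{LNH09} is omitted, in line with the proportionality above); log-probabilities are summed for numerical stability:
+ \begin{verbatim}
+ import numpy as np
+
+ def dap_predict(p_att, rho, eps=1e-12):
+     """DAP: assign x to the class maximizing prod_e p(a_e = rho_{y,e} | x).
+
+     p_att : (E,)   estimated p(a_e = 1 | x) for each attribute.
+     rho   : (C, E) binary class-attribute association matrix.
+     """
+     p_att = np.clip(p_att, eps, 1.0 - eps)
+     log_p = rho * np.log(p_att) + (1 - rho) * np.log(1.0 - p_att)
+     return int(np.argmax(log_p.sum(axis=1)))  # class with the highest (log-)score
+ \end{verbatim}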
564
+
565
+ Table \ref{tab:dap}(left) compares the proposed ALE to DAP for 64K-dim FVs. Our implementation of DAP obtains 41.0\% accuracy on AWA and 12.3\% on CUB. Our result for DAP on AWA is comparable to the 40.5\% accuracy reported by Lampert \etal~\cite{LNH09}. Note however that the features are different: Lampert \etal use bag-of-features and a non-linear kernel classifier ($\chi^2$ SVMs), whereas we use Fisher vectors and a linear SVM. Linear SVMs enable us to run experiments more efficiently. We observe that on both datasets, the proposed ALE outperforms DAP significantly: 48.5\% \vs 41.0\% top-1 accuracy on AWA and 26.9\% \vs 12.3\% on CUB.
566
+
567
+
568
+ \vspace{2mm}\noindent
569
+ {\bf Attribute Correlation.}
570
+ While correlation in the input space is a well-studied topic, comparatively little work has been done to measure the correlation in the output space. Here, we reduce the output space dimensionality and study the impact on the classification accuracy. It is worth noting that reducing the output dimensionality leads to significant speed-ups at training and test times. We explore two different techniques: Singular Value Decomposition (SVD) and attribute sampling. We learn the SVD on AWA (resp. CUB) on the 50$\times$85 (resp. 200$\times$312) $\Phi^{\cal A}$ matrix. For the sampling, we sub-sample a fixed number of attributes and repeat the experiments 10 times for 10 different random sub-samplings. The results of these experiments are presented in Figure~\ref{fig:svd}.
571
+
572
+ We can conclude that there is a significant amount of correlation between attributes. For instance, on AWA with 4K-dim FVs (Figure \ref{fig:svd_AWA_4K}) when reducing the output dimensionality to 25,
573
+ we lose less than 2\% accuracy and with a reduced dimensionality of 50, we perform
574
+ even slightly better than using all the attributes. On the same dataset with 64K-dim FVs (Figure \ref{fig:svd_AWA_64K})
575
+ the accuracy drops from 48.5\% to approximately 45\% when reducing from an 85-dim space to a 25-dim space.
576
+ More impressively, on CUB with 4K-dim FVs (Figure \ref{fig:svd_CUB_4K}) with a reduced dimensionality to 25, 50 or 100 from 312,
577
+ the accuracy is better than the configuration that uses all the attributes. On the same dataset with 64K-dim FVs
578
+ (Figure \ref{fig:svd_CUB_64K}), with 25 dimensions the accuracy is on par with the 312-dim embedding.
579
+ SVD outperforms a random sampling of the attribute dimensions, although there is no guarantee that
580
+ SVD will select the most informative dimensions (see for instance the small dip in performance
581
+ on CUB at 50 dimensions). In random sampling of output embeddings, the choice of the attributes
582
+ seems to be an important factor that affects the descriptive power of output embeddings.
583
+ Consequently, the variance is higher (\eg see Figures \ref{fig:svd_AWA_4K} and \ref{fig:svd_AWA_64K}
584
+ with a reduced attribute dimensionality of 5 or 10) when a small number of attributes is selected.
585
+ In the following experiments, we do not use dimensionality reduction of the attribute embeddings.
586
+
587
+ \begin{table}[t]
588
+ \begin{center}
589
+ \small
590
+ \begin{tabular}{|r|c|c|c|c|c|c|}
591
+ \hline
592
+ & ALE & HLE & WLE & \specialcell{AHLE \\ early} & \specialcell{AHLE \\late} \\
593
+ \hline
594
+ AWA & 48.5 & 40.4 & 32.5 & 46.8 & \bf{49.4} \\
595
+ \hline
596
+ CUB & 26.9 & 18.5 & 16.8 & 27.1 & \bf{27.3} \\
597
+ \hline
598
+ \end{tabular}
599
+ \end{center}
600
+ \caption{Comparison of attributes (ALE), hierarchies (HLE) and Word2Vec (WLE) for label embedding. We consider the combination of ALE and HLE by simple concatenation (AHLE early) or by the averaging of the scores (AHLE late). We use 64K FVs.}
601
+ \label{tab:hie} \vspace{-7mm}
602
+ \end{table}
603
+
604
+
605
+
606
+
607
+
608
+ \vspace{2mm}\noindent
609
+ {\bf Attribute interpretability.}
610
+ In ALE, each column of $W$ can be interpreted as an attribute classifier and $\theta(x)'W$ as a vector of attribute scores of $x$. However, one major difference with DAP is that we do not optimize for attribute classification accuracy. This might be viewed as a disadvantage of our approach as we might lose interpretability, an important property of attribute-based systems when, for instance, one wants to include a human in the loop \cite{BWB10,WBPB11}. We, therefore, measured the attribute prediction accuracy of DAP and ALE. For each attribute, following \cite{LNH09}, we measure the AUC on the set of the evaluation classes and report the mean.
611
+
612
+ Attribute prediction scores are shown in Table \ref{tab:dap}(right). On AWA, the DAP and ALE methods obtain the same AUC accuracy of 72.7\%. On the other hand, on CUB the DAP method obtains 64.8\% AUC whereas ALE is 5.4\% lower with 59.4\% AUC. As a summary, the attribute prediction accuracy of DAP is at least as high as that of ALE. This is expected since DAP optimizes directly attribute-classification accuracy. However, the AUC for ALE is still reasonable, especially on AWA (performance is on par). Thus, our learned attribute classifiers should still be interpretable. We provide qualitative results on AWA in Figure~\ref{fig:awa_att}: we show the four highest ranked images for some of the attributes with the highest AUC scores (namely $>$90\%) and lowest AUC scores (namely $<$50\%).
613
+
614
+ \begin{figure*}
615
+ \centering
616
+ \includegraphics[width=\textwidth]{qualitative}
617
+
618
+ \caption{Sample attributes recognized with high ($>$ 90\%) accuracy (top) and low (\ie $<$50\%) accuracy (bottom) by ALE on AWA.
619
+ For each attribute we show the images ranked highest.
620
+ Note that an AUC $<$ 50\% means that the prediction is worse than random on average.
621
+ The images whose attribute is predicted correctly are circled in green and those whose attribute is predicted incorrectly
622
+ are circled in red.}
623
+ \label{fig:awa_att} \vspace{-5mm}
624
+ \end{figure*}
625
+
626
+ \vspace{2mm}\noindent
627
+ {\bf Comparison of ALE, HLE and WLE.} We now compare different sources of side information. Results are provided in Table \ref{tab:hie}. On AWA, ALE obtains 48.5\% accuracy, HLE obtains 40.4\% and WLE obtains 32\% accuracy. On CUB, ALE obtains 26.9\% accuracy, HLE obtains 18.5\% and WLE obtains 16.8\% accuracy. Note that in \cite{APHS13}, we reported better results on AWA with HLE compared to ALE. The main difference with the current experiment is that we use continuous attribute encodings while \cite{APHS13} was using a binary encoding. Note also that the comparatively poor performance of WLE with respect to ALE and HLE is not unexpected: while ALE and HLE rely on strong expert supervision, WLE is computed in an unsupervised manner from Wikipedia.
628
+
629
+ We also consider the combination of attributes and hierarchies (we do not consider the combination of WLE with other embeddings given its relatively poor performance). We explore two simple alternatives: the concatenation of the embeddings (AHLE early) and the late fusion of classification scores, obtained by averaging the scores of ALE and HLE computed separately (AHLE late). On both datasets, late fusion has a slight edge over early fusion and leads to a small improvement over ALE alone (+0.9\% on AWA and +0.4\% on CUB).
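+
+ To make the two fusion strategies concrete, the following sketch (ours, with hypothetical variable names) concatenates the per-class attribute and hierarchy embeddings for AHLE early, and averages the class scores of the two separately trained models for AHLE late.
+ \begin{verbatim}
+ import numpy as np
+
+ def ahle_early(phi_att, phi_hie):
+     """Early fusion: concatenate the (C, E_att) and (C, E_hie) class
+     embeddings; a single compatibility matrix W is then trained on the result."""
+     return np.concatenate([phi_att, phi_hie], axis=1)
+
+ def ahle_late(theta_x, W_att, phi_att, W_hie, phi_hie):
+     """Late fusion: average the class scores of the ALE and HLE models."""
+     scores_att = theta_x @ W_att @ phi_att.T   # (n_images, C)
+     scores_hie = theta_x @ W_hie @ phi_hie.T
+     return 0.5 * (scores_att + scores_hie)
+ \end{verbatim}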
630
+
631
+ In what follows, we do not report further results with WLE given its relatively poor performance and focus on ALE and HLE.
632
+
633
+ \vspace{2mm} \noindent
634
+ {\bf Comparison with the state-of-the-art.}
635
+ We can compare our results to those published in the literature on AWA since we are using the standard training/testing protocol (there is no such zero-shot protocol on CUB). To the best of our knowledge, the best zero-shot recognition results on AWA are those of Yu \etal~\cite{YCFSC13} with 48.3\% accuracy. We report 48.5\% with ALE and 49.4\% with AHLE (late fusion of ALE and HLE). Note that we use different features.
636
+
637
+ \subsection{Few-Shots Learning}
638
+ \label{sec:few}
639
+
640
+ \begin{figure}[t]
641
+ \centering
642
+ \subfigure[AWA (FV=64K)] {
643
+ \resizebox{0.7\linewidth}{!}{\includegraphics[trim=35 5 75 20, clip=true]{fewshots_256_AWA}}
644
+ \label{fig:few_AWA_256}
645
+ }
646
+ \subfigure[CUB (FV=64K)] {
647
+ \resizebox{0.7\linewidth}{!}{\includegraphics[trim=35 5 75 20, clip=true]{fewshots_256_CUB}}
648
+ \label{fig:few_CUB_256}
649
+ }
650
+ \caption{Classification accuracy on AWA and CUB as a function of the number of training samples per class. To train the classifiers, we use all the images of the training ``background'' classes (used in zero-shot learning), and a small number of images randomly drawn from the relevant evaluation classes. Reported results are 10-way in AWA and 50-way in CUB.}
651
+ \label{fig:few} \vspace{-5mm}
652
+ \end{figure}
653
+
654
+
655
+
656
+
657
+ \noindent {\bf Set-up.}
658
+ In these experiments, we assume that we have few (\eg 2, 5, 10, etc.) training samples for a set of classes of interest (the 10 AWA and 50 CUB evaluation classes) in addition to all the samples from a set of ``background classes'' (the remaining 40 AWA and 150 CUB classes). For each evaluation class, we use approximately half of the images for training (the 2, 5, 10, etc. training samples are drawn from this pool) and the other half for testing. The minimum number of images per class in the evaluation set is 302 (AWA) and 42 (CUB). So that each class has the same number of training samples available, we use 100 images (AWA) and 20 images (CUB) per class as the training pool and the remaining images for testing.
659
+
660
+ \vspace{2mm} \noindent
661
+ {\bf Algorithms.} We compare the proposed ALE with three baselines: OVR, GLE and WSABIE. We are especially interested in analyzing the following factors: (i) the influence of parameter sharing (ALE, GLE, WSABIE) \vs no parameter sharing (OVR), (ii) the influence of learning the embedding (WSABIE) \vs having a fixed embedding (ALE, OVR and GLE), and (iii) the influence of prior information (ALE) \vs no prior information (OVR, GLE and WSABIE).
662
+
663
+ For ALE and WSABIE, $W$ is initialized to the matrix learned in the zero-shot experiments. For ALE, we experimented with three different learning variations:
664
+ \begin{itemize}
665
+ \item ALE($W$) consists in learning the parameters $W$ and keeping the embedding fixed ($\Phi = \Phi^{\cal A}$).
666
+ \item ALE($\Phi$) consists in learning the embedding parameters $\Phi$ and keeping $W$ fixed.
667
+ \item ALE($W\Phi$) consists in learning both $W$ and $\Phi$.
668
+ \end{itemize}
669
+
670
+ While both ALE($W$) and ALE($\Phi$) are implemented by stochastic (sub)gradient descent (see Algorithm~\ref{alg:sgd} in Sec.~\ref{sec:obj}), ALE($W\Phi$) is implemented by stochastic alternating optimization, which alternates between SGD steps over the variable $W$ and SGD steps over the variable $\Phi$. Theoretical convergence of SGD for ALE($W$) and ALE($\Phi$) follows from standard results in stochastic optimization with convex non-smooth objectives~\cite{SSSC11,ShaiShai:2014}. Theoretical convergence of the stochastic alternating optimization is beyond the scope of this paper. Experimental results show that the strategy works well in practice.
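+
+ The following sketch (ours, not the implementation used in the experiments) illustrates the alternating scheme on a simplified ranking objective: a single randomly sampled violating class is used per update and the rank-based weighting of the full objective is omitted; all names are hypothetical.
+ \begin{verbatim}
+ import numpy as np
+
+ def alternating_ranking_sgd(X, y, Phi, n_epochs=10, lr=0.01, margin=1.0, seed=0):
+     """Stochastic alternating optimization for ALE(W Phi), simplified.
+
+     X   : (n, d) image embeddings theta(x);  y : (n,) class indices
+     Phi : (E, C) label embedding, one column per class (attribute prior)
+     """
+     rng = np.random.default_rng(seed)
+     W = np.zeros((X.shape[1], Phi.shape[0]))
+     C = Phi.shape[1]
+     for epoch in range(n_epochs):
+         for phase in ("W", "Phi"):               # alternate the updated variable
+             for i in rng.permutation(len(X)):
+                 x, yi = X[i], y[i]
+                 yn = int(rng.integers(C))        # candidate negative class
+                 if yn == yi:
+                     continue
+                 scores = x @ W @ Phi
+                 if margin + scores[yn] - scores[yi] <= 0:
+                     continue                     # no margin violation
+                 if phase == "W":                 # SGD step on W, Phi fixed
+                     W -= lr * np.outer(x, Phi[:, yn] - Phi[:, yi])
+                 else:                            # SGD step on Phi, W fixed
+                     g = W.T @ x
+                     Phi[:, yi] += lr * g
+                     Phi[:, yn] -= lr * g
+     return W, Phi
+ \end{verbatim}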
671
+
672
+
673
+ \vspace{2mm} \noindent
674
+ {\bf Results.} We show results in Figure~\ref{fig:few} for AWA and CUB using 64K-dim features. We can draw the following conclusions. First, GLE underperforms all other approaches for limited training data which shows that random embeddings are not appropriate in this setting. Second, in general, WSABIE and ALE outperform OVR and GLE for small training sets (\eg for less than 10 training samples) which shows that learned embeddings (WSABIE) or embeddings based on prior information (ALE) can be effective when training data is scarce. Third, for tiny amounts of training data (\eg 2-5 training samples per class), ALE outperforms WSABIE which shows the importance of prior information in this setting. Fourth, all variations of ALE -- ALE($W$), ALE($\Phi$) and ALE($W\Phi$) -- perform somewhat similarly. Fifth, as the number of training samples increases, all algorithms seem to converge to a similar accuracy, \ie as expected parameter sharing and prior information are less crucial when training data is plentiful.
675
+
676
+ \subsection{Learning and testing on the full datasets}
677
+ \label{sec:full}
678
+ In these experiments, we learn and test the classifiers on the 50 AWA (resp. 200 CUB) classes. For each class, we reserve approximately half of the data for training and cross-validation purposes and half of the data for test purposes. On CUB, we use the standard training/test partition provided with the dataset. Since the experimental protocol in this section is significantly different from the one chosen
679
+ for zero-shot and few-shots learning, the results cannot be directly compared with those of the previous sections.
680
+
681
+ \vspace{2mm}
682
+ \noindent {\bf Comparison of output encodings.}
683
+ We first compare different encoding techniques (continuous embedding \vs binary embedding) and normalization strategies (with/without mean centering and with/without $\ell_2$-normalization). The results are provided in Table \ref{tab:disc_cont_all}. We can draw the following conclusions.
684
+
685
+ As is the case for zero-shot learning, mean-centering has little impact and $\ell_2$-normalization consistently improves performance, showing the importance of normalized outputs. On the other hand, a major difference with the zero-shot case is that the $\{0,1\}$ and continuous embeddings perform on par. On AWA, in the 64K-dim FV case, ALE with continuous embeddings leads to 53.3\% accuracy whereas $\{0,1\}$ embeddings lead to 52.5\% (0.8\% difference). On CUB with 64K-dim FVs, ALE with continuous embeddings leads to 21.6\% accuracy while $\{0,1\}$ embeddings lead to 21.4\% (0.2\% difference).
686
+ This seems to indicate that the quality of the prior information used to perform label embedding has less impact when training data is plentiful.
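+
+ As an illustration of the encodings compared in Table \ref{tab:disc_cont_all}, the sketch below (ours) builds the class embedding matrix from a class-attribute association matrix with optional binarization, mean-centering and $\ell_2$-normalization; the thresholding rule used for binarization is an assumption, and all names are hypothetical.
+ \begin{verbatim}
+ import numpy as np
+
+ def encode_classes(A, binary=False, mean_center=False, l2_normalize=True):
+     """Per-class output embedding Phi from a class-attribute matrix A.
+
+     A : (n_classes, n_attributes) continuous attribute associations
+     """
+     Phi = (A > A.mean()).astype(float) if binary else A.astype(float)
+     if mean_center:
+         Phi = Phi - Phi.mean(axis=0, keepdims=True)    # center each attribute
+     if l2_normalize:
+         norms = np.linalg.norm(Phi, axis=1, keepdims=True)
+         Phi = Phi / np.maximum(norms, 1e-12)           # unit-norm class embeddings
+     return Phi
+ \end{verbatim}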
687
+
688
+ \begin{table}[t]
689
+ \begin{center}
690
+ \small
691
+ \begin{tabular}{|r|r|c|c|c|c|}
692
+ \hline
693
+ & & \multicolumn{4}{c|}{AWA}\\
694
+ \hline
695
+ & & \multicolumn{2}{c|}{FV=4K} & \multicolumn{2}{c|}{FV=64K} \\
696
+ \hline
697
+ $\mu$ & $\ell_2$ & $\{0,1\}$ & cont & $\{0,1\}$ & cont \\
698
+ \hline
699
+ no & no & 42.3 & 41.6 & 45.3 & 46.2 \\
700
+ \hline
701
+ no & yes & 44.3 & 44.6 & 52.5 & {\bf 53.3} \\
702
+ \hline
703
+ yes & no & 42.2 & 41.6 & 45.8 & 46.2 \\
704
+ \hline
705
+ yes & yes & {\bf 44.8} & 44.5 & 51.3 & 52.0 \\
706
+ \hline
707
+ \hline
708
+ & & \multicolumn{4}{c|}{CUB}\\
709
+ \hline
710
+ & & \multicolumn{2}{c|}{FV=4K} & \multicolumn{2}{c|}{FV=64K} \\
711
+ \hline
712
+ $\mu$ & $\ell_2$ & $\{0,1\}$ & cont & $\{0,1\}$ & cont \\
713
+ \hline
714
+ no & no & 13.0 & 13.9 & 16.5 & 16.7 \\
715
+ \hline
716
+ no & yes & 16.2 & {\bf 17.5} & 21.4 & {\bf 21.6} \\
717
+ \hline
718
+ yes & no & 13.2 & 13.9 & 16.5 & 16.7 \\
719
+ \hline
720
+ yes & yes & 16.1 & 17.3 & 17.3 & {\bf 21.6} \\
721
+ \hline
722
+ \end{tabular}
723
+ \end{center}
724
+ \caption{Comparison of different output encodings:
725
+ binary $\{0,1\}$ encoding, continuous encoding,
726
+ with/without mean-centering ($\mu$) and with/without $\ell_2$-normalization.}
727
+ \label{tab:disc_cont_all} \vspace{-7mm}
728
+ \end{table}
729
+
730
+ \vspace{2mm}
731
+ \noindent {\bf Comparison of output embedding methods.} We now compare on the full training sets several learning algorithms: OVR, GLE with a costly setting ($E=2,500$ output dimensions; this was the largest output dimensionality that allowed us to run the experiments in a reasonable amount of time), WSABIE (with cross-validated $E$), ALE (we use the ALE($W$) variant where the embedding parameters are kept fixed), HLE and AHLE (with early and late fusion). Results are provided in Table \ref{tab:full}.
732
+
733
+ We can observe that, in this setting, all methods perform somewhat similarly. In particular, the simple OVR and GLE baselines provide competitive performance: OVR outperforms all other methods on CUB and GLE performs best on AWA. This confirms that the quality of the embedding has little importance when training data is plentiful.
734
+
735
+ \begin{table}[t]
736
+ \begin{center}
737
+ \small
738
+ \resizebox{\linewidth}{!}{
739
+ \begin{tabular}{|r|c|c|c|c|c|c|c|}
740
+ \hline
741
+ & OVR & GLE & \footnotesize{WSABIE} & ALE & HLE & \specialcell{AHLE \\early} & \specialcell{AHLE \\ late} \\
742
+ \hline
743
+ AWA & 52.3 & \textbf{56.1} & 51.6 & 52.5 & 55.9 & 55.3 & 55.8\\
744
+ \hline
745
+ CUB & \bf{26.6} & 22.5 & 19.5 & 21.6 & 22.5 & 24.6 & 25.5 \\
746
+ \hline
747
+ \end{tabular}
748
+ }
749
+ \end{center}
750
+ \caption{Comparison of different output embedding methods (OVR, GLE, WSABIE, ALE, HLE, AHLE early and AHLE late) on the full AWA and CUB datasets (resp. 50 and 200 classes). We use 64K FVs.
751
+ }
752
+ \label{tab:full} \vspace{-7mm}
753
+ \end{table}
754
+
755
+ \vspace{2mm}
756
+ \noindent {\bf Reducing the training set size.} We also studied the effect of reducing the amount of training data by using only 1/4, 1/2 and 3/4 of the full training set. We therefore sampled the corresponding fraction of images from the full training set and repeated the experiments ten times with ten different samples. For these experiments, we report GLE results with two settings:
757
+ a low-cost setting that uses the same number of output dimensions $E$ as ALE (\ie 85 for AWA and 312 for CUB), and a high-cost setting that uses a large number of output dimensions ($E=2,500$ -- see the comment above about the choice of this value). We show results in Figure \ref{fig:part_seed}.
758
+
759
+ On AWA, GLE outperforms all alternatives, closely followed by AHLE late. On CUB, OVR outperforms all alternatives, closely followed again by AHLE late. ALE, HLE and GLE with high-dimensional embeddings perform similarly. A general conclusion from these experiments is that, when we use high-dimensional features, even simple algorithms such as OVR, which are not well justified for multi-class classification problems, can lead to state-of-the-art performance.
760
+
761
+ \begin{figure}[t]
762
+ \centering
763
+ \subfigure[AWA (FV=64K)] {
764
+ \resizebox{0.7\linewidth}{!}{\includegraphics[trim=35 5 50 20, clip=true]{part_256_AWA}}
765
+ }
766
+ \subfigure[CUB (FV=64K)] {
767
+ \resizebox{0.7\linewidth}{!}{\includegraphics[trim=35 5 50 20, clip=true]{part_256_CUB}}
768
+ }
769
+ \caption{Learning on AWA and CUB using 1/4, 1/2, 3/4 and all the training data. Compared output embeddings: OVR, GLE, WSABIE, ALE, HLE, AHLE early and AHLE late. Experiments are repeated 10 times for different samplings of the Gaussians. We use 64K FVs.
770
+ }
771
+ \label{fig:part_seed} \vspace{-7mm}
772
+ \end{figure}
773
+
774
+
775
+
776
+
777
+ \section{Conclusion}
778
+ We proposed to cast the problem of attribute-based classification as one of label-embedding. The proposed Attribute Label Embedding (ALE) addresses in a principled fashion the limitations of the original DAP model. First, we directly solve the problem at hand (image classification) without introducing an intermediate problem (attribute classification). Second, our model can leverage labeled training data (if available) to update the label embedding, using the attribute embedding as a prior. Third, the label embedding framework is not restricted to attributes and can accommodate other sources of side information such as class hierarchies or word embeddings derived from textual corpora.
779
+
780
+ In the zero-shot setting, we improved image classification results with respect to DAP without losing attribute interpretability. Continuous attributes can be effortlessly used in ALE, leading to a large boost in zero-shot classification accuracy. In addition, we have shown that the dimensionality of the output space can be significantly reduced with a small loss of accuracy. In the few-shots setting, we showed improvements with respect to the WSABIE algorithm, which learns the label embedding from labeled data but does not leverage prior information.
781
+
782
+ Another important contribution of this work was to relate different approaches to label embedding: data-independent approaches ({\em e.g.} OVR, GLE), data-driven approaches ({\em e.g.} WSABIE) and approaches based on side information ({\em e.g.} ALE, HLE and WLE).
783
+ We presented a unified framework that allows these approaches to be compared in a systematic manner.
784
+
785
+ Learning to combine several inputs has been extensively studied in machine learning and computer vision, whereas learning to combine outputs is still largely unexplored. We believe that it is a worthwhile research path to pursue.
786
+
787
+
788
+ \section*{Acknowledgments}
789
+ {\small The Computer Vision group at XRCE is funded partially by the Project Fire-ID (ANR-12-CORD-0016).
790
+ The LEAR team of Inria is partially funded by ERC Allegro, and European integrated project AXES. }
791
+
792
+ \bibliographystyle{ieee}
793
+ \bibliography{TPAMI}
794
+
795
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.2in,clip,keepaspectratio]{Zeynep}}]{Zeynep Akata}
796
+ received her MSc degree from RWTH Aachen and her PhD degree from Universit\'e de Grenoble within a collaboration between
797
+ XRCE and INRIA. In 2014, she received the Lise-Meitner Award for Excellent Women in Computer Science and is
798
+ currently working as a post-doctoral researcher at the Max Planck Institute for Informatics in Germany.
799
+ Her research interests include machine learning methods for large-scale and fine-grained image classification.
800
+ \end{IEEEbiography} \vspace{-10mm}
801
+
802
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.2in,clip,keepaspectratio]{Florent}}]{Florent Perronnin}
803
+ holds an Engineering degree from the Ecole Nationale
804
+ Sup\'erieure des T\'el\'ecommunications and a Ph.D. degree
805
+ from the Ecole Polytechnique F\'ed\'erale de Lausanne.
806
+ In 2005, he joined the Xerox Research Centre Europe in Grenoble where he currently manages the Computer Vision team.
807
+ His main interests are in the application of machine learning to computer vision tasks such as image classification, retrieval
808
+ or segmentation.
809
+ \end{IEEEbiography} \vspace{-10mm}
810
+
811
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.2in,clip,keepaspectratio]{zaid}}]{Zaid Harchaoui}
812
+ graduated from the \'Ecole Nationale Sup\'erieure des Mines, Saint-\'Etienne, France, in 2004, and received the Ph.D. degree
813
+ from ParisTech, Paris, France. Since 2010, he has been a permanent researcher in the LEAR team, INRIA Grenoble, France.
814
+ His research interests include statistical machine learning, kernel-based methods, signal processing, and computer vision.
815
+ \end{IEEEbiography} \vspace{-10mm}
816
+
817
+ \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.2in,clip,keepaspectratio]{Cordelia}}]{Cordelia Schmid}
818
+ holds a M.S. degree in computer science from the
819
+ University of Karlsruhe and a doctorate from the Institut National
820
+ Polytechnique de Grenoble. She is a research director at INRIA
821
+ Grenoble where she directs the LEAR team. She is the author of over a
822
+ hundred technical publications. In 2006 and 2014, she was awarded the
823
+ Longuet-Higgins prize for fundamental contributions in computer vision
824
+ that have withstood the test of time. In 2012, she obtained an ERC
825
+ advanced grant for "Active large-scale learning for visual
826
+ recognition". She is a fellow of IEEE.
827
+ \end{IEEEbiography}
828
+
829
+
830
+ \end{document}
papers/1504/1504.08083.tex ADDED
@@ -0,0 +1,1103 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{iccv}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{amssymb}
10
+ \usepackage{url}
11
+ \usepackage{booktabs}
12
+ \usepackage{multirow}
13
+ \usepackage{subfigure}
14
+ \usepackage{lib}
15
+ \usepackage{makecell}
16
+ \usepackage{booktabs}
17
+
18
+
19
+
20
+
21
+
22
+
23
+
24
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
25
+
26
+ \iccvfinalcopy
27
+
28
+ \def\iccvPaperID{1698} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
29
+
30
+ \newcommand{\X}{$\times$\xspace}
31
+ \newcommand{\pool}[1]{$\textrm{pool}_{#1}$\xspace}
32
+ \newcommand{\conv}[1]{$\textrm{conv}_{#1}$\xspace}
33
+ \newcommand{\maxp}[1]{$\textrm{max}_{#1}$\xspace}
34
+ \newcommand{\fc}[1]{$\textrm{fc}_{#1}$\xspace}
35
+ \newcommand{\vggsixteen}{VGG16\xspace}
36
+ \newcommand{\roi}{RoI\xspace}
37
+ \newcommand{\Sm}{{\bf S}\xspace}
38
+ \newcommand{\Med}{{\bf M}\xspace}
39
+ \newcommand{\Lg}{{\bf L}\xspace}
40
+ \newcommand{\ZF}{{\bf ZF}\xspace}
41
+ \newcommand{\ms}{ms\xspace}
42
+ \newcolumntype{x}{>\small c}
43
+ \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
44
+ \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
45
+ \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
46
+
47
+
48
+ \pagestyle{empty}
49
+
50
+ \begin{document}
51
+
52
+ \title{Fast R-CNN}
53
+
54
+ \author{Ross Girshick\\
55
+ Microsoft Research\\
56
+ {\tt\small rbg@microsoft.com}
57
+ }
58
+
59
+ \maketitle
60
+ \thispagestyle{empty}
61
+
62
+ \begin{abstract}
63
+ This paper proposes a Fast Region-based Convolutional Network method \emph{(Fast R-CNN)} for object detection.
64
+ Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks.
65
+ Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy.
66
+ Fast R-CNN trains the very deep \vggsixteen network 9\X faster than R-CNN, is 213\X faster at test-time, and achieves a higher mAP on PASCAL VOC 2012.
67
+ Compared to SPPnet, Fast R-CNN trains \vggsixteen 3\X faster, tests 10\X faster, and is more accurate.
68
+ Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at \url{https://github.com/rbgirshick/fast-rcnn}.
69
+ \end{abstract}
70
+ \section{Introduction}
71
+
72
+ Recently, deep ConvNets \cite{krizhevsky2012imagenet,lecun89e} have significantly improved image classification \cite{krizhevsky2012imagenet} and object detection \cite{girshick2014rcnn,overfeat} accuracy.
73
+ Compared to image classification, object detection is a more challenging task that requires more complex methods to solve.
74
+ Due to this complexity, current approaches (\eg, \cite{girshick2014rcnn,he2014spp,overfeat,Zhu2015segDeepM}) train models in multi-stage pipelines that are slow and inelegant.
75
+
76
+ Complexity arises because detection requires the accurate localization of objects, creating two primary challenges.
77
+ First, numerous candidate object locations (often called ``proposals'') must be processed.
78
+ Second, these candidates provide only rough localization that must be refined to achieve precise localization.
79
+ Solutions to these problems often compromise speed, accuracy, or simplicity.
80
+
81
+
82
+ In this paper, we streamline the training process for state-of-the-art ConvNet-based object detectors \cite{girshick2014rcnn,he2014spp}.
83
+ We propose a single-stage training algorithm that jointly learns to classify object proposals and refine their spatial locations.
84
+
85
+
86
+ The resulting method can train a very deep detection network (\vggsixteen \cite{simonyan2015verydeep}) 9\X faster than R-CNN \cite{girshick2014rcnn} and 3\X faster than SPPnet \cite{he2014spp}.
87
+ At runtime, the detection network processes images in 0.3s (excluding object proposal time) while achieving top accuracy on PASCAL VOC 2012 \cite{Pascal-IJCV} with a mAP of 66\% (vs. 62\% for R-CNN).\footnote{All timings use one Nvidia K40 GPU overclocked to 875 MHz.}
88
+
89
+
90
+
91
+ \subsection{R-CNN and SPPnet}
92
+ The Region-based Convolutional Network method (R-CNN) \cite{girshick2014rcnn} achieves excellent object detection accuracy by using a deep ConvNet to classify object proposals.
93
+ R-CNN, however, has notable drawbacks:
94
+ \begin{enumerate}
95
+ \itemsep0em
96
+ \item {\bf Training is a multi-stage pipeline.}
97
+ R-CNN first fine-tunes a ConvNet on object proposals using log loss.
98
+ Then, it fits SVMs to ConvNet features.
99
+ These SVMs act as object detectors, replacing the softmax classifier learnt by fine-tuning.
100
+ In the third training stage, bounding-box regressors are learned.
101
+ \item {\bf Training is expensive in space and time.}
102
+ For SVM and bounding-box regressor training, features are extracted from each object proposal in each image and written to disk.
103
+ With very deep networks, such as \vggsixteen, this process takes 2.5 GPU-days for the 5k images of the VOC07 trainval set.
104
+ These features require hundreds of gigabytes of storage.
105
+ \item {\bf Object detection is slow.}
106
+ At test-time, features are extracted from each object proposal in each test image.
107
+ Detection with \vggsixteen takes 47s / image (on a GPU).
108
+ \end{enumerate}
109
+
110
+ R-CNN is slow because it performs a ConvNet forward pass for each object proposal, without sharing computation.
111
+ Spatial pyramid pooling networks (SPPnets) \cite{he2014spp} were proposed to speed up R-CNN by sharing computation.
112
+ The SPPnet method computes a convolutional feature map for the entire input image and then classifies each object proposal using a feature vector extracted from the shared feature map.
113
+ Features are extracted for a proposal by max-pooling the portion of the feature map inside the proposal into a fixed-size output (\eg, $6 \times 6$).
114
+ Multiple output sizes are pooled and then concatenated as in spatial pyramid pooling \cite{Lazebnik2006}.
115
+ SPPnet accelerates R-CNN by 10 to 100\X at test time.
116
+ Training time is also reduced by 3\X due to faster proposal feature extraction.
117
+
118
+ SPPnet also has notable drawbacks.
119
+ Like R-CNN, training is a multi-stage pipeline that involves extracting features, fine-tuning a network with log loss, training SVMs, and finally fitting bounding-box regressors.
120
+ Features are also written to disk.
121
+ But unlike R-CNN, the fine-tuning algorithm proposed in \cite{he2014spp} cannot update the convolutional layers that precede the spatial pyramid pooling.
122
+ Unsurprisingly, this limitation (fixed convolutional layers) limits the accuracy of very deep networks.
123
+
124
+ \subsection{Contributions}
125
+ We propose a new training algorithm that fixes the disadvantages of R-CNN and SPPnet, while improving on their speed and accuracy.
126
+ We call this method \emph{Fast R-CNN} because it's comparatively fast to train and test.
127
+ The Fast R-CNN method has several advantages:
128
+ \begin{enumerate}
129
+ \itemsep0em
130
+ \item Higher detection quality (mAP) than R-CNN, SPPnet
131
+ \item Training is single-stage, using a multi-task loss
132
+ \item Training can update all network layers
133
+ \item No disk storage is required for feature caching
134
+ \end{enumerate}
135
+
136
+ Fast R-CNN is written in Python and C++ (Caffe \cite{jia2014caffe}) and is available under the open-source MIT License at \url{https://github.com/rbgirshick/fast-rcnn}.
137
+ \section{Fast R-CNN architecture and training}
138
+
139
+ \figref{arch} illustrates the Fast R-CNN architecture.
140
+ A Fast R-CNN network takes as input an entire image and a set of object proposals.
141
+ The network first processes the whole image with several convolutional (\emph{conv}) and max pooling layers to produce a conv feature map.
142
+ Then, for each object proposal a region of interest (\emph{\roi}) pooling layer extracts a fixed-length feature vector from the feature map.
143
+ Each feature vector is fed into a sequence of fully connected (\emph{fc}) layers that finally branch into two sibling output layers: one that produces softmax probability estimates over $K$ object classes plus a catch-all ``background'' class and another layer that outputs four real-valued numbers for each of the $K$ object classes.
144
+ Each set of $4$ values encodes refined bounding-box positions for one of the $K$ classes.
145
+
146
+ \begin{figure}[t!]
147
+ \centering
148
+ \includegraphics[width=1\linewidth,trim=0 24em 25em 0, clip]{figs/arch.pdf}
149
+ \caption{Fast R-CNN architecture. An input image and multiple regions of interest ({\roi}s) are input into a fully convolutional network. Each \roi is pooled into a fixed-size feature map and then mapped to a feature vector by fully connected layers (FCs). The network has two output vectors per \roi: softmax probabilities and per-class bounding-box regression offsets. The architecture is trained end-to-end with a multi-task loss.}
150
+ \figlabel{arch}
151
+ \end{figure}
152
+
153
+
154
+ \subsection{The \roi pooling layer}
155
+ The \roi pooling layer uses max pooling to convert the features inside any valid region of interest into a small feature map with a fixed spatial extent of $H \times W$ (\eg, $7 \times 7$), where $H$ and $W$ are layer hyper-parameters that are independent of any particular \roi.
156
+ In this paper, an \roi is a rectangular window into a conv feature map.
157
+ Each \roi is defined by a four-tuple $(r, c, h, w)$ that specifies its top-left corner $(r, c)$ and its height and width $(h, w)$.
158
+
159
+ \roi max pooling works by dividing the $h \times w$ RoI window into an $H \times W$ grid of sub-windows of approximate size $h / H \times w / W$ and then max-pooling the values in each sub-window into the corresponding output grid cell.
160
+ Pooling is applied independently to each feature map channel, as in standard max pooling.
161
+ The \roi layer is simply the special-case of the spatial pyramid pooling layer used in SPPnets \cite{he2014spp} in which there is only one pyramid level.
162
+ We use the pooling sub-window calculation given in \cite{he2014spp}.
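+
+ For concreteness, the following NumPy sketch (ours; not the Caffe layer used in the released code) implements a naive version of the \roi max pooling forward pass; the exact sub-window rounding is only an approximation of the calculation in \cite{he2014spp}, and all names are hypothetical.
+ \begin{verbatim}
+ import numpy as np
+
+ def roi_max_pool(feat, roi, H=7, W=7):
+     """Naive RoI max pooling.
+
+     feat : (C, h_feat, w_feat) conv feature map
+     roi  : (r, c, h, w) window on the feature map (top-left corner, height, width)
+     Returns a (C, H, W) fixed-size output.
+     """
+     r, c, h, w = roi
+     out = np.zeros((feat.shape[0], H, W))
+     for i in range(H):
+         for j in range(W):
+             r0 = r + int(np.floor(i * h / H)); r1 = r + int(np.ceil((i + 1) * h / H))
+             c0 = c + int(np.floor(j * w / W)); c1 = c + int(np.ceil((j + 1) * w / W))
+             if r1 > r0 and c1 > c0:
+                 out[:, i, j] = feat[:, r0:r1, c0:c1].max(axis=(1, 2))
+     return out
+ \end{verbatim}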
163
+
164
+
165
+
166
+
167
+ \subsection{Initializing from pre-trained networks}
168
+ We experiment with three pre-trained ImageNet \cite{imagenet_cvpr09} networks, each with five max pooling layers and between five and thirteen conv layers (see \secref{setup} for network details).
169
+ When a pre-trained network initializes a Fast R-CNN network, it undergoes three transformations.
170
+
171
+ First, the last max pooling layer is replaced by a \roi pooling layer that is configured by setting $H$ and $W$ to be compatible with the net's first fully connected layer (\eg, $H = W = 7$ for \vggsixteen).
172
+
173
+ Second, the network's last fully connected layer and softmax (which were trained for 1000-way ImageNet classification) are replaced with the two sibling layers described earlier (a fully connected layer and softmax over $K + 1$ categories and category-specific bounding-box regressors).
174
+
175
+ Third, the network is modified to take two data inputs: a list of images and a list of {\roi}s in those images.
176
+
177
+
178
+ \subsection{Fine-tuning for detection}
179
+ Training all network weights with back-propagation is an important capability of Fast R-CNN.
180
+ First, let's elucidate why SPPnet is unable to update weights below the spatial pyramid pooling layer.
181
+
182
+ The root cause is that back-propagation through the SPP layer is highly inefficient when each training sample (\ie \roi) comes from a different image, which is exactly how R-CNN and SPPnet networks are trained.
183
+ The inefficiency stems from the fact that each \roi may have a very large receptive field, often spanning the entire input image.
184
+ Since the forward pass must process the entire receptive field, the training inputs are large (often the entire image).
185
+
186
+ We propose a more efficient training method that takes advantage of feature sharing during training.
187
+ In Fast R-CNN training, stochastic gradient descent (SGD) mini-batches are sampled hierarchically, first by sampling $N$ images and then by sampling $R/N$ {\roi}s from each image.
188
+ Critically, {\roi}s from the same image share computation and memory in the forward and backward passes.
189
+ Making $N$ small decreases mini-batch computation.
190
+ For example, when using $N = 2$ and $R = 128$, the proposed training scheme is roughly 64\X faster than sampling one {\roi} from $128$ different images (\ie, the R-CNN and SPPnet strategy).
191
+
192
+ One concern over this strategy is that it may cause slow training convergence, because {\roi}s from the same image are correlated.
193
+ This concern does not appear to be a practical issue and we achieve good results with $N = 2$ and $R = 128$ using fewer SGD iterations than R-CNN.
194
+
195
+ In addition to hierarchical sampling, Fast R-CNN uses a streamlined training process with one fine-tuning stage that jointly optimizes a softmax classifier and bounding-box regressors, rather than training a softmax classifier, SVMs, and regressors in three separate stages \cite{girshick2014rcnn,he2014spp}.
196
+ The components of this procedure (the loss, mini-batch sampling strategy, back-propagation through \roi pooling layers, and SGD hyper-parameters) are described below.
197
+
198
+ \paragraph{Multi-task loss.}
199
+ A Fast R-CNN network has two sibling output layers.
200
+ The first outputs a discrete probability distribution (per \roi), $p = (p_0,\ldots,p_K)$, over $K + 1$ categories.
201
+ As usual, $p$ is computed by a softmax over the $K + 1$ outputs of a fully connected layer.
202
+ The second sibling layer outputs bounding-box regression offsets, $t^{k} = \left(t^{k}_\textrm{x}, t^{k}_\textrm{y}, t^{k}_\textrm{w}, t^{k}_\textrm{h}\right)$, for each of the $K$ object classes, indexed by $k$.
203
+ We use the parameterization for $t^{k}$ given in \cite{girshick2014rcnn}, in which $t^k$ specifies a scale-invariant translation and log-space height/width shift relative to an object proposal.
204
+
205
+ Each training \roi is labeled with a ground-truth class $u$ and a ground-truth bounding-box regression target $v$.
206
+ We use a multi-task loss $L$ on each labeled {\roi} to jointly train for classification and bounding-box regression:
207
+ \begin{equation}
208
+ \eqlabel{loss}
209
+ L(p, u, t^u, v) = L_\textrm{cls}(p, u) + \lambda [u \ge 1] L_\textrm{loc}(t^u, v),
210
+ \end{equation}
211
+ in which $L_\textrm{cls}(p, u) = -\log p_u$ is log loss for true class $u$.
212
+
213
+ The second task loss, $L_{\textrm{loc}}$, is defined over a tuple of true bounding-box regression targets for class $u$, $v = (v_\textrm{x}, v_\textrm{y}, v_\textrm{w}, v_\textrm{h})$, and a predicted tuple $t^u = (t^u_\textrm{x}, t^u_\textrm{y}, t^u_\textrm{w}, t^u_\textrm{h})$, again for class $u$.
214
+ The Iverson bracket indicator function $[u \ge 1]$ evaluates to 1 when $u \ge 1$ and 0 otherwise.
215
+ By convention the catch-all background class is labeled $u = 0$.
216
+ For background {\roi}s there is no notion of a ground-truth bounding box and hence $L_\textrm{loc}$ is ignored.
217
+ For bounding-box regression, we use the loss
218
+ \begin{equation}
219
+ L_\textrm{loc}(t^u, v) = \sum_{i \in \{\textrm{x},\textrm{y},\textrm{w},\textrm{h}\}} \textrm{smooth}_{L_1}(t^u_i - v_i),
220
+ \end{equation}
221
+ in which
222
+ \begin{equation}
223
+ \eqlabel{smoothL1}
224
+ \textrm{smooth}_{L_1}(x) =
225
+ \begin{cases}
226
+ 0.5x^2& \text{if } |x| < 1\\
227
+ |x| - 0.5& \text{otherwise},
228
+ \end{cases}
229
+ \end{equation}
230
+ is a robust $L_1$ loss that is less sensitive to outliers than the $L_2$ loss used in R-CNN and SPPnet.
231
+ When the regression targets are unbounded, training with $L_2$ loss can require careful tuning of learning rates in order to prevent exploding gradients.
232
+ \eqref{smoothL1} eliminates this sensitivity.
233
+
234
+ The hyper-parameter $\lambda$ in \eqref{loss} controls the balance between the two task losses.
235
+ We normalize the ground-truth regression targets $v_i$ to have zero mean and unit variance.
236
+ All experiments use $\lambda = 1$.
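+
+ The sketch below (ours, with hypothetical names) evaluates the loss of \eqref{loss} with the robust term of \eqref{smoothL1} for a single \roi, assuming the regression targets have already been normalized.
+ \begin{verbatim}
+ import numpy as np
+
+ def smooth_l1(x):
+     """Robust L1 loss (smooth_L1 in the paper), applied elementwise."""
+     ax = np.abs(x)
+     return np.where(ax < 1, 0.5 * x ** 2, ax - 0.5)
+
+ def multitask_loss(p, u, t_u, v, lam=1.0):
+     """Multi-task loss for a single RoI.
+
+     p   : (K+1,) softmax class probabilities
+     u   : ground-truth class label (0 = background)
+     t_u : (4,) predicted box offsets for class u
+     v   : (4,) ground-truth regression targets
+     """
+     L_cls = -np.log(p[u])
+     L_loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() if u >= 1 else 0.0
+     return L_cls + lam * L_loc
+ \end{verbatim}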
237
+
238
+ We note that \cite{erhan2014scalable} uses a related loss to train a class-agnostic object proposal network.
239
+ Different from our approach, \cite{erhan2014scalable} advocates for a two-network system that separates localization and classification.
240
+ OverFeat \cite{overfeat}, R-CNN \cite{girshick2014rcnn}, and SPPnet \cite{he2014spp} also train classifiers and bounding-box localizers, however these methods use stage-wise training, which we show is suboptimal for Fast R-CNN (\secref{multitask}).
241
+
242
+ \paragraph{Mini-batch sampling.}
243
+ During fine-tuning, each SGD mini-batch is constructed from $N = 2$ images, chosen uniformly at random (as is common practice, we actually iterate over permutations of the dataset).
244
+ We use mini-batches of size $R = 128$, sampling $64$ {\roi}s from each image.
245
+ As in \cite{girshick2014rcnn}, we take 25\% of the {\roi}s from object proposals that have intersection over union (IoU) overlap with a ground-truth bounding box of at least $0.5$.
246
+ These {\roi}s comprise the examples labeled with a foreground object class, \ie $u \ge 1$.
247
+ The remaining {\roi}s are sampled from object proposals that have a maximum IoU with ground truth in the interval $[0.1, 0.5)$, following \cite{he2014spp}.
248
+ These are the background examples and are labeled with $u = 0$.
249
+ The lower threshold of $0.1$ appears to act as a heuristic for hard example mining \cite{lsvm-pami}.
250
+ During training, images are horizontally flipped with probability $0.5$.
251
+ No other data augmentation is used.
252
+
253
+ \paragraph{Back-propagation through \roi pooling layers.}
254
+
255
+
256
+
257
+
258
+ Back-propagation routes derivatives through the \roi pooling layer.
259
+ For clarity, we assume only one image per mini-batch ($N=1$), though the extension to $N > 1$ is straightforward because the forward pass treats all images independently.
260
+
261
+ Let $x_i \in \mathbb{R}$ be the $i$-th activation input into the \roi pooling layer and let $y_{rj}$ be the layer's $j$-th output from the $r$-th \roi.
262
+ The \roi pooling layer computes $y_{rj} = x_{i^*(r,j)}$, in which $i^*(r,j) = \argmax_{i' \in \mathcal{R}(r,j)} x_{i'}$.
263
+ $\mathcal{R}(r,j)$ is the index set of inputs in the sub-window over which the output unit $y_{rj}$ max pools.
264
+ A single $x_i$ may be assigned to several different outputs $y_{rj}$.
265
+
266
+ The \roi pooling layer's \texttt{backwards} function computes partial derivative of the loss function with respect to each input variable $x_i$ by following the argmax switches:
267
+ \begin{equation}
268
+ \frac{\partial L}{\partial x_i} = \sum_{r} \sum_{j}
269
+ \left[i = i^*(r,j)\right] \frac{\partial L}{\partial y_{rj}}.
270
+ \end{equation}
271
+ In words, for each mini-batch \roi $r$ and for each pooling output unit $y_{rj}$, the partial derivative $\partial L/\partial y_{rj}$ is accumulated if $i$ is the argmax selected for $y_{rj}$ by max pooling.
272
+ In back-propagation, the partial derivatives $\partial L/\partial y_{rj}$ are already computed by the \texttt{backwards} function of the layer on top of the \roi pooling layer.
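+
+ A minimal sketch of this backward pass (ours; array layouts are hypothetical) scatter-adds each output gradient back to the input selected by the argmax in the forward pass.
+ \begin{verbatim}
+ import numpy as np
+
+ def roi_pool_backward(dL_dy, argmax_idx, n_inputs):
+     """Backward pass through RoI max pooling.
+
+     dL_dy      : (n_rois, C, H, W) gradients w.r.t. the pooled outputs y_rj
+     argmax_idx : (n_rois, C, H, W) flat index i*(r, j) of the max input for
+                  each output unit, recorded during the forward pass
+     n_inputs   : number of activations x_i in the input feature map
+     """
+     dL_dx = np.zeros(n_inputs)
+     # accumulate dL/dy_rj into dL/dx_i whenever i was the argmax for y_rj
+     np.add.at(dL_dx, argmax_idx.ravel(), dL_dy.ravel())
+     return dL_dx
+ \end{verbatim}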
273
+
274
+ \paragraph{SGD hyper-parameters.}
275
+ The fully connected layers used for softmax classification and bounding-box regression are initialized from zero-mean Gaussian distributions with standard deviations $0.01$ and $0.001$, respectively.
276
+ Biases are initialized to $0$.
277
+ All layers use a per-layer learning rate of 1 for weights and 2 for biases and a global learning rate of $0.001$.
278
+ When training on VOC07 or VOC12 trainval we run SGD for 30k mini-batch iterations, and then lower the learning rate to $0.0001$ and train for another 10k iterations.
279
+ When we train on larger datasets, we run SGD for more iterations, as described later.
280
+ A momentum of $0.9$ and parameter decay of $0.0005$ (on weights and biases) are used.
281
+
282
+
283
+
284
+
285
+
286
+ \subsection{Scale invariance}
287
+ We explore two ways of achieving scale invariant object detection: (1) via ``brute force'' learning and (2) by using image pyramids.
288
+ These strategies follow the two approaches in \cite{he2014spp}.
289
+ In the brute-force approach, each image is processed at a pre-defined pixel size during both training and testing.
290
+ The network must directly learn scale-invariant object detection from the training data.
291
+
292
+ The multi-scale approach, in contrast, provides approximate scale-invariance to the network through an image pyramid.
293
+ At test-time, the image pyramid is used to approximately scale-normalize each object proposal.
294
+ During multi-scale training, we randomly sample a pyramid scale each time an image is sampled, following \cite{he2014spp}, as a form of data augmentation.
295
+ We experiment with multi-scale training for smaller networks only, due to GPU memory limits.
296
+
297
+ \section{Fast R-CNN detection}
298
+ Once a Fast R-CNN network is fine-tuned, detection amounts to little more than running a forward pass (assuming object proposals are pre-computed).
299
+ The network takes as input an image (or an image pyramid, encoded as a list of images) and a list of $R$ object proposals to score.
300
+ At test-time, $R$ is typically around $2000$, although we will consider cases in which it is larger ($\approx$ $45$k).
301
+ When using an image pyramid, each \roi is assigned to the scale such that the scaled \roi is closest to $224^2$ pixels in area \cite{he2014spp}.
302
+
303
+
304
+ For each test \roi $r$, the forward pass outputs a class posterior probability distribution $p$ and a set of predicted bounding-box offsets relative to $r$ (each of the $K$ classes gets its own refined bounding-box prediction).
305
+ We assign a detection confidence to $r$ for each object class $k$ using the estimated probability $\textrm{Pr}(\textrm{class} = k~|~r) \stackrel{\Delta}{=} p_k$.
306
+ We then perform non-maximum suppression independently for each class using the algorithm and settings from R-CNN \cite{girshick2014rcnn}.
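+
+ For reference, a standard greedy per-class NMS can be sketched as follows (ours; the IoU threshold shown is an assumption, not necessarily the setting inherited from R-CNN).
+ \begin{verbatim}
+ import numpy as np
+
+ def nms(boxes, scores, iou_thresh=0.3):
+     """Greedy non-maximum suppression for one class.
+
+     boxes : (n, 4) as (x1, y1, x2, y2);  scores : (n,) detection confidences
+     Returns the indices of the kept boxes, highest score first.
+     """
+     x1, y1, x2, y2 = boxes.T
+     areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+     order = scores.argsort()[::-1]
+     keep = []
+     while order.size > 0:
+         i = order[0]
+         keep.append(i)
+         xx1 = np.maximum(x1[i], x1[order[1:]]); yy1 = np.maximum(y1[i], y1[order[1:]])
+         xx2 = np.minimum(x2[i], x2[order[1:]]); yy2 = np.minimum(y2[i], y2[order[1:]])
+         inter = np.maximum(0, xx2 - xx1 + 1) * np.maximum(0, yy2 - yy1 + 1)
+         iou = inter / (areas[i] + areas[order[1:]] - inter)
+         order = order[1:][iou <= iou_thresh]
+     return keep
+ \end{verbatim}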
307
+
308
+ \subsection{Truncated SVD for faster detection}
309
+ \seclabel{svd}
310
+ For whole-image classification, the time spent computing the fully connected layers is small compared to the conv layers.
311
+ On the contrary, for detection the number of {\roi}s to process is large and nearly half of the forward pass time is spent computing the fully connected layers (see \figref{timing}).
312
+ Large fully connected layers are easily accelerated by compressing them with truncated SVD \cite{Denton2014SVD,Xue2013svd}.
313
+
314
+ In this technique, a layer parameterized by the $u \times v$ weight matrix $W$ is approximately factorized as
315
+ \begin{equation}
316
+ W \approx U \Sigma_t V^T
317
+ \end{equation}
318
+ using SVD.
319
+ In this factorization, $U$ is a $u \times t$ matrix comprising the first $t$ left-singular vectors of $W$, $\Sigma_t$ is a $t \times t$ diagonal matrix containing the top $t$ singular values of $W$, and $V$ is a $v \times t$ matrix comprising the first $t$ right-singular vectors of $W$.
320
+ Truncated SVD reduces the parameter count from $uv$ to $t(u + v)$, which can be significant if $t$ is much smaller than $\min(u, v)$.
321
+ To compress a network, the single fully connected layer corresponding to $W$ is replaced by two fully connected layers, without a non-linearity between them.
322
+ The first of these layers uses the weight matrix $\Sigma_t V^T$ (and no biases) and the second uses $U$ (with the original biases associated with $W$).
323
+ This simple compression method gives good speedups when the number of {\roi}s is large.
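+
+ A minimal NumPy sketch of this compression step (ours; not the code used for the timing experiments) is given below.
+ \begin{verbatim}
+ import numpy as np
+
+ def compress_fc(W, t):
+     """Split a u x v fully connected layer into two layers via truncated SVD.
+
+     Returns (W1, W2) with W1 = Sigma_t V^T (applied first, no biases) and
+     W2 = U (applied second, keeps the original biases), so that W ~= W2 @ W1.
+     The parameter count drops from u*v to t*(u + v).
+     """
+     U, s, Vt = np.linalg.svd(W, full_matrices=False)
+     W1 = np.diag(s[:t]) @ Vt[:t]      # t x v : first new fc layer
+     W2 = U[:, :t]                     # u x t : second new fc layer
+     return W1, W2
+ \end{verbatim}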
324
+
325
+
326
+ \section{Main results}
327
+
328
+ \begin{table*}[t!]
329
+ \centering
330
+ \renewcommand{\arraystretch}{1.2}
331
+ \renewcommand{\tabcolsep}{1.2mm}
332
+ \resizebox{\linewidth}{!}{
333
+ \begin{tabular}{@{}L{2.5cm}|L{1.2cm}|r*{19}{x}|x@{}}
334
+ method & train set & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & persn & plant & sheep & sofa & train & tv & mAP \\
335
+ \hline
336
+ SPPnet BB \cite{he2014spp}$^\dagger$ &
337
+ 07 $\setminus$ diff &
338
+ 73.9 &
339
+ 72.3 &
340
+ 62.5 &
341
+ 51.5 &
342
+ 44.4 &
343
+ 74.4 &
344
+ 73.0 &
345
+ 74.4 &
346
+ 42.3 &
347
+ 73.6 &
348
+ 57.7 &
349
+ 70.3 &
350
+ 74.6 &
351
+ 74.3 &
352
+ 54.2 &
353
+ 34.0 &
354
+ 56.4 &
355
+ 56.4 &
356
+ 67.9 &
357
+ 73.5 &
358
+ 63.1 \\
359
+ R-CNN BB \cite{rcnn-pami} &
360
+ 07 &
361
+ 73.4 &
362
+ 77.0 &
363
+ 63.4 &
364
+ 45.4 &
365
+ \bf{44.6} &
366
+ 75.1 &
367
+ 78.1 &
368
+ 79.8 &
369
+ 40.5 &
370
+ 73.7 &
371
+ 62.2 &
372
+ 79.4 &
373
+ 78.1 &
374
+ 73.1 &
375
+ 64.2 &
376
+ \bf{35.6} &
377
+ 66.8 &
378
+ 67.2 &
379
+ 70.4 &
380
+ \bf{71.1} &
381
+ 66.0 \\
382
+ \hline
383
+ FRCN [ours] &
384
+ 07 &
385
+ 74.5 &
386
+ 78.3 &
387
+ 69.2 &
388
+ 53.2 &
389
+ 36.6 &
390
+ 77.3 &
391
+ 78.2 &
392
+ 82.0 &
393
+ 40.7 &
394
+ 72.7 &
395
+ 67.9 &
396
+ 79.6 &
397
+ 79.2 &
398
+ 73.0 &
399
+ 69.0 &
400
+ 30.1 &
401
+ 65.4 &
402
+ 70.2 &
403
+ 75.8 &
404
+ 65.8 &
405
+ 66.9 \\
406
+ FRCN [ours] &
407
+ 07 $\setminus$ diff &
408
+ 74.6 &
409
+ \bf{79.0} &
410
+ 68.6 &
411
+ 57.0 &
412
+ 39.3 &
413
+ 79.5 &
414
+ \bf{78.6} &
415
+ 81.9 &
416
+ \bf{48.0} &
417
+ 74.0 &
418
+ 67.4 &
419
+ 80.5 &
420
+ 80.7 &
421
+ 74.1 &
422
+ 69.6 &
423
+ 31.8 &
424
+ 67.1 &
425
+ 68.4 &
426
+ 75.3 &
427
+ 65.5 &
428
+ 68.1 \\
429
+ FRCN [ours] &
430
+ 07+12 &
431
+ \bf{77.0} &
432
+ 78.1 &
433
+ \bf{69.3} &
434
+ \bf{59.4} &
435
+ 38.3 &
436
+ \bf{81.6} &
437
+ \bf{78.6} &
438
+ \bf{86.7} &
439
+ 42.8 &
440
+ \bf{78.8} &
441
+ \bf{68.9} &
442
+ \bf{84.7} &
443
+ \bf{82.0} &
444
+ \bf{76.6} &
445
+ \bf{69.9} &
446
+ 31.8 &
447
+ \bf{70.1} &
448
+ \bf{74.8} &
449
+ \bf{80.4} &
450
+ 70.4 &
451
+ \bf{70.0} \\
452
+ \end{tabular}
453
+ }
454
+ \vspace{0.05em}
455
+ \caption{{\bf VOC 2007 test} detection average precision (\%). All methods use \vggsixteen. Training set key: {\bf 07}: VOC07 trainval, {\bf 07 $\setminus$ diff}: {\bf 07} without ``difficult'' examples, {\bf 07+12}: union of {\bf 07} and VOC12 trainval.
456
+ $^\dagger$SPPnet results were prepared by the authors of \cite{he2014spp}.}
457
+ \tablelabel{voc2007}
458
+ \end{table*}
459
+ \begin{table*}[t!]
460
+ \centering
461
+ \renewcommand{\arraystretch}{1.2}
462
+ \renewcommand{\tabcolsep}{1.2mm}
463
+ \resizebox{\linewidth}{!}{
464
+ \begin{tabular}{@{}L{2.5cm}|L{1.2cm}|r*{19}{x}|x@{}}
465
+ method & train set & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & persn & plant & sheep & sofa & train & tv & mAP \\
466
+ \hline
467
+ BabyLearning &
468
+ Prop. &
469
+ 77.7 &
470
+ 73.8 &
471
+ 62.3 &
472
+ 48.8 &
473
+ 45.4 &
474
+ 67.3 &
475
+ 67.0 &
476
+ 80.3 &
477
+ 41.3 &
478
+ 70.8 &
479
+ 49.7 &
480
+ 79.5 &
481
+ 74.7 &
482
+ 78.6 &
483
+ 64.5 &
484
+ 36.0 &
485
+ 69.9 &
486
+ 55.7 &
487
+ 70.4 &
488
+ 61.7 &
489
+ 63.8 \\
490
+ R-CNN BB \cite{rcnn-pami} &
491
+ 12 &
492
+ 79.3 &
493
+ 72.4 &
494
+ 63.1 &
495
+ 44.0 &
496
+ 44.4 &
497
+ 64.6 &
498
+ 66.3 &
499
+ 84.9 &
500
+ 38.8 &
501
+ 67.3 &
502
+ 48.4 &
503
+ 82.3 &
504
+ 75.0 &
505
+ 76.7 &
506
+ 65.7 &
507
+ 35.8 &
508
+ 66.2 &
509
+ 54.8 &
510
+ 69.1 &
511
+ 58.8 &
512
+ 62.9 \\
513
+ SegDeepM &
514
+ 12+seg &
515
+ \bf{82.3} &
516
+ 75.2 &
517
+ 67.1 &
518
+ 50.7 &
519
+ \bf{49.8} &
520
+ 71.1 &
521
+ 69.6 &
522
+ 88.2 &
523
+ 42.5 &
524
+ 71.2 &
525
+ 50.0 &
526
+ 85.7 &
527
+ 76.6 &
528
+ 81.8 &
529
+ 69.3 &
530
+ \bf{41.5} &
531
+ \bf{71.9} &
532
+ 62.2 &
533
+ 73.2 &
534
+ \bf{64.6} &
535
+ 67.2 \\
536
+ \hline
537
+ FRCN [ours] &
538
+ 12 &
539
+ 80.1 &
540
+ 74.4 &
541
+ 67.7 &
542
+ 49.4 &
543
+ 41.4 &
544
+ 74.2 &
545
+ 68.8 &
546
+ 87.8 &
547
+ 41.9 &
548
+ 70.1 &
549
+ 50.2 &
550
+ 86.1 &
551
+ 77.3 &
552
+ 81.1 &
553
+ 70.4 &
554
+ 33.3 &
555
+ 67.0 &
556
+ 63.3 &
557
+ 77.2 &
558
+ 60.0 &
559
+ 66.1 \\
560
+ FRCN [ours] &
561
+ 07++12 &
562
+ 82.0 &
563
+ \bf{77.8} &
564
+ \bf{71.6} &
565
+ \bf{55.3} &
566
+ 42.4 &
567
+ \bf{77.3} &
568
+ \bf{71.7} &
569
+ \bf{89.3} &
570
+ \bf{44.5} &
571
+ \bf{72.1} &
572
+ \bf{53.7} &
573
+ \bf{87.7} &
574
+ \bf{80.0} &
575
+ \bf{82.5} &
576
+ \bf{72.7} &
577
+ 36.6 &
578
+ 68.7 &
579
+ \bf{65.4} &
580
+ \bf{81.1} &
581
+ 62.7 &
582
+ \bf{68.8} \\
583
+ \end{tabular}
584
+ }
585
+ \vspace{0.05em}
586
+ \caption{{\bf VOC 2010 test} detection average precision (\%).
587
+ BabyLearning uses a network based on \cite{Lin2014NiN}.
588
+ All other methods use \vggsixteen. Training set key: {\bf 12}: VOC12 trainval, {\bf Prop.}: proprietary dataset, {\bf 12+seg}: {\bf 12} with segmentation annotations, {\bf 07++12}: union of VOC07 trainval, VOC07 test, and VOC12 trainval.
589
+ }
590
+ \tablelabel{voc2010}
591
+ \end{table*}
592
+ \begin{table*}[t!]
593
+ \centering
594
+ \renewcommand{\arraystretch}{1.2}
595
+ \renewcommand{\tabcolsep}{1.2mm}
596
+ \resizebox{\linewidth}{!}{
597
+ \begin{tabular}{@{}L{2.5cm}|L{1.2cm}|r*{19}{x}|x@{}}
598
+ method & train set & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & persn & plant & sheep & sofa & train & tv & mAP \\
599
+ \hline
600
+ BabyLearning &
601
+ Prop. &
602
+ 78.0 &
603
+ 74.2 &
604
+ 61.3 &
605
+ 45.7 &
606
+ 42.7 &
607
+ 68.2 &
608
+ 66.8 &
609
+ 80.2 &
610
+ 40.6 &
611
+ 70.0 &
612
+ 49.8 &
613
+ 79.0 &
614
+ 74.5 &
615
+ 77.9 &
616
+ 64.0 &
617
+ 35.3 &
618
+ 67.9 &
619
+ 55.7 &
620
+ 68.7 &
621
+ 62.6 &
622
+ 63.2 \\
623
+ NUS\_NIN\_c2000 &
624
+ Unk. &
625
+ 80.2 &
626
+ 73.8 &
627
+ 61.9 &
628
+ 43.7 &
629
+ \bf{43.0} &
630
+ 70.3 &
631
+ 67.6 &
632
+ 80.7 &
633
+ 41.9 &
634
+ 69.7 &
635
+ 51.7 &
636
+ 78.2 &
637
+ 75.2 &
638
+ 76.9 &
639
+ 65.1 &
640
+ \bf{38.6} &
641
+ \bf{68.3} &
642
+ 58.0 &
643
+ 68.7 &
644
+ 63.3 &
645
+ 63.8 \\
646
+ R-CNN BB \cite{rcnn-pami} &
647
+ 12 &
648
+ 79.6 &
649
+ 72.7 &
650
+ 61.9 &
651
+ 41.2 &
652
+ 41.9 &
653
+ 65.9 &
654
+ 66.4 &
655
+ 84.6 &
656
+ 38.5 &
657
+ 67.2 &
658
+ 46.7 &
659
+ 82.0 &
660
+ 74.8 &
661
+ 76.0 &
662
+ 65.2 &
663
+ 35.6 &
664
+ 65.4 &
665
+ 54.2 &
666
+ 67.4 &
667
+ 60.3 &
668
+ 62.4 \\
669
+ \hline
670
+ FRCN [ours] &
671
+ 12 &
672
+ 80.3 &
673
+ 74.7 &
674
+ 66.9 &
675
+ 46.9 &
676
+ 37.7 &
677
+ 73.9 &
678
+ 68.6 &
679
+ 87.7 &
680
+ 41.7 &
681
+ 71.1 &
682
+ 51.1 &
683
+ 86.0 &
684
+ 77.8 &
685
+ 79.8 &
686
+ 69.8 &
687
+ 32.1 &
688
+ 65.5 &
689
+ 63.8 &
690
+ 76.4 &
691
+ 61.7 &
692
+ 65.7 \\
693
+ FRCN [ours] &
694
+ 07++12 &
695
+ \bf{82.3} &
696
+ \bf{78.4} &
697
+ \bf{70.8} &
698
+ \bf{52.3} &
699
+ 38.7 &
700
+ \bf{77.8} &
701
+ \bf{71.6} &
702
+ \bf{89.3} &
703
+ \bf{44.2} &
704
+ \bf{73.0} &
705
+ \bf{55.0} &
706
+ \bf{87.5} &
707
+ \bf{80.5} &
708
+ \bf{80.8} &
709
+ \bf{72.0} &
710
+ 35.1 &
711
+ \bf{68.3} &
712
+ \bf{65.7} &
713
+ \bf{80.4} &
714
+ \bf{64.2} &
715
+ \bf{68.4} \\
716
+ \end{tabular}
717
+ }
718
+ \vspace{0.05em}
719
+ \caption{{\bf VOC 2012 test} detection average precision (\%).
720
+ BabyLearning and NUS\_NIN\_c2000 use networks based on \cite{Lin2014NiN}.
721
+ All other methods use \vggsixteen. Training set key: see \tableref{voc2010}, {\bf Unk.}: unknown.
722
+ }
723
+ \tablelabel{voc2012}
724
+ \end{table*}
725
+
726
+
727
+ Three main results support this paper's contributions:
728
+ \begin{enumerate}
729
+ \itemsep0em
730
+ \item State-of-the-art mAP on VOC07, 2010, and 2012
731
+ \item Fast training and testing compared to R-CNN, SPPnet
732
+ \item Fine-tuning conv layers in \vggsixteen improves mAP
733
+ \end{enumerate}
734
+
735
+ \subsection{Experimental setup}
736
+ \seclabel{setup}
737
+ Our experiments use three pre-trained ImageNet models that are available online.\footnote{\url{https://github.com/BVLC/caffe/wiki/Model-Zoo}}
738
+ The first is the CaffeNet (essentially AlexNet \cite{krizhevsky2012imagenet}) from R-CNN \cite{girshick2014rcnn}.
739
+ We alternatively refer to this CaffeNet as model \Sm, for ``small.''
740
+ The second network is VGG\_CNN\_M\_1024 from \cite{Chatfield14}, which has the same depth as \Sm, but is wider.
741
+ We call this network model \Med, for ``medium.''
742
+ The final network is the very deep \vggsixteen model from \cite{simonyan2015verydeep}.
743
+ Since this model is the largest, we call it model \Lg.
744
+ In this section, all experiments use \emph{single-scale} training and testing ($s = 600$; see \secref{scale} for details).
745
+
746
+ \subsection{VOC 2010 and 2012 results}
747
+ On these datasets, we compare Fast R-CNN (\emph{FRCN}, for short) against the top methods on the \texttt{comp4} (outside data) track from the public leaderboard (\tableref{voc2010}, \tableref{voc2012}).\footnote{\url{http://host.robots.ox.ac.uk:8080/leaderboard} (accessed April 18, 2015)}
748
+ For the NUS\_NIN\_c2000 and BabyLearning methods, there are no associated publications at this time and we could not find exact information on the ConvNet architectures used; they are variants of the Network-in-Network design \cite{Lin2014NiN}.
749
+ All other methods are initialized from the same pre-trained \vggsixteen network.
750
+
751
+
752
+ Fast R-CNN achieves the top result on VOC12 with a mAP of 65.7\% (and 68.4\% with extra data).
753
+ It is also two orders of magnitude faster than the other methods, which are all based on the ``slow'' R-CNN pipeline.
754
+ On VOC10, SegDeepM \cite{Zhu2015segDeepM} achieves a higher mAP than Fast R-CNN (67.2\% vs. 66.1\%).
755
+ SegDeepM is trained on VOC12 trainval plus segmentation annotations; it is designed to boost R-CNN accuracy by using a Markov random field to reason over R-CNN detections and segmentations from the O$_2$P \cite{o2p} semantic-segmentation method.
756
+ Fast R-CNN can be swapped into SegDeepM in place of R-CNN, which may lead to better results.
757
+ When using the enlarged 07++12 training set (see \tableref{voc2010} caption), Fast R-CNN's mAP increases to 68.8\%, surpassing SegDeepM.
758
+
759
+ \subsection{VOC 2007 results}
760
+ On VOC07, we compare Fast R-CNN to R-CNN and SPPnet.
761
+ All methods start from the same pre-trained \vggsixteen network and use bounding-box regression.
762
+ The \vggsixteen SPPnet results were computed by the authors of \cite{he2014spp}.
763
+ SPPnet uses five scales during both training and testing.
764
+ The improvement of Fast R-CNN over SPPnet illustrates that even though Fast R-CNN uses single-scale training and testing, fine-tuning the conv layers provides a large improvement in mAP (from 63.1\% to 66.9\%).
765
+ R-CNN achieves a mAP of 66.0\%.
766
+ As a minor point, SPPnet was trained \emph{without} examples marked as ``difficult'' in PASCAL.
767
+ Removing these examples improves Fast R-CNN mAP to 68.1\%.
768
+ All other experiments use ``difficult'' examples.
769
+
770
+ \subsection{Training and testing time}
771
+ Fast training and testing times are our second main result.
772
+ \tableref{timing} compares training time (hours), testing rate (seconds per image), and mAP on VOC07 between Fast R-CNN, R-CNN, and SPPnet.
773
+ For \vggsixteen, Fast R-CNN processes images 146\X faster than R-CNN without truncated SVD and 213\X faster with it.
774
+ Training time is reduced by 9\X, from 84 hours to 9.5.
775
+ Compared to SPPnet, Fast R-CNN trains \vggsixteen 2.7\X faster (in 9.5 vs. 25.5 hours) and tests 7\X faster without truncated SVD or 10\X faster with it.
776
+ Fast R-CNN also eliminates hundreds of gigabytes of disk storage, because it does not cache features.
777
+
778
+ \begin{table}[h!]
779
+ \begin{center}
780
+ \setlength{\tabcolsep}{3pt}
781
+ \renewcommand{\arraystretch}{1.2}
782
+ \resizebox{\linewidth}{!}{
783
+ \small
784
+ \begin{tabular}{l|rrr|rrr|r}
785
+ & \multicolumn{3}{c|}{Fast R-CNN} & \multicolumn{3}{c|}{R-CNN} & \multicolumn{1}{c}{SPPnet} \\
786
+ & \Sm & \Med & \Lg & \Sm & \Med & \Lg & $^\dagger$\Lg \\
787
+ \hline
788
+ train time (h) & \bf{1.2} & 2.0 & 9.5 &
789
+ 22 & 28 & 84 & 25 \\
790
+ train speedup & \bf{18.3\X} & 14.0\X & 8.8\X &
791
+ 1\X & 1\X & 1\X & 3.4\X \\
792
+ \hline
793
+ test rate (s/im) & 0.10 & 0.15 & 0.32 &
794
+ 9.8 & 12.1 & 47.0 & 2.3 \\
795
+ ~$\rhd$ with SVD & \bf{0.06} & 0.08 & 0.22 &
796
+ - & - & - & - \\
797
+ \hline
798
+ test speedup & 98\X & 80\X & 146\X &
799
+ 1\X & 1\X & 1\X & 20\X \\
800
+ ~$\rhd$ with SVD & 169\X & 150\X & \bf{213\X} &
801
+ - & - & - & - \\
802
+ \hline
803
+ VOC07 mAP & 57.1 & 59.2 & \bf{66.9} &
804
+ 58.5 & 60.2 & 66.0 & 63.1 \\
805
+ ~$\rhd$ with SVD & 56.5 & 58.7 & 66.6 &
806
+ - & - & - & - \\
807
+ \end{tabular}
808
+ }
809
+ \end{center}
810
+ \caption{Runtime comparison between the same models in Fast R-CNN, R-CNN, and SPPnet.
811
+ Fast R-CNN uses single-scale mode.
812
+ SPPnet uses the five scales specified in \cite{he2014spp}.
813
+ $^\dagger$Timing provided by the authors of \cite{he2014spp}.
814
+ Times were measured on an Nvidia K40 GPU.
815
+ }
816
+ \tablelabel{timing}
817
+ \vspace{-1em}
818
+ \end{table}
819
+
820
+
821
+
822
+
823
+
824
+
825
+
826
+
827
+
828
+
829
+
830
+
831
+ \paragraph{Truncated SVD.}
832
+ Truncated SVD can reduce detection time by more than 30\% with only a small (0.3 percentage point) drop in mAP and without needing to perform additional fine-tuning after model compression.
833
+ \figref{timing} illustrates how using the top $1024$ singular values from the $25088 \times 4096$ matrix in {\vggsixteen}'s fc6 layer and the top $256$ singular values from the $4096 \times 4096$ fc7 layer reduces runtime with little loss in mAP.
834
+ Further speed-ups are possible with smaller drops in mAP if one fine-tunes again after compression.
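+
+ As a rough illustration of this compression (a NumPy sketch with assumed layer shapes, not the paper's Caffe implementation), a single fully connected layer can be replaced by two smaller ones built from the top-$t$ singular values:
+ \begin{verbatim}
+ import numpy as np
+
+ def truncated_svd_fc(W, t):
+     # Approximate a fully connected layer y = x @ W + b by two
+     # smaller layers built from the top-t singular values: W ~= W1 @ W2.
+     U, S, Vt = np.linalg.svd(W, full_matrices=False)
+     W1 = U[:, :t] * S[:t]   # (d_in, t): first layer, no biases
+     W2 = Vt[:t, :]          # (t, d_out): second layer keeps the original biases
+     return W1, W2
+
+ # Small illustrative shapes; the fc6 case above is 25088 x 4096 with t = 1024.
+ rng = np.random.default_rng(0)
+ W, b = rng.standard_normal((512, 256)), np.zeros(256)
+ W1, W2 = truncated_svd_fc(W, t=64)
+ x = rng.standard_normal(512)
+ y_full = x @ W + b             # 512*256 multiply-adds
+ y_approx = (x @ W1) @ W2 + b   # 512*64 + 64*256 multiply-adds
+ \end{verbatim}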
835
+
836
+
837
+ \begin{figure}[h!]
838
+ \centering
839
+ \includegraphics[width=0.49\linewidth,trim=3em 2em 0 0, clip]{figs/layer_timing.pdf}
840
+ \includegraphics[width=0.49\linewidth,trim=3em 2em 0 0, clip]{figs/layer_timing_svd.pdf}
841
+ \caption{Timing for \vggsixteen before and after truncated SVD.
842
+ Before SVD, fully connected layers fc6 and fc7 take 45\% of the time.}
843
+ \figlabel{timing}
844
+ \end{figure}
845
+
846
+
847
+
848
+ \subsection{Which layers to fine-tune?}
849
+ For the less deep networks considered in the SPPnet paper \cite{he2014spp}, fine-tuning only the fully connected layers appeared to be sufficient for good accuracy.
850
+ We hypothesized that this result would not hold for very deep networks.
851
+ To validate that fine-tuning the conv layers is important for \vggsixteen, we use Fast R-CNN to fine-tune, but \emph{freeze} the thirteen conv layers so that only the fully connected layers learn.
852
+ This ablation emulates single-scale SPPnet training and \emph{decreases mAP from 66.9\% to 61.4\%} (\tableref{whichlayers}).
853
+ This experiment verifies our hypothesis: training through the \roi pooling layer is important for very deep nets.
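+
+ As a loose sketch of this ablation (a PyTorch-style illustration with assumed model and attribute names, not the paper's actual training setup), freezing the conv layers so that only the fully connected layers receive gradients could look like:
+ \begin{verbatim}
+ import torch
+ import torchvision
+
+ model = torchvision.models.vgg16()   # 'features' holds the 13 conv layers
+
+ for p in model.features.parameters():
+     p.requires_grad = False          # freeze all conv layers (the ablation above)
+
+ # Only parameters that still require gradients are handed to the optimizer,
+ # so just the fully connected layers are updated.
+ trainable = [p for p in model.parameters() if p.requires_grad]
+ optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
+ \end{verbatim}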
854
+
855
+ \begin{table}[h!]
856
+ \begin{center}
857
+ \setlength{\tabcolsep}{3pt}
858
+ \renewcommand{\arraystretch}{1.1}
859
+ \small
860
+ \begin{tabular}{l|rrr|r}
861
+ & \multicolumn{3}{c|}{layers that are fine-tuned in model \Lg} & SPPnet \Lg \\
862
+ & $\ge$ fc6 & $\ge$ conv3\_1 & $\ge$ conv2\_1 & $\ge$ fc6 \\
863
+ \hline
864
+ VOC07 mAP & 61.4 & 66.9 & \bf{67.2} & 63.1 \\
865
+ test rate (s/im) & \bf{0.32} & \bf{0.32} & \bf{0.32} & 2.3 \\
866
+ \end{tabular}
867
+ \end{center}
868
+ \caption{Effect of restricting which layers are fine-tuned for \vggsixteen.
869
+ Fine-tuning $\ge$ fc6 emulates the SPPnet training algorithm \cite{he2014spp}, but using a single scale.
870
+ SPPnet \Lg results were obtained using five scales, at a significant (7\X) speed cost.}
871
+ \tablelabel{whichlayers}
872
+ \vspace{-0.5em}
873
+ \end{table}
874
+
875
+ Does this mean that \emph{all} conv layers should be fine-tuned? In short, \emph{no}.
876
+ In the smaller networks (\Sm and \Med) we find that conv1 is generic and task independent (a well-known fact \cite{krizhevsky2012imagenet}).
877
+ Allowing conv1 to learn, or not, has no meaningful effect on mAP.
878
+ For \vggsixteen, we found it only necessary to update layers from conv3\_1 and up (9 of the 13 conv layers).
879
+ This observation is pragmatic: (1) updating from conv2\_1 slows training by 1.3\X (12.5 vs. 9.5 hours) compared to learning from conv3\_1;
880
+ and (2) updating from conv1\_1 over-runs GPU memory.
881
+ The difference in mAP when learning from conv2\_1 up was only $+0.3$ points (\tableref{whichlayers}, last column).
882
+ All Fast R-CNN results in this paper using \vggsixteen fine-tune layers conv3\_1 and up; all experiments with models \Sm and \Med fine-tune layers conv2 and up.
883
+ \begin{table*}[t!]
884
+ \begin{center}
885
+ \setlength{\tabcolsep}{5pt}
886
+ \renewcommand{\arraystretch}{1.1}
887
+ \small
888
+ \begin{tabular}{l|rrrr|rrrr|rrrr}
889
+ & \multicolumn{4}{c|}{\Sm} & \multicolumn{4}{c|}{\Med} & \multicolumn{4}{c}{\Lg} \\
890
+ \hline
891
+ multi-task training? &
892
+ &
893
+ \checkmark &
894
+ &
895
+ \checkmark &
896
+ &
897
+ \checkmark &
898
+ &
899
+ \checkmark &
900
+ &
901
+ \checkmark &
902
+ &
903
+ \checkmark
904
+ \\
905
+ stage-wise training? &
906
+ &
907
+ &
908
+ \checkmark &
909
+ &
910
+ &
911
+ &
912
+ \checkmark &
913
+ &
914
+ &
915
+ &
916
+ \checkmark &
917
+ \\
918
+ test-time bbox reg? & & & \checkmark & \checkmark & & & \checkmark & \checkmark & & & \checkmark & \checkmark \\
919
+ VOC07 mAP & 52.2 & 53.3 & 54.6 & \bf{57.1} & 54.7 & 55.5 & 56.6 & \bf{59.2} & 62.6 & 63.4 & 64.0 & \bf{66.9} \\
920
+ \end{tabular}
921
+ \end{center}
922
+ \caption{Multi-task training (fourth column per group) improves mAP over piecewise training (third column per group).}
923
+ \tablelabel{multitask}
924
+ \vspace{-0.5em}
925
+ \end{table*}
926
+
927
+ \section{Design evaluation}
928
+
929
+ We conducted experiments to understand how Fast R-CNN compares to R-CNN and SPPnet, as well as to evaluate design decisions.
930
+ Following best practices, we performed these experiments on the PASCAL VOC07 dataset.
931
+
932
+
933
+ \subsection{Does multi-task training help?}
934
+ \seclabel{multitask}
935
+ Multi-task training is convenient because it avoids managing a pipeline of sequentially-trained tasks.
936
+ But it also has the potential to improve results because the tasks influence each other through a shared representation (the ConvNet) \cite{caruana1997multitask}.
937
+ Does multi-task training improve object detection accuracy in Fast R-CNN?
938
+
939
+ To test this question, we train baseline networks that use only the classification loss, $L_\textrm{cls}$, in \eqref{loss} (\ie, setting $\lambda = 0$).
940
+ These baselines are printed for models \Sm, \Med, and \Lg in the first column of each group in \tableref{multitask}.
941
+ Note that these models \emph{do not} have bounding-box regressors.
942
+ Next (second column per group), we take networks that were trained with the multi-task loss (\eqref{loss}, $\lambda = 1$), but we \emph{disable} bounding-box regression at test time.
943
+ This isolates the networks' classification accuracy and allows an apples-to-apples comparison with the baseline networks.
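+
+ For concreteness, a minimal NumPy sketch of the loss composition being ablated here (assuming the usual smooth-$L_1$ form for the localization term; the values are illustrative), where $\lambda = 0$ recovers the classification-only baseline:
+ \begin{verbatim}
+ import numpy as np
+
+ def smooth_l1(x):
+     ax = np.abs(x)
+     return np.where(ax < 1.0, 0.5 * x**2, ax - 0.5)
+
+ def multitask_loss(cls_probs, label, box_pred, box_target, lam=1.0):
+     # L = L_cls + lam * [label >= 1] * L_loc;
+     # background RoIs (label 0) contribute no box loss.
+     l_cls = -np.log(cls_probs[label])
+     l_loc = smooth_l1(box_pred - box_target).sum() if label >= 1 else 0.0
+     return l_cls + lam * l_loc
+
+ probs = np.full(21, 0.01); probs[3] = 0.80        # 20 classes + background
+ t, v = np.array([0.1, -0.2, 0.05, 0.0]), np.zeros(4)
+ print(multitask_loss(probs, 3, t, v, lam=1.0))    # multi-task loss
+ print(multitask_loss(probs, 3, t, v, lam=0.0))    # classification-only baseline
+ \end{verbatim}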
944
+
945
+
946
+ Across all three networks we observe that multi-task training improves pure classification accuracy relative to training for classification alone.
947
+ The improvement ranges from $+0.8$ to $+1.1$ mAP points, showing a consistent positive effect from multi-task learning.
948
+
949
+ Finally, we take the baseline models (trained with only the classification loss), tack on the bounding-box regression layer, and train them with $L_\textrm{loc}$ while keeping all other network parameters frozen.
950
+ The third column in each group shows the results of this \emph{stage-wise} training scheme: mAP improves over column one, but stage-wise training underperforms multi-task training (fourth column per group).
951
+
952
+ \subsection{Scale invariance: to brute force or finesse?}
953
+ \seclabel{scale}
954
+ We compare two strategies for achieving scale-invariant object detection: brute-force learning (single scale) and image pyramids (multi-scale).
955
+ In either case, we define the scale $s$ of an image to be the length of its \emph{shortest} side.
956
+
957
+ All single-scale experiments use $s = 600$ pixels;
958
+ $s$ may be less than $600$ for some images as we cap the longest image side at $1000$ pixels and maintain the image's aspect ratio.
959
+ These values were selected so that \vggsixteen fits in GPU memory during fine-tuning.
960
+ The smaller models are not memory bound and can benefit from larger values of $s$; however, optimizing $s$ for each model is not our main concern.
961
+ We note that PASCAL images are $384 \times 473$ pixels on average and thus the single-scale setting typically upsamples images by a factor of 1.6.
962
+ The average effective stride at the \roi pooling layer is thus $\approx 10$ pixels.
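+
+ A small sketch of this rescaling rule (an illustrative helper, not taken from the released code): the shortest side is scaled to $600$ pixels, capped so that the longest side stays within $1000$ pixels, which for an average $384 \times 473$ PASCAL image gives a factor of roughly 1.6:
+ \begin{verbatim}
+ def single_scale_factor(height, width, target_short=600, max_long=1000):
+     short, long_ = min(height, width), max(height, width)
+     scale = target_short / short      # shortest side -> target_short
+     if long_ * scale > max_long:      # cap the longest side
+         scale = max_long / long_
+     return scale
+
+ print(single_scale_factor(384, 473))  # ~1.56, i.e. roughly 1.6x upsampling
+ \end{verbatim}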
963
+
964
+ In the multi-scale setting, we use the same five scales specified in \cite{he2014spp} ($s \in \{480, 576, 688, 864, 1200\}$) to facilitate comparison with SPPnet.
965
+ However, we cap the longest side at $2000$ pixels to avoid exceeding GPU memory.
966
+
967
+ \begin{table}[h!]
968
+ \begin{center}
969
+ \setlength{\tabcolsep}{4.7pt}
970
+ \renewcommand{\arraystretch}{1.1}
971
+ \small
972
+ \begin{tabular}{l|rr|rr|rr|r}
973
+ & \multicolumn{2}{c|}{SPPnet \ZF} & \multicolumn{2}{c|}{\Sm} & \multicolumn{2}{c|}{\Med} & \Lg \\
974
+ \hline
975
+ scales & 1 & 5 & 1 & 5 & 1 & 5 & 1 \\
976
+ test rate (s/im) & 0.14 & 0.38 & \bf{0.10} & 0.39 & 0.15 & 0.64 & 0.32 \\
977
+ VOC07 mAP & 58.0 & 59.2 & 57.1 & 58.4 & 59.2 & 60.7 & \bf{66.9}
978
+ \end{tabular}
979
+ \end{center}
980
+ \caption{Multi-scale vs. single scale.
981
+ SPPnet \ZF (similar to model \Sm) results are from \cite{he2014spp}.
982
+ Larger networks with a single-scale offer the best speed / accuracy tradeoff.
983
+ (\Lg cannot use multi-scale in our implementation due to GPU memory constraints.)
984
+ }
985
+ \tablelabel{scales}
986
+ \vspace{-0.5em}
987
+ \end{table}
988
+
989
+ \tableref{scales} shows models \Sm and \Med when trained and tested with either one or five scales.
990
+ Perhaps the most surprising result in \cite{he2014spp} was that single-scale detection performs almost as well as multi-scale detection.
991
+ Our findings confirm their result: deep ConvNets are adept at directly learning scale invariance.
992
+ The multi-scale approach offers only a small increase in mAP at a large cost in compute time (\tableref{scales}).
993
+ In the case of \vggsixteen (model \Lg), we are limited to using a single scale by implementation details. Yet it achieves a mAP of 66.9\%, which is slightly higher than the 66.0\% reported for R-CNN \cite{rcnn-pami}, even though R-CNN uses ``infinite'' scales in the sense that each proposal is warped to a canonical size.
994
+
995
+ Since single-scale processing offers the best tradeoff between speed and accuracy, especially for very deep models, all experiments outside of this sub-section use single-scale training and testing with $s = 600$ pixels.
996
+
997
+ \subsection{Do we need more training data?}
998
+ \seclabel{moredata}
999
+ A good object detector should improve when supplied with more training data.
1000
+ Zhu \etal \cite{devaMoreData} found that DPM \cite{lsvm-pami} mAP saturates after only a few hundred to thousand training examples.
1001
+ Here we augment the VOC07 trainval set with the VOC12 trainval set, roughly tripling the number of images to 16.5k, to evaluate Fast R-CNN.
1002
+ Enlarging the training set improves mAP on VOC07 test from 66.9\% to 70.0\% (\tableref{voc2007}).
1003
+ When training on this dataset we use 60k mini-batch iterations instead of 40k.
1004
+
1005
+ We perform similar experiments for VOC10 and 2012, for which we construct a dataset of 21.5k images from the union of VOC07 trainval, test, and VOC12 trainval.
1006
+ When training on this dataset, we use 100k SGD iterations and lower the learning rate by $0.1\times$ each 40k iterations (instead of each 30k).
1007
+ For VOC10 and 2012, mAP improves from 66.1\% to 68.8\% and from 65.7\% to 68.4\%, respectively.
1008
+
1009
+
1010
+ \subsection{Do SVMs outperform softmax?}
1011
+ Fast R-CNN uses the softmax classifier learnt during fine-tuning instead of training one-vs-rest linear SVMs post-hoc, as was done in R-CNN and SPPnet.
1012
+ To understand the impact of this choice, we implemented post-hoc SVM training with hard negative mining in Fast R-CNN.
1013
+ We use the same training algorithm and hyper-parameters as in R-CNN.
1014
+ \begin{table}[h!]
1015
+ \begin{center}
1016
+ \setlength{\tabcolsep}{6pt}
1017
+ \renewcommand{\arraystretch}{1.1}
1018
+ \small
1019
+ \begin{tabular}{l|l|r|r|r}
1020
+ method & classifier & \Sm & \Med & \Lg \\
1021
+ \hline
1022
+ R-CNN \cite{girshick2014rcnn,rcnn-pami} & SVM & \bf{58.5} & \bf{60.2} & 66.0 \\
1023
+ \hline
1024
+ FRCN [ours] & SVM & 56.3 & 58.7 & 66.8 \\
1025
+ FRCN [ours] & softmax & 57.1 & 59.2 & \bf{66.9} \\
1026
+ \end{tabular}
1027
+ \end{center}
1028
+ \caption{Fast R-CNN with softmax vs. SVM (VOC07 mAP).}
1029
+ \tablelabel{svm}
1030
+ \vspace{-0.5em}
1031
+ \end{table}
1032
+
1033
+ \tableref{svm} shows softmax slightly outperforming SVM for all three networks, by $+0.1$ to $+0.8$ mAP points.
1034
+ This effect is small, but it demonstrates that ``one-shot'' fine-tuning is sufficient compared to previous multi-stage training approaches.
1035
+ We note that softmax, unlike one-vs-rest SVMs, introduces competition between classes when scoring a \roi.
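+
+ To illustrate this last point with a toy example (the scores are made up), softmax couples the per-class scores of a \roi, whereas one-vs-rest SVM scores are independent:
+ \begin{verbatim}
+ import numpy as np
+
+ def softmax(s):
+     e = np.exp(s - s.max())
+     return e / e.sum()
+
+ scores = np.array([2.0, 1.5, -1.0])   # per-class scores for one RoI
+ print(softmax(scores))                # probabilities sum to 1
+ scores[1] += 2.0                      # raising one class's score...
+ print(softmax(scores))                # ...lowers the others' probabilities;
+                                       # independent SVM scores would not change
+ \end{verbatim}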
1036
+
1037
+ \subsection{Are more proposals always better?}
1038
+
1039
+
1040
+
1041
+ There are (broadly) two types of object detectors: those that use a \emph{sparse} set of object proposals (\eg, selective search \cite{UijlingsIJCV2013}) and those that use a \emph{dense} set (\eg, DPM \cite{lsvm-pami}).
1042
+ Classifying sparse proposals is a type of \emph{cascade} \cite{Viola01} in which the proposal mechanism first rejects a vast number of candidates, leaving the classifier with a small set to evaluate.
1043
+ This cascade improves detection accuracy when applied to DPM detections \cite{UijlingsIJCV2013}.
1044
+ We find evidence that the proposal-classifier cascade also improves Fast R-CNN accuracy.
1045
+
1046
+ Using selective search's \emph{quality mode}, we sweep from 1k to 10k proposals per image, each time \emph{re-training} and \emph{re-testing} model \Med.
1047
+ If proposals serve a purely computational role, increasing the number of proposals per image should not harm mAP.
1048
+ \begin{figure}[h!]
1049
+ \centering
1050
+ \includegraphics[width=1\linewidth,trim=0em 0em 0 0, clip]{figs/proposals.pdf}
1051
+ \caption{VOC07 test mAP and AR for various proposal schemes.}
1052
+ \figlabel{proposals}
1053
+ \end{figure}
1054
+
1055
+ We find that mAP rises and then falls slightly as the proposal count increases (\figref{proposals}, solid blue line).
1056
+ This experiment shows that swamping the deep classifier with more proposals does not help, and even slightly hurts, accuracy.
1057
+
1058
+ This result is difficult to predict without actually running the experiment.
1059
+ The state-of-the-art for measuring object proposal quality is Average Recall (AR) \cite{Hosang15proposals}.
1060
+ AR correlates well with mAP for several proposal methods using R-CNN, \emph{when using a fixed number of proposals per image}.
1061
+ \figref{proposals} shows that AR (solid red line) does not correlate well with mAP as the number of proposals per image is varied.
1062
+ AR must be used with care; higher AR due to more proposals does not imply that mAP will increase.
1063
+ Fortunately, training and testing with model \Med takes less than 2.5 hours.
1064
+ Fast R-CNN thus enables efficient, direct evaluation of object proposal mAP, which is preferable to proxy metrics.
1065
+
1066
+
1067
+ We also investigate Fast R-CNN when using \emph{densely} generated boxes (over scale, position, and aspect ratio), at a rate of about 45k boxes / image.
1068
+ This dense set is rich enough that when each selective search box is replaced by its closest (in IoU) dense box, mAP drops only 1 point (to 57.7\%, \figref{proposals}, blue triangle).
1069
+
1070
+ The statistics of the dense boxes differ from those of selective search boxes.
1071
+ Starting with 2k selective search boxes, we test mAP when \emph{adding} a random sample of $1000 \times \{2,4,6,8,10,32,45\}$ dense boxes.
1072
+ For each experiment we re-train and re-test model \Med.
1073
+ When these dense boxes are added, mAP falls more strongly than when adding more selective search boxes, eventually reaching 53.0\%.
1074
+
1075
+ We also train and test Fast R-CNN using \emph{only} dense boxes (45k / image).
1076
+ This setting yields a mAP of 52.9\% (blue diamond).
1077
+ Finally, we check if SVMs with hard negative mining are needed to cope with the dense box distribution.
1078
+ SVMs do even worse: 49.3\% (blue circle).
1079
+
1080
+
1081
+
1082
+ \subsection{Preliminary MS COCO results}
1083
+ We applied Fast R-CNN (with \vggsixteen) to the MS COCO dataset \cite{coco} to establish a preliminary baseline.
1084
+ We trained on the 80k image training set for 240k iterations and evaluated on the ``test-dev'' set using the evaluation server.
1085
+ The PASCAL-style mAP is 35.9\%; the new COCO-style AP, which also averages over IoU thresholds, is 19.7\%.
1086
+ \section{Conclusion}
1087
+
1088
+ This paper proposes Fast R-CNN, a clean and fast update to R-CNN and SPPnet.
1089
+ In addition to reporting state-of-the-art detection results, we present detailed experiments that we hope provide new insights.
1090
+ Of particular note, sparse object proposals appear to improve detector quality.
1091
+ This issue was too costly (in time) to probe in the past, but becomes practical with Fast R-CNN.
1092
+ Of course, there may exist yet undiscovered techniques that allow dense boxes to perform as well as sparse proposals.
1093
+ Such methods, if developed, may help further accelerate object detection.
1094
+
1095
+ \paragraph{Acknowledgements.}
1096
+ I thank Kaiming He, Larry Zitnick, and Piotr Doll{\'a}r for helpful discussions and encouragement.
1097
+
1098
+ {\small
1099
+ \bibliographystyle{ieee}
1100
+ \bibliography{main}
1101
+ }
1102
+
1103
+ \end{document}
papers/1505/1505.01197.tex ADDED
@@ -0,0 +1,388 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{iccv}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage[usenames,dvipsnames]{xcolor}
10
+ \usepackage{lib}
11
+
12
+
13
+
14
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
15
+
16
+ \iccvfinalcopy
17
+
18
+ \def\iccvPaperID{0750} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
19
+
20
+
21
+ \newcommand{\vecwp}{\bfw_\textrm{p}^{\alpha}}
22
+ \newcommand{\vecws}{\bfw_\textrm{s}^{\alpha}}
23
+
24
+ \ificcvfinal\pagestyle{empty}\fi
25
+ \begin{document}
26
+
27
+ \title{Contextual Action Recognition with R*CNN}
28
+
29
+ \author{Georgia Gkioxari\\
30
+ UC Berkeley\\
31
+ {\tt\small gkioxari@eecs.berkeley.edu}
32
+ \and
33
+ Ross Girshick\\
34
+ Microsoft Research\\
35
+ {\tt\small rbg@microsoft.com}
36
+ \and
37
+ Jitendra Malik\\
38
+ UC Berkeley\\
39
+ {\tt\small malik@eecs.berkeley.edu}
40
+ }
41
+
42
+ \maketitle
43
+
44
+
45
+
46
+ \begin{abstract}
47
+ There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (\eg road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R$^*$CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R$^*$CNN achieves 90.2\% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Lastly, we show that R$^*$CNN is not limited to action recognition. In particular, R$^*$CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.\footnote{Source code and models are available at \url{https://github.com/gkioxari/RstarCNN}}
48
+ \end{abstract}
49
+
50
+ \section{Introduction}
51
+ \seclabel{intro}
52
+
53
+ Consider \figref{fig:Fig1} (a). How do we know that the person highlighted with the red box is working on a computer? Could it be that the computer is visible in the image, is it that the person in question has a very specific pose or is it that he is sitting in an office environment? Likewise, how do we know that the person in \figref{fig:Fig1} (b) is running? Is it the running-specific pose of her arms and legs or do the scene and the other people nearby also convey the action?
54
+
55
+ For the task of action recognition from still images, the pose of the person in question, the identity of the objects surrounding them and the way they interact with those objects and the scene are vital cues. In this work, our objective is to use all available cues to perform activity recognition.
56
+
57
+ Formally, we adapt the Region-based Convolutional Network method (RCNN) \cite{girshick2014rcnn} to use more than one region when making a prediction. We call our method \emph{R$^*$CNN}. In R$^*$CNN, we have a primary region that contains the person in question and a secondary region that automatically discovers contextual cues.
58
+
59
+ \begin{figure}
60
+ \begin{center}
61
+ \includegraphics[width=1.0\linewidth]{figures/FR-CNN_fig1.pdf}
62
+ \end{center}
63
+ \caption{Examples of people performing actions.}
64
+ \figlabel{fig:Fig1}
65
+ \end{figure}
66
+
67
+ \begin{figure*}[t!]
68
+ \begin{center}
69
+ \includegraphics[width=1.0\linewidth,trim=0 4em 0 0,clip=true]{figures/FR-CNN_main.pdf}
70
+ \end{center}
71
+ \caption{Schematic overview of our approach. Given image $I$, we select the primary region to be the bounding box containing the person (\textcolor{red}{red box}) while region proposals define the set of candidate secondary regions (\textcolor{green}{green boxes}). For each action $\alpha$, the most informative secondary region is selected (\emph{max} operation) and its score is added to the primary. The \emph{softmax} operation transforms scores into probabilities and forms the final prediction.}
72
+ \figlabel{fig:Overview}
73
+ \end{figure*}
74
+
75
+
76
+ How do we select the secondary region? In other words, how do we decide which region contains information about the action being performed? Inspired by multiple-instance learning (MIL) \cite{Viola05, Maron98} and Latent SVM \cite{lsvm-pami}, if $I$ is an image and $r$ is a region in $I$ containing the target person, we define the $\textrm{score}$ of action $\alpha$ as
77
+
78
+ \begin{equation}
79
+ \textrm{score}(\alpha ; I,r) = \vecwp \cdot \bphi(r ; I) + \max_{s \in R(r ; I)} \vecws \cdot \bphi(s ; I),
80
+ \eqlabel{eq:MIL}
81
+ \end{equation}
82
+ where $\bphi(r ; I)$ is a vector of features extracted from region $r$ in $I$, while $\vecwp$ and $\vecws$ are the primary and secondary weights for action $\alpha$ respectively. $R(r ; I)$ defines the set of candidates for the secondary region. For example, $R(r ; I)$ could be the set of regions in the proximity of $r$, or even the whole set of regions in $I$. Given scores for each action, we use a softmax to compute the probability that the person in $r$ is performing action $\alpha$:
83
+
84
+ \begin{equation}
85
+ P(\alpha | I, r) = \frac{\exp(\textrm{score}(\alpha ; I, r) )}{\sum_{\alpha' \in A} \exp(\textrm{score}(\alpha' ; I, r) )}.
86
+ \eqlabel{eq:Softmax}
87
+ \end{equation}
88
+
89
+ The feature representation $\bphi(\cdot)$ and the weight vectors $\vecwp$ and $\vecws$ in \eqref{eq:MIL} are learned \emph{jointly} for all actions $\alpha \in A$ using a CNN trained with stochastic gradient descent (SGD). We build on the Fast RCNN implementation \cite{FastRCNN}, which efficiently processes a large number of regions per image. \figref{fig:Overview} shows the architecture of our network.
90
+
91
+
92
+ We quantify the performance of R$^*$CNN for action recognition using two datasets: PASCAL VOC Actions \cite{PASCAL-ijcv} and the MPII Human Pose dataset \cite{andriluka14cvpr}.
93
+ On PASCAL VOC, R$^*$CNN yields 90.2\% mean AP, improving the previous state-of-the-art approach \cite{Simonyan2015} by 6 percentage points, according to the leaderboard \cite{leaderboard}. We visualize the selected secondary regions in \figref{fig:VOC_test} and show that indeed the secondary models learn to pick auxiliary cues as desired.
94
+ On the larger MPII dataset, R$^*$CNN yields 26.7\% mean AP, compared to 5.5\% mean AP achieved by the best performing approach, as reported by \cite{mpii-action}, which uses holistic \cite{wang2013} and pose-specific features along with motion cues.
95
+
96
+ In addition to the task of action recognition, we show that R$^*$CNN can successfully be used for fine-grained tasks. We experiment with the task of attribute recognition and achieve state-of-the-art performance on the Berkeley Attributes of People dataset \cite{BourdevAttributesICCV11}. Our visualizations in \figref{fig:Attributes} show that the secondary regions capture the parts specific to the attribute class being considered. \section{Related Work}
97
+ \seclabel{related}
98
+
99
+ \paragraph{Action recognition.} There is a variety of work in the field of action recognition in static images. The majority of the approaches use holistic cues, by extracting features on the person bounding box and combining them with contextual cues from the whole image and object models.
100
+
101
+ Maji \etal \cite{MajiActionCVPR11} train action specific poselets and for each instance create a poselet activation vector that is classified using SVMs. They capture contextual cues in two ways: they explicitly detect objects using pre-trained models for the \emph{bicycle, motorbike, horse} and \emph{tvmonitor} categories and exploit knowledge of actions of other people in the image.
102
+ Hoai \etal \cite{Hoai-BMVC14} use body-part detectors and align them with respect to the parts of a similar instance, thus aligning their feature descriptors. They combine the part based features with object detection scores and train non-linear SVMs.
103
+ Khosla \etal \cite{CVPR11_0254} densely sample image regions at arbitrary locations and scales with reference to the ground-truth region. They train a random forest classifier to discriminate between different actions.
104
+ Prest \etal \cite{prest2012} learn human-object interactions using only action labels. They localize the action object by finding recurring patterns on images of actions and then capture their relative spatial relations.
105
+ The aforementioned approaches are based on hand-engineered features such as HOG \cite{Dalal05} and SIFT \cite{SIFT}.
106
+
107
+ CNNs achieve state-of-the-art performance on handwritten digit classification \cite{lecun-89e}, and have recently been applied to various tasks in computer vision such as image classification \cite{krizhevsky2012imagenet, Simonyan2015} and object detection \cite{girshick2014rcnn} with impressive results.
108
+ For the task of action recognition, Oquab \etal \cite{Oquab14} use a CNN on ground-truth boxes, but observe a small gain in performance compared to previous methods.
109
+ Hoai \cite{Hoai-BMVC14-RMP} uses a geometrical distribution of regions placed in the image and in the ground-truth box and weights their scores to make a single prediction, using fc7 features from a network trained on the ImageNet-1k dataset \cite{ILSVRC12}.
110
+ Gkioxari \etal \cite{deepparts} train body part detectors (\emph{head, torso, legs}) on pool5 features in a sliding-window manner and combine them with the ground-truth box to jointly train a CNN.
111
+
112
+ Our work differs from the above-mentioned approaches in the following ways.
113
+ We use bottom up region proposals \cite{UijlingsIJCV2013} as candidates for secondary regions, instead of anchoring regions of specific aspect ratios and at specific locations in the image, and without relying on the reference provided by the ground-truth bounding box. Region proposals have been shown to be effective object candidates allowing for detection of objects irrespective of occlusion and viewpoint.
114
+ We jointly learn the feature maps and the weights of the scoring models, allowing for action specific representations to emerge. These representations might refer to human-object relations, human-scene relations and human-human relations. This approach is contrary to work that predefines the relations to be captured or that makes use of hand-engineered features, or features from networks trained for different tasks. We allow the classifier to pick the most informative secondary region for the task at hand. As we show in \secref{results}, the selected secondary region is instance specific and can be an object (\eg, cell phone), a part of the scene (\eg, nearby bicycles), the whole scene, or part of the human body.
115
+
116
+
117
+ \paragraph{Scene and Context.} The scene and its role in vision and perception have been studied for a long time. Biederman \etal \cite{biederman1982scene} identify five classes of relationships (presence, position, size, support and interposition) between an object and its setting and conduct experiments to measure how well humans identify objects when those relationships are violated. They found that the ability to recognize objects is much weaker and it becomes worse as violations become more severe. More recently, Oliva and Torralba \cite{oliva2007} study the contextual associations of objects with their scene and link various forms of context cues with computer vision.
118
+
119
+ \paragraph{Multiple-Instance Learning.} Multiple instance learning (MIL) provides a framework for training models when full supervision is not available at train time. Instead of accurate annotations, the data forms bags, with a positive or a negative label \cite{Maron98}. There is a lot of work on MIL for computer vision tasks. For object detection, Viola \etal \cite{Viola05} use MIL and boosting to obtain face detectors when ground truth object face locations are not accurately provided at train time. More recently, Song \etal \cite{song14slsvm} use MIL to localize objects with binary image-level labels (is the object present in the image or not). For the task of image classification, Oquab \etal \cite{oquab2015} modify the CNN architecture \cite{krizhevsky2012imagenet}, which divides the image into equal sized regions and combines their scores via a final max pooling layer to classify the whole image. Fang \etal \cite{haoCVPR15} follow a similar technique to localize concepts useful for image caption generation.
120
+
121
+ In this work, we treat the secondary region for each training example as an unknown latent variable.
122
+ During training, each time an example is sampled, the forward pass of the CNN infers the current value of this latent variable through a max operation.
123
+ This is analogous to latent parts locations and component models in DPM \cite{lsvm-pami}.
124
+ However, here we perform end-to-end optimization with an online algorithm (SGD), instead of optimizing a Latent SVM.
125
+ \section{Implementation}
126
+ \seclabel{approach}
127
+
128
+ \figref{fig:Overview} shows the architecture of our network. Given an image $I$, we select the primary region to be the bounding box containing the person (knowledge of this box is given at test time in all action datasets). Bottom up region proposals form the set of candidate secondary regions. For each action $\alpha$, the most informative region is selected through the \emph{max} operation and its score is added to the primary (\eqref{eq:MIL}). The \emph{softmax} operation transforms scores into estimated posterior probabilities (\eqref{eq:Softmax}), which are used to predict action labels.
129
+
130
+ \subsection{R$^*$CNN}
131
+ We build on Fast RCNN (FRCN) \cite{FastRCNN}. In FRCN, the input image is upsampled and passed through the convolutional layers. An adaptive max pooling layer takes as input the output of the last convolutional layer and a list of regions of interest (ROIs).
132
+ It outputs a feature map of fixed size (\eg $7\times7$ for the 16-layer CNN by \cite{Simonyan2015}) specific to each ROI. The ROI-pooled features are subsequently passed through the fully connected layers to make the final prediction. This implementation is efficient, since the computationally intense convolutions are performed at an image-level and are subsequently being reused by the ROI-specific operations.
133
+
134
+ The test-time operation of FRCN is similar to SPPnet \cite{sppnets}. However, the training algorithm is different and enables fine-tuning all network layers, not just those above the final ROI pooling layer, as in \cite{sppnets}.
135
+ This property is important for maximum classification accuracy with very deep networks.
136
+
137
+ In our implementation, we extend the FRCN pipeline. Each primary region $r$ of an image $I$ predicts a score for each action $\alpha \in A$ (top stream in \figref{fig:Overview}). At the same time, each region within the set of candidate secondary regions $R(r ; I)$ independently makes a prediction.
138
+ These scores are combined, for each primary region $r$, by a \emph{max} operation over $r$'s candidate regions (bottom stream in \figref{fig:Overview}).
139
+
140
+
141
+ We define the set of candidate secondary regions $R(r ; I)$ as
142
+ \begin{equation}
143
+ R(r ; I) = \{ s \in S(I) : \textrm{overlap}(s,r) \in [l,u] \} ,
144
+ \eqlabel{eq:R}
145
+ \end{equation}
146
+ where $S(I)$ is the set of region proposals for image $I$. In our experiments, we use Selective Search \cite{UijlingsIJCV2013}. The lower and upper bounds for the overlap, which here is defined as the intersection over union between the boxes, defines the set of the regions that are considered as secondary for each primary region. For example, if $l=0$ and $u=1$ then $R(r ; I) = S(I)$, for each $r$, meaning that all bottom up proposals are candidates for secondary regions.
147
+
148
+ \subsection{Learning}
149
+
150
+ We train R$^*$CNN with stochastic gradient descent (SGD) using backpropagation. We adopt the 16-layer network architecture from \cite{Simonyan2015}, which has been shown to perform well for image classification and object detection.
151
+
152
+ During training, we minimize the log loss of the predictions. If $P(\alpha~|~I, r)$ is the softmax probability that action $\alpha$ is performed in region $r$ in image $I$ computed by \eqref{eq:Softmax}, then the loss over a batch of training examples $B = \{ I_i, r_i, l_i\}_{i=1}^{M}$ is given by
153
+
154
+ \begin{equation}
155
+ \textrm{loss}(B) = -\frac{1}{M}\sum_{i=1}^{M} \log P(\alpha = l_i~|~I_i, r_i),
156
+ \eqlabel{eq:Loss}
157
+ \end{equation}
158
+ where $l_i$ is the true label of example $r_i$ in image $I_i$.
159
+
160
+
161
+ Rather than limiting training to the ground-truth person locations, we use all regions that overlap more than $0.5$ with a ground-truth box. This condition serves as a form of data augmentation. For every primary region, we randomly select $N$ regions from the set of candidate secondary regions. $N$ is a function of the GPU memory limit (we use an Nvidia K40 GPU) and the batch size.
162
+
163
+ We fine-tune our network starting with a model trained on ImageNet-1K for the image classification task. We tie the weights of the fully connected primary and secondary layers (\emph{fc6, fc7}), but not for the final scoring models. We set the learning rate to $0.0001$, the batch size to $30$ and consider 2 images per batch. We pick $N=10$ and train for 10K iterations.
164
+ Larger learning rates prevented fine-tuning from converging.
165
+
166
+ Due to the architecture of our network, most computation time is spent during the initial convolutions, which happen over the whole image. Computation does not scale much with the number of boxes, contrary to the original implementation of RCNN \cite{girshick2014rcnn}. Training takes 1s per iteration, while testing takes 0.4s per image.
167
+ \section{Results}
168
+ \seclabel{results}
169
+
170
+ We demonstrate the effectiveness of R$^*$CNN on action recognition from static images on the PASCAL VOC Actions dataset \cite{PASCAL-ijcv}, the MPII Human Pose dataset \cite{andriluka14cvpr} and the Stanford 40 Actions dataset \cite{yao2011human}.
171
+
172
+ \subsection{PASCAL VOC Action}
173
+
174
+ The PASCAL VOC Action dataset consists of 10 different actions, \emph{Jumping, Phoning, Playing Instrument, Reading, Riding Bike, Riding Horse, Running, Taking Photo, Using Computer, Walking}, as well as examples of people not performing any of the above actions, which are marked as \emph{Other}. The ground-truth boxes containing the people are provided both at train and test time. During test time, for every example we estimate probabilities for all actions and compute AP.
175
+
176
+ \subsubsection{Control Experiments}
177
+
178
+ We experiment with variants of our system to show the effectiveness of R$^*$CNN.
179
+
180
+ \begin{itemize}
181
+
182
+ \item {\bf{RCNN}}. As a baseline approach we train Fast R-CNN for the task of action classification. This network exploits only the information provided by the primary region, which is defined as the ground-truth region.
183
+
184
+ \item {\bf{Random-RCNN}}. We use the ground-truth box as a primary region and a box randomly selected from the secondary regions. We train a network for this task similar to R$^*$CNN with the \emph{max} operation replaced by \emph{rand}.
185
+
186
+ \item {\bf{Scene-RCNN}}. We use the ground-truth box as the primary region and the whole image as the secondary. We jointly train a network for this task, similar to R$^*$CNN, where the secondary model learns action specific weights solely from the scene (no \emph{max} operation is performed in this case).
187
+
188
+ \item {\bf{R$^*$CNN $(l,u)$}}. We experiment with various combinations of values for the only free parameters of our pipeline, namely the bounds $(l,u)$ of the overlaps used when defining the secondary regions $R(r; I)$, where $r$ is the primary region.
189
+
190
+ \item {\bf{R$^*$CNN $(l,u,n_S)$}}. In this setting, we use $n_S>1$ secondary regions instead of one. The secondary regions are selected in a greedy manner. First we select the secondary region $s_1$ exactly as in R$^*$CNN. The $i$-th secondary region $s_i$ is selected via the \emph{max} operation from the set $R(r ; I) \cap R(s_1 ; I) \cap ... \cap R(s_{i-1} ; I)$, where $r$ is the primary region.
191
+
192
+ \end{itemize}
193
+
194
+ The Random- and Scene- settings show the value of selecting the most informative region, rather than forcing the secondary region to be the scene or a region selected at random.
195
+
196
+ \tableref{tab:Action_val} shows the performance of all the variants on the val set of the PASCAL VOC Actions. Our experiments show that R$^*$CNN performs better across all categories. In particular, \emph{Phoning}, \emph{Reading}, \emph{Taking Photo} perform significantly better than the baseline approach and Scene-RCNN. \emph{Riding Bike}, \emph{Riding Horse} and \emph{Running} show the smallest improvement, probably due to scene bias of the images containing those actions. Another interesting observation is that our approach is not sensitive to the bounds of overlap $(l,u)$. R$^*$CNN is able to perform very well even for the unconstrained setting where all regions are allowed to be picked by the secondary model, $(l=0, u=1)$. In our basic R$^*$CNN setting, we use one secondary region. However, one region might not be able to capture all the modes of contextual cues present in the image. Therefore, we extend R$^*$CNN to include $n_S$ secondary regions. Our experiments show that for $n_S=2$ the performance is the same as with R$^*$CNN for the optimal set of parameters of $(l=0.2, u=0.75)$.
197
+
198
+ \begin{table*}
199
+ \centering
200
+ \renewcommand{\arraystretch}{1.2}
201
+ \renewcommand{\tabcolsep}{1.2mm}
202
+ \resizebox{\linewidth}{!}{
203
+ \begin{tabular}{@{}l|r*{9}{c}|cc@{}}
204
+ AP (\%) & Jumping & Phoning & Playing Instrument & Reading & Riding Bike & Riding Horse & Running & Taking Photo & Using Computer & Walking & mAP \\
205
+ \hline
206
+ RCNN & 88.7 & 72.6 & 92.6 & 74.0 & 96.1 & 96.9 & 86.1 & 83.3 & 87.0 & 71.5 & 84.9 \\
207
+ Random-RCNN & 89.1 & 72.7 & 92.9 & 74.4 & 96.1 & 97.2 & 85.0 & 84.2 & 87.5 & 70.4 & 85.0 \\
208
+ Scene-RCNN & 88.9 & 72.5 & 93.4 & 75.0 & 95.6 & 98.1 & 88.6 & 83.2 & 90.4 & 71.5 & 85.7 \\
209
+ R$^*$CNN (0.0, 0.5) & 89.1 & 80.0 & \bf{95.6} & 81.0 & \bf{97.3} & 98.7 & 85.5 & \bf{85.6} & 93.4 & 71.5 & 87.8 \\
210
+ R$^*$CNN (0.2, 0.5) & 88.1 & 75.4 & 94.2 & 80.1 & 95.9 & 97.9 & 85.6 & 84.5 & 92.3 & \bf{71.6} & 86.6 \\
211
+ R$^*$CNN (0.0, 1.0) & \bf{89.2} & 77.2 & 94.9 & \bf{83.7} & 96.7 & \bf{98.6} & 87.0 & 84.8 & 93.6 & 70.1 & 87.6 \\
212
+ R$^*$CNN (0.2, 0.75) & 88.9 & 79.9 & 95.1 & 82.2 & 96.1 & 97.8 & \bf{87.9} & 85.3 & 94.0 & 71.5 & \bf{87.9} \\
213
+ R$^*$CNN (0.2, 0.75, 2) & 87.7 & \bf{80.1} & 94.8 & 81.1 & 95.5 & 97.2 & 87.0 & 84.7 & \bf{94.6} & 70.1 & 87.3
214
+ \end{tabular}
215
+ }
216
+ \vspace{0.1em}
217
+ \caption{AP on the PASCAL VOC Action 2012 val set. \emph{RCNN} is the baseline approach, with the ground-truth region being the primary region. \emph{Random-RCNN} is a network trained with primary the ground-truth region and secondary a random region. \emph{Scene-RCNN} is a network trained with primary the ground-truth region and secondary the whole image. \emph{R$^*$CNN $(l, u)$} is our system where $l,u$ define the lower and upper bounds of the allowed overlap of the secondary region with the ground truth. \emph{R$^*$CNN $(l, u, n_S)$} is a variant in which $n_S$ secondary regions are used, instead of one.}
218
+ \tablelabel{tab:Action_val}
219
+ \end{table*}
220
+
221
+ \begin{table*}
222
+ \centering
223
+ \renewcommand{\arraystretch}{1.2}
224
+ \renewcommand{\tabcolsep}{1.2mm}
225
+ \resizebox{\linewidth}{!}{
226
+ \begin{tabular}{@{}l|c|r*{9}{c}|cc@{}}
227
+ AP (\%) & CNN layers & Jumping & Phoning & Playing Instrument & Reading & Riding Bike & Riding Horse & Running & Taking Photo & Using Computer & Walking & mAP \\
228
+ \hline
229
+ Oquab \etal \cite{Oquab14} & 8 & 74.8 & 46.0 & 75.6 & 45.3 & 93.5 & 95.0 & 86.5 & 49.3 & 66.7 & 69.5 & 70.2 \\
230
+ Hoai \cite{Hoai-BMVC14-RMP} & 8 & 82.3 & 52.9 & 84.3 & 53.6 & 95.6 & 96.1 & 89.7 & 60.4 & 76.0 & 72.9 & 76.3\\
231
+ Gkioxari \etal \cite{deepparts} & 16 & 84.7 & 67.8 & 91.0 & 66.6 & 96.6 & 97.2 & 90.2 & 76.0 & 83.4 & 71.6 & 82.6 \\
232
+ Simonyan \& Zisserman \cite{Simonyan2015} & 16 \& 19 & 89.3 & 71.3 & \bf{94.7} & 71.3 & \bf{97.1} & 98.2 & 90.2 & 73.3 & 88.5 & 66.4 & 84.0 \\
233
+ R$^*$CNN & 16 & \bf{91.5} & \bf{84.4} & 93.6 & \bf{83.2} & 96.9 & \bf{98.4} & \bf{93.8} & \bf{85.9} & \bf{92.6} & \bf{81.8} & \bf{90.2}
234
+ \end{tabular}
235
+ }
236
+ \vspace{0.1em}
237
+ \caption{AP on the PASCAL VOC Action 2012 test set. Oquab \etal \cite{Oquab14} train an 8-layer network on ground-truth boxes. Gkioxari \etal \cite{deepparts} use part detectors for \emph{head, torso, legs} and train a CNN. Hoai \cite{Hoai-BMVC14-RMP} uses an 8-layer network to extract fc7 features from regions at multiple locations and scales. Simonyan and Zisserman \cite{Simonyan2015} combine a 16-layer and a 19-layer network and train SVMs on fc7 features from the image and the ground-truth box. R$^*$CNN (with $(l=0.2, u = 0.75)$) outperforms all other approaches by a significant margin. }
238
+ \tablelabel{tab:Action_test}
239
+ \end{table*}
240
+
241
+
242
+ \subsubsection{Comparison with published results}
243
+
244
+ We compare R$^*$CNN to other approaches on the PASCAL VOC Action test set. \tableref{tab:Action_test} shows the results.
245
+ Oquab \etal \cite{Oquab14} train an 8-layer network on ground-truth boxes.
246
+ Gkioxari \etal \cite{deepparts} use part detectors for \emph{head, torso, legs} and train a CNN on the part regions and the ground-truth box.
247
+ Hoai \cite{Hoai-BMVC14-RMP} uses an 8-layer network to extract fc7 features from regions at multiple locations and scales inside the image and the box and accumulates their scores to get the final prediction.
248
+ Simonyan and Zisserman \cite{Simonyan2015} combine a 16-layer and a 19-layer network and train SVMs on fc7 features from the image and the ground-truth box. R$^*$CNN (with $(l=0.2, u = 0.75)$) outperforms all other approaches by a substantial margin.
249
+ R$^*$CNN seems to be performing significantly better for actions which involve small objects and action-specific pose appearance, such as \emph{Phoning}, \emph{Reading}, \emph{Taking Photo}, \emph{Walking}.
250
+
251
+ \subsubsection{Visualization of secondary regions}
252
+
253
+ \figref{fig:VOC_test} shows examples from the top predictions for each action on the test set. Each block corresponds to a different action. Red highlights the person to be classified, while green highlights the automatically selected secondary region. For actions \emph{Jumping}, \emph{Running} and \emph{Walking} the secondary region is focused either on body parts (\eg legs, arms) or on more instances surrounding the instance in question (\eg joggers). For \emph{Taking Photo}, \emph{Phoning}, \emph{Reading} and \emph{Playing Instrument} the secondary region focuses almost exclusively on the object and its interaction with the arms. For \emph{Riding Bike}, \emph{Riding Horse} and \emph{Using Computer} it focuses on the object, or the presence of similar instances and the scene.
254
+
255
+ Interestingly, the secondary region seems to be picking different cues depending on the instance in question. For example in the case of \emph{Running}, the selected region might highlight the scene (\eg road), parts of the human body (\eg legs, arms) or a group of people performing the action, as shown in \figref{fig:VOC_test}.
256
+
257
+ \begin{figure*}
258
+ \begin{center}
259
+ \includegraphics[width=0.9\linewidth]{figures/R-CNN_actions_new.png}
260
+ \end{center}
261
+ \caption{Top predictions on the PASCAL VOC Action test set. The instance in question is shown with a \textcolor{red}{red box}, while the selected secondary region with a \textcolor{green}{green box}. The nature of the secondary regions depends on the action and the image itself. Even within the same action category, the most informative cue can vary.}
262
+ \figlabel{fig:VOC_test}
263
+ \end{figure*}
264
+
265
+ \figref{fig:VOC_errs} shows erroneous predictions for each action on the val set (in descending score). Each block corresponds to a different action. The misclassified instance is shown in red and the corresponding secondary region with green. For \emph{Riding Bike} and \emph{Riding Horse}, which achieve a very high AP, the mistakes are of very low score. For \emph{Jumping}, \emph{Phoning} and \emph{Using Computer} the mistakes occur due to confusions with instances of similar pose. In addition, for \emph{Playing Instrument} most of the misclassifications are people performing in concert venues, such as singers. For \emph{Taking Photo} and \emph{Playing Instrument} the presence of the object seems to be causing most misclassifications. For \emph{Running} and \emph{Walking} they seem to often get confused with each other as well as with standing people (an action which is not present explicitly in the dataset).
266
+
267
+ \begin{figure}
268
+ \begin{center}
269
+ \includegraphics[width=0.9\linewidth]{figures/R-CNN_action_errors_new.png}
270
+ \end{center}
271
+ \caption{Top mistakes on the PASCAL VOC Action val set. The misclassified instance is shown in \textcolor{red}{red}, while the selected secondary region in \textcolor{green}{green}.}
272
+ \figlabel{fig:VOC_errs}
273
+ \end{figure}
274
+
275
+ \subsection{MPII Human Pose Dataset}
276
+
277
+ The MPII Human Pose dataset contains 400 actions and consists of approximately 40,000 instances and 24,000 images. The images are extracted from videos from YouTube. The training set consists of 15,200 images and 22,900 instances performing 393 actions. The number of positive training examples per category varies drastically \cite{mpii-action}. The amount of training data ranges from 3 to 476 instances, with an average of 60 positives per action. The annotations do not include a ground-truth bounding box explicitly, but provide a point (anywhere in the human body) and a rough scale of the human. This information can be used to extract a rough location of the instance, which is used as input in our algorithm.
278
+
279
+ \subsubsection{R$^*$CNN vs. RCNN}
280
+
281
+ We split the training set into train and val sets. We make sure that frames of the same video belong to the same split to avoid overfitting. This results in 12,500 instances in train and 10,300 instances in val. We train the baseline RCNN network and R$^*$CNN. We pick $(l=0.2, u=0.5)$ due to the large number of region proposals generated by \cite{UijlingsIJCV2013} (on average 8,000 regions per image).
282
+
283
+ On the val set, RCNN achieves 16.5\% mean AP while R$^*$CNN achieves 21.7\% mean AP, across all actions. \figref{fig:MPII_val} shows the performance on MPII val for RCNN and R$^*$CNN. On the \textbf{left}, we show a scatter plot of the AP for all actions as a function of their training size. On the \textbf{right}, we show the mean AP across actions belonging to one out of three categories, depending on their training size.
284
+
285
+ The performance reported in \figref{fig:MPII_val} is instance-specific. Namely, each instance is evaluated. One could evaluate the performance at the frame-level (as done in \cite{mpii-action}), \ie classify the frame and not the instance. We can generate frame-level predictions by assigning for each action the maximum score across instances in the frame. That yields 18.2\% mean AP for RCNN and 23\% mean AP for R$^*$CNN.
286
+
287
+ \begin{figure}
288
+ \begin{center}
289
+ \includegraphics[width=1.0\linewidth]{figures/mpii_val_2.pdf}
290
+ \end{center}
291
+ \caption{Performance on MPII val for RCNN (\textcolor{ProcessBlue}{blue}) and R$^*$CNN (\textcolor{Brown}{brown}). \textbf{Left:} AP (\%) for all actions as a function of their training size ($x$-axis). \textbf{Right:} Mean AP (\%) for three discrete ranges of training size ($x$-axis).}
292
+ \figlabel{fig:MPII_val}
293
+ \end{figure}
294
+
295
+
296
+ \subsubsection{Comparison with published results}
297
+
298
+ In \cite{mpii-action}, various approaches for action recognition are reported on the test set. All the approaches mentioned use motion features, by using frames in the temporal neighborhood of the frame in question. The authors test variants of Dense Trajectories (DT) \cite{wang2013} which they combine with pose specific features. The best performance on the test set is 5.5\% mean AP (frame-level) achieved by the DT combined with a pose specific approach.
299
+
300
+ We evaluate R$^*$CNN on the test set\footnote{We sent our results to the authors of \cite{mpii-action} for evaluation since test annotations are not publicly available.} and achieve 26.7\% mAP for frame-level recognition. Our approach does not use motion, which is a strong cue for action recognition in video, and yet manages to outperform DT by a significant margin. Evaluation on the test set is performed only at the frame-level.
301
+
302
+ \figref{fig:MPII_test} shows the mean AP across actions in a descending order of training size. This figure allows for a direct comparison with the published results, as shown in Figure 1(b) in \cite{mpii-action}.
303
+
304
+ \figref{fig:MPII_test_res} shows some results on the test set. We highlight the instance in question with red, and the secondary box with green. The boxes for the instances were derived from the point annotations (some point on the person) and the rough scale provided at train and test time. The predicted action label is overlaid in each image.
305
+
306
+ \begin{figure}
307
+ \begin{center}
308
+ \includegraphics[width=0.9\linewidth]{figures/mpii_test.pdf}
309
+ \end{center}
310
+ \caption{Mean AP (\%) on MPII test for R$^*$CNN across actions in descending order of their training size. A direct comparison with published results, as shown in Figure 1(b) in \cite{mpii-action}, can be drawn.}
311
+ \figlabel{fig:MPII_test}
312
+ \end{figure}
313
+
314
+ Even though R$^*$CNN outperforms DT, motion cues are still needed to boost performance for many categories. For example, even though the MPII dataset has many examples for actions such as \emph{Yoga}, \emph{Cooking or food preparation} and \emph{Video exercise workout}, R$^*$CNN performs badly on those categories (1.1\% mean AP). We believe that a hybrid approach which combines image and motion features, similar to \cite{simonyan2014, actiontubes}, would perform even better.
315
+
316
+ \begin{figure*}
317
+ \begin{center}
318
+ \includegraphics[width=0.42\linewidth]{figures/FR-CNN_MPII_res.png}
319
+ \includegraphics[width=0.45\linewidth]{figures/FR-CNN_MPII_res_part2.png}
320
+ \end{center}
321
+ \caption{Predictions on the MPII test set. We highlight the person in question with a \textcolor{red}{red} box, and the secondary region with a \textcolor{green}{green} box. The predicted action label is overlaid. }
322
+ \figlabel{fig:MPII_test_res}
323
+ \end{figure*}
324
+
325
+ \subsection{Stanford 40 Actions Dataset}
326
+
327
+ We run R$^*$CNN on the Stanford 40 Actions dataset \cite{yao2011human}. This dataset consists of 9532 images of people performing 40 different actions. The dataset is split in half into training and test sets. Bounding boxes are provided for all people performing actions. R$^*$CNN achieves an average AP of 90.9\% on the test set, with performance varying from 70.5\% for \emph{texting message} to 100\% for \emph{playing violin}. \figref{fig:stanford40} shows the AP performance per action on the test set. Training code and models are publicly available.
328
+
329
+ \begin{figure}
330
+ \begin{center}
331
+ \includegraphics[width=1.0\linewidth]{figures/stanford40.png}
332
+ \end{center}
333
+ \caption{AP (\%) of R$^*$CNN on the Stanford 40 dataset per action. Performance varies from 70.5\% for \emph{texting message} to 100\% for \emph{playing violin}. The average AP across all actions achieved by our model is 90.9\%.}
334
+ \figlabel{fig:stanford40}
335
+ \end{figure}
336
+
337
+ \subsection{Attribute Classification}
338
+
339
+ \begin{table*}
340
+ \centering
341
+ \renewcommand{\arraystretch}{1.2}
342
+ \renewcommand{\tabcolsep}{1.2mm}
343
+ \resizebox{\linewidth}{!}{
344
+ \begin{tabular}{@{}l|c|r*{8}{c}|cc@{}}
345
+ AP (\%) & CNN layers & Is Male & Has Long Hair & Has Glasses & Has Hat & Has T-Shirt & Has Long Sleeves & Has Shorts & Has Jeans & Has Long Pants & mAP \\
346
+ \hline
347
+ PANDA \cite{panda} & 5 & 91.7 & 82.7 & 70.0 & 74.2 & 49.8 & 86.0 & 79.1 & 81.0 & 96.4 & 79.0 \\
348
+ Gkioxari \etal \cite{deepparts} & 16 & \bf{92.9} & \bf{90.1} & 77.7 & \bf{93.6} & 72.6 & \bf{93.2} & \bf{93.9} & \bf{92.1} & \bf{98.8} & \bf{89.5} \\
349
+ RCNN & 16 & 91.8 & 88.9 & 81.0 & 90.4 & 73.1 & 90.4 & 88.6 & 88.9 & 97.6 & 87.8 \\
350
+ R$^*$CNN & 16 & 92.8 & 88.9 & \bf{82.4} & 92.2 & \bf{74.8} & 91.2 & 92.9 & 89.4 & 97.9 & 89.2
351
+ \end{tabular}
352
+ }
353
+ \vspace{0.1em}
354
+ \caption{AP on the Berkeley Attributes of People test set. PANDA \cite{panda} uses CNNs trained for each poselet type. Gkioxari \etal \cite{deepparts} detect parts and train a CNN jointly on the whole and the parts. RCNN is our baseline approach based on FRCN. Neither RCNN nor R$^*$CNN uses any additional part annotations at training time. \cite{deepparts} and R$^*$CNN perform equally well, with the upside that R$^*$CNN does not need keypoint annotations during training.}
355
+ \tablelabel{tab:Attributes_test}
356
+ \end{table*}
357
+
358
+ Finally, we show that R$^*$CNN can also be used for the task of attribute classification. On the Berkeley Attributes of People dataset \cite{BourdevAttributesICCV11}, which consists of images of people and their attributes, \eg \emph{wears hat}, \emph{is male}, etc., we train R$^*$CNN as described above. The only difference is that our loss is no longer a log loss over softmax probabilities, but a cross entropy over independent logistic (sigmoid) outputs, because attribute prediction is a multi-label task. \tableref{tab:Attributes_test} reports the performance in AP of our approach, as well as other competing methods. \figref{fig:Attributes} shows results on the test set. The visualizations show that the secondary regions learn to focus on the parts that are specific to the attribute being considered. For example, for the \emph{Has Long Sleeves} class, the secondary regions focus on the arms and torso of the instance in question, while for \emph{Has Hat} the focus is on the face of the person.
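+
+ As a minimal sketch of this multi-label loss (illustrative attribute names and scores, not the released code):
+ \begin{verbatim}
+ import numpy as np
+
+ def multilabel_log_loss(logits, targets):
+     # Cross entropy over independent logistic (sigmoid) outputs,
+     # one output per attribute.
+     p = 1.0 / (1.0 + np.exp(-logits))
+     return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))
+
+ logits = np.array([2.0, -1.0, 0.5])   # e.g. 'is male', 'has hat', 'has glasses'
+ targets = np.array([1.0, 0.0, 1.0])   # ground-truth attribute labels
+ print(multilabel_log_loss(logits, targets))
+ \end{verbatim}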
359
+
360
+ \begin{figure}
361
+ \begin{center}
362
+ \includegraphics[width=1.0\linewidth]{figures/R-CNN_attributes.png}
363
+ \end{center}
364
+ \caption{Results on the Berkeley Attributes of People test set. We highlight the person in question with a \textcolor{red}{red} box, and the secondary region with a \textcolor{green}{green} box. The predicted attribute is overlaid. }
365
+ \figlabel{fig:Attributes}
366
+ \end{figure}
367
+
368
+
369
+
370
+
371
+
372
+ \section*{Conclusion}
373
+ We introduce a simple yet effective approach for action recognition. We adapt RCNN to use more than one region in order to make a prediction, based on the simple observation that contextual cues are significant when deciding what action a person is performing. We call our system \emph{R$^*$CNN}. In our setting, both features and models are learnt jointly, allowing for action-specific representations to emerge. R$^*$CNN outperforms all published approaches on two datasets. More interestingly, the auxiliary information selected by R$^*$CNN for prediction captures different contextual modes depending on the instance in question.
374
+
375
+ R$^*$CNN is not limited to action recognition. We show that R$^*$CNN can be used successfully for tasks such as attribute classification. Our visualizations show that the secondary regions capture the region relevant to the attribute considered.
376
+
377
+ \section*{Acknowledgments}
378
+ This work was supported by the Intel Visual Computing Center and the ONR SMARTS MURI N000140911051. The GPUs used in this research were generously donated by
379
+ the NVIDIA Corporation.
380
+
381
+
382
+
383
+ {\small
384
+ \bibliographystyle{ieee}
385
+ \bibliography{refs}
386
+ }
387
+
388
+ \end{document}
papers/1505/1505.04474.tex ADDED
@@ -0,0 +1,890 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{iccv}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{lib}
10
+ \usepackage{caption}
11
+ \usepackage{subcaption}
12
+
13
+ \usepackage{lipsum}
14
+ \usepackage{epigraph}
15
+ \usepackage{multirow}
16
+ \usepackage{booktabs}
17
+ \usepackage{lib}
18
+ \usepackage{tabularx}
19
+ \usepackage{colortbl}
20
+ \usepackage{epstopdf}
21
+ \usepackage{flushend}
22
+ \usepackage{tabularx}
23
+
24
+
25
+ \newcommand{\vertical}[1]{\rotatebox[origin=l]{90}{\parbox{2.5cm}{#1}}}
26
+ \newcommand{\instr}[0]{{instr}}
27
+ \newcommand{\object}[0]{{obj}}
28
+ \newcommand{\arbelaez}[0]{Arbel\'{a}ez\xspace}
29
+ \newcommand{\rgb}[0]{RGB\xspace}
30
+ \newcommand{\alexnet}[0]{\emph{alexnet} }
31
+ \newcommand{\regionAP}[0]{$AP^r$\xspace}
32
+ \newcommand{\vsrlAP}[0]{$AP^m$\xspace}
33
+ \newcommand{\boxAP}[0]{$AP^b$\xspace}
34
+ \newcommand{\vgg}[0]{{\small VGG}\xspace}
35
+ \newcommand{\rcnn}[0]{{\small R-CNN}\xspace}
36
+ \newcommand{\fastrcnn}[0]{{\small Fast R-CNN}\xspace}
37
+ \newcommand{\cnn}[0]{{\small CNN}\xspace}
38
+ \newcommand{\vcoco}[0]{{\small V-COCO}\xspace}
39
+ \newcommand{\vb}[1]{{\small \texttt{#1}}\xspace}
40
+ \newcommand{\vsrl}[0]{{\small VSRL}\xspace}
41
+ \newcommand{\train}[0]{\textit{train}\xspace}
42
+ \newcommand{\test}[0]{\textit{test}\xspace}
43
+ \newcommand{\val}[0]{\textit{val}\xspace}
44
+ \newcommand{\coco}[0]{{\small COCO}\xspace}
45
+ \newcommand{\amt}[0]{{\small AMT}\xspace}
46
+ \newcommand{\insertA}[2]{\IfFileExists{#2}{\includegraphics[width=#1\textwidth]{#2}}{\includegraphics[width=#1\textwidth]{figures/blank.jpg}}}
47
+ \newcommand{\insertB}[2]{\IfFileExists{#2}{\includegraphics[height=#1\textwidth]{#2}}{\includegraphics[height=#1\textwidth]{figures/blank.jpg}}}
48
+
49
+
50
+
51
+
52
+
53
+
54
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
55
+
56
+ \iccvfinalcopy
57
+
58
+ \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
59
+
60
+ \begin{document}
61
+
62
+ \title{Visual Semantic Role Labeling}
63
+
64
+ \author{Saurabh Gupta \\ UC Berkeley \\ {\tt\small sgupta@eecs.berkeley.edu}
65
+ \and Jitendra Malik \\ UC Berkeley \\ {\tt\small malik@eecs.berkeley.edu} }
66
+
67
+ \maketitle
68
+
69
+ \begin{abstract} In this paper we introduce the problem of Visual Semantic Role
70
+ Labeling: given an image we want to detect people doing actions and localize
71
+ the objects of interaction. Classical approaches to action recognition either
72
+ study the task of action classification at the image or video clip level or at
73
+ best produce a bounding box around the person doing the action. We believe such
74
+ an output is inadequate and a complete understanding can only come when we are
75
+ able to associate objects in the scene to the different semantic roles of the
76
+ action. To enable progress towards this goal, we annotate a dataset of 16K
77
+ people instances in 10K images with actions they are doing and associate
78
+ objects in the scene with different semantic roles for each action. Finally, we
79
+ provide a set of baseline algorithms for this task and analyze error modes
80
+ providing directions for future work. \end{abstract}
81
+
82
+ \section{Introduction}
83
+ Current state of the art on action recognition consists of classifying a video
84
+ clip containing the action, or marking a bounding box around the approximate
85
+ location of the agent doing the action. Most current action recognition
86
+ datasets classify each person into doing one of $k$ different activities and
87
+ focus on coarse activities (like `playing baseball', `cooking', `gardening').
88
+ We argue that such a coarse understanding is incomplete and a complete visual
89
+ understanding of an activity can only come when we can reason about fine
90
+ grained actions constituting each such activity (like `hitting' the ball with
91
+ a bat, `chopping' onions with a knife, `mowing' the lawn with a lawn mower),
92
+ reason about people doing multiple such actions at the same time, and are able
93
+ to associate objects in the scene to the different semantic roles for each of these
94
+ actions.
95
+
96
+ \begin{figure}[t] \insertA{0.48}{figures/baseball.jpg} \caption{\small
97
+ \textbf{Visual Semantic Role Labeling}: We want to go beyond classifying the
98
+ action occurring in the image to being able to localize the agent, and the
99
+ objects in various semantic roles associated with the action.}
100
+ \figlabel{visual-srl} \end{figure}
101
+
102
+ \figref{visual-srl} shows our desired output. We want to go
103
+ beyond coarse activity labels such as `playing baseball', and be able to reason
104
+ about fine-grained actions such as `hitting' and detect the various
105
+ semantic roles for this action namely: the agent (pink box), the instrument
106
+ (blue box) and the object (orange box). Such an output can help us answer
107
+ various questions about the image. It tells us more about the current state of
108
+ the scene depicted in the image (association of objects in the image with each
109
+ other and with actions happening in the image), helps us better predict the
110
+ future (the ball will leave the image from the left edge of the image, the
111
+ baseball bat will swing clockwise), helps us learn commonsense about the
112
+ world (naive physics: a bat hitting a ball imparts momentum to it), and in turn
113
+ helps us understand `activities' (a baseball game is an outdoor sport
114
+ played in a field and involves hitting a round ball with a long cylindrical
115
+ bat).
116
+
117
+ We call this problem `Visual Semantic Role Labeling'. Semantic Role Labeling
118
+ in a Natural Language Processing context refers to labeling words in a sentence
119
+ with different semantic roles for the verb in the sentence
120
+ \cite{carreras2005introduction}. NLP research on this and related areas has
121
+ resulted in FrameNet~\cite{baker1998berkeley} and
122
+ VerbNet~\cite{KipperSchuler2006} which catalogue verbs and their semantic
123
+ roles. What is missing from such catalogues is visual grounding. Our work here
124
+ strives to achieve this grounding of verbs and their various semantic roles to
125
+ images. The set of actions we study along with the various roles are listed in
126
+ \tableref{list}.
127
+
128
+ \begin{figure*}
129
+ \centering
130
+ \renewcommand{\arraystretch}{0.5}
131
+ \setlength{\tabcolsep}{1.0pt}
132
+ \insertB{0.15}{figures/people/2_1752195.jpg}
133
+ \insertB{0.15}{figures/people/2_2151556.jpg}
134
+ \insertB{0.15}{figures/people/2_2154597.jpg}
135
+ \insertB{0.15}{figures/people/3_424383.jpg}
136
+ \insertB{0.15}{figures/people/2_2157357.jpg}
137
+ \insertB{0.15}{figures/people/2_2151818.jpg}
138
+ \insertB{0.15}{figures/people/3_191596.jpg}
139
+ \insertB{0.15}{figures/people/3_478163.jpg}
140
+ \insertB{0.15}{figures/people/2_2151213.jpg}
141
+ \insertB{0.15}{figures/people/2_2155536.jpg}
142
+ \insertB{0.15}{figures/people/3_192398.jpg}
143
+ \insertB{0.15}{figures/people/2_485273.jpg}
144
+ \insertB{0.15}{figures/people/2_483755.jpg}
145
+ \insertB{0.15}{figures/people/3_453288.jpg}
146
+ \insertB{0.15}{figures/people/3_492607.jpg}
147
+ \insertB{0.15}{figures/people/3_475491.jpg}
148
+ \insertB{0.15}{figures/people/3_465597.jpg}
149
+ \insertB{0.15}{figures/people/2_2151551.jpg}
150
+ \insertB{0.15}{figures/people/2_490595.jpg}
151
+ \insertB{0.15}{figures/people/2_489031.jpg}
152
+ \caption{\textbf{Visualizations of the images in the dataset}: We show examples
153
+ of annotations in the dataset. We show the agent in the blue box and objects in
154
+ various semantic roles in red. People can be doing multiple actions at the same
155
+ time. First row shows: person skateboarding, person sitting and riding on horse, person
156
+ sitting on chair and riding on elephant, person drinking from a glass and
157
+ sitting on a chair, person sitting on a chair eating a doughnut.}
158
+ \figlabel{dataset-vis}
159
+ \end{figure*}
160
+
161
+
162
+ Visual semantic role labeling is a new task which has not been studied before;
163
+ thus we start by annotating a dataset of 16K people instances with action
164
+ labels from 26 different action classes and associating objects in various
165
+ semantic roles for each person labeled with a particular action. We do this
166
+ annotation on the challenging Microsoft COCO (Common Objects in COntext)
167
+ dataset \cite{mscoco}, which contains a wide variety of objects in complex and
168
+ cluttered scenes. \figref{dataset-vis} shows some examples from our dataset.
169
+ Unlike most existing datasets which either have objects or actions labeled, as
170
+ a result of our annotation effort, \coco now has detailed action labels in
171
+ addition to the detailed object instance segmentations, and we believe will
172
+ form an interesting test bed for studying related problems. We also provide
173
+ baseline algorithms for addressing this task using CNN based object detectors,
174
+ and provide a discussion on future directions of research.
175
+
176
+ \section{Related Work}
177
+ \seclabel{related}
178
+ There has been a lot of research in computer vision to understand activities and
179
+ actions happening in images and videos. Here we review popular action analysis
180
+ datasets, the exact tasks people have studied, and give a basic overview of techniques.
181
+
182
+ PASCAL VOC \cite{PASCAL-ijcv} is one of the popular datasets for static action
183
+ classification. The primary task here is to classify bounding boxes around people
184
+ instances into 9 categories. This dataset was used in the VOC challenge.
185
+ Recently, Gkioxari \etal \cite{poseactionrcnn} extended the dataset for action
186
+ detection where the task is to detect and localize people doing actions. The MPII
187
+ Human Pose dataset is a more recent and challenging dataset for studying actions
188
+ \cite{andriluka14cvpr}. The MPII Human Pose dataset contains 23K images
189
+ containing over 40K people with 410 different human activities. These images
190
+ come from YouTube videos and in addition to the activity label also have
191
+ extensive body part labels. The PASCAL dataset has enabled tremendous progress
192
+ in the field of action classification, and the MPII human pose dataset has
193
+ enabled studying human pose in a very principled manner, but both these
194
+ datasets do not have annotations for the object of interaction which is the
195
+ focus of our work here.
196
+
197
+ Gupta \etal \cite{gupta2009understanding,gupta2009observing}, Yao \etal
198
+ \cite{yao2011human,yao2012recognizing,yao2011classifying}, Prest \etal
199
+ \cite{prest2012weakly} collect and analyze the Sports, People Playing Musical
200
+ Instruments (PPMI) and the Trumpets, Bikes and Hats (TBH) datasets but the
201
+ focus of these works is on modeling human pose and object context. While Gupta
202
+ \etal and Yao \etal study the problem in supervised contexts, Prest \etal also
203
+ propose weakly supervised methods. While these methods significantly boost
204
+ performance over not using human object context and produce localization for
205
+ the object of interaction as learned by their model, they do not quantify
206
+ performance at the joint task of detecting people, classifying what they are
207
+ doing and localizing the object of interaction. Our proposed dataset will be a
208
+ natural test bed for making such quantitative measurements.
209
+
210
+ There are also a large number of video datasets for activity analysis. Some of
211
+ these study the task of full video action classification
212
+ \cite{schuldt2004recognizing,ActionsAsSpaceTimeShapes_pami07,laptev:08,marszalek09,KarpathyCVPR14,rohrbach2012database},
213
+ while some \cite{yuan2009discriminative,Jhuang:ICCV:2013,rodriguez2008action}
214
+ also study the task of detecting the agent doing the action. In particular the
215
+ J-HMDB \cite{Jhuang:ICCV:2013} and UCF Sports \cite{rodriguez2008action}
216
+ datasets are popular test beds for algorithms that study this
217
+ task \cite{actiontubes}. More recently, \cite{rohrbach15cvpr} proposed a new
218
+ video dataset where annotations come from DVS scripts. Given annotations can be
219
+ generated automatically, this will be a large dataset, but is inadequate for us
220
+ as it does not have the visual grounding which is our interest here.
221
+
222
+ There have been a number of recent papers which generate captions for images
223
+ \cite{fangCVPR15,karpathy2014deep,kelvin2015show,ryan2014multimodal,mao2014explain,vinyals2014show,kiros2014unifying,donahue2014long,chen2014learning}.
224
+ Some of them also produce localization for various words that occur in the
225
+ sentence \cite{fangCVPR15,kelvin2015show}.
226
+ While this may be sufficient to generate a caption for the image, the
227
+ understanding is often limited only to the most salient action happening in the
228
+ image (based on biases of the captions that were available for training).
229
+ A caption like `A baseball match' is completely correct for the image in
230
+ \figref{visual-srl}, but it is far from the detailed understanding we are striving
231
+ for here: an explicit action label for each person in the image along with
232
+ accurate localization for all objects in various semantic roles for the action.
233
+
234
+ \section{V-COCO Dataset} In this section, we describe the Verbs in COCO
235
+ (\vcoco) dataset. Our annotation process consisted of the following stages.
236
+ Example images from the dataset are shown in \figref{dataset-vis}.
237
+
238
+ \begin{figure}[t]
239
+ \insertA{0.48}{figures/stage2.png}
240
+ \insertA{0.48}{figures/stage3.png}
241
+ \caption{\small
242
+ \textbf{Interface for collecting annotations}: The top row shows the interface for
243
+ annotating the action the person is doing, and the bottom row shows the
244
+ interface for associating the roles.}
245
+ \figlabel{interface} \end{figure}
246
+
247
+ We build on the \coco dataset for the following reasons: a) \coco is the most
248
+ challenging object detection dataset, b) it has complete annotations for 160K
249
+ images with 80 different object classes along with segmentation mask for all
250
+ objects and five human written captions for each image, c) \vcoco will get
251
+ richer if \coco gets richer \eg with additional annotations like human pose and
252
+ key points.
253
+
254
+ \textbf{Identifying verbs}
255
+ The first step is to identify a set of verbs to study. We do this in a data
256
+ driven manner. We use the captions in the \coco dataset and obtain a list of
257
+ words for which the subject is a person (we use the Stanford dependency parser
258
+ to determine the subject associated with each verb and check this subject
259
+ against a list of 62 nouns and pronouns to determine if the subject is a
260
+ person). We also obtain counts for each of these verbs (actions). Based on this
261
+ list, we manually select a set of 30 basic verbs (actions). We picked these
262
+ words with the consideration of whether they would be in the vocabulary of a
263
+ 5-year-old child. The list of verbs is tabulated in \tableref{list}. Based on
264
+ visual inspection of images with these action words, we dropped \texttt{pick},
265
+ \texttt{place}, \texttt{give}, \texttt{take} because they were ambiguous from a
266
+ single image.
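As a rough illustration of this mining step (not the authors' exact pipeline, which uses the Stanford dependency parser and a fixed list of 62 nouns and pronouns), one could count person-subject verbs in the captions along these lines; spaCy and the short person-word set below are stand-ins:

```python
from collections import Counter
import spacy  # stand-in for the Stanford dependency parser used in the paper

nlp = spacy.load("en_core_web_sm")
# Illustrative subset; the paper checks subjects against 62 nouns and pronouns.
PERSON_WORDS = {"person", "man", "woman", "boy", "girl", "people", "he", "she"}

def count_person_verbs(captions):
    counts = Counter()
    for doc in nlp.pipe(captions):
        for tok in doc:
            if tok.pos_ != "VERB":
                continue
            subjects = [c for c in tok.children if c.dep_ == "nsubj"]
            if any(s.text.lower() in PERSON_WORDS for s in subjects):
                counts[tok.lemma_] += 1  # tally the verb (action)
    return counts

# e.g. count_person_verbs(["A man is riding a horse."]) tallies the verb "ride".
```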
267
+
268
+ \textbf{Identifying interesting images}
269
+ With this list of verbs, the next step is to identify a set of images
270
+ containing people doing these actions. We do this independently for each verb.
271
+ We compute two scores for each image: a) does this image have a person
272
+ associated with the target verb (or its synonyms) (based on the captions for
273
+ the image), b) does this image contain objects associated with the target verb
274
+ (using the \coco object instance annotations, the list of associated objects
275
+ was picked manually). Query expansion using the set of objects associated with
276
+ the target verb was necessary to obtain enough examples.
277
+
278
+ We sum these scores to obtain a ranked list of images for each verb, and
279
+ consider the top 8000 images independently for each verb (in case the above
280
+ two scores do not yield enough images we take additional images that contain
281
+ people). We then use \amt to obtain annotations for people in these 8000
282
+ images (details on the mechanical turk annotation procedure are provided in
283
+ \secref{amt}). We thus obtain a set of positive instances for each verb. The
284
+ next step is to come up with a common set of images across all action
285
+ categories. We do this by solving an integer program. This step of obtaining
286
+ annotations separately for each action and then merging positive instances into
287
+ a common pool of images was important to get enough examples for each action
288
+ class.
289
+
290
+ \textbf{Salient People} Given that this task requires detailed reasoning (consider
291
+ localizing the spoon and fork being used to \vb{eat}), instead of working with
292
+ all people in the image, we work with people instances which have sufficient
293
+ pixel area in the image. In addition, we also discard all people with pixel
294
+ area less than half the pixel area of the largest person in the image. This
295
+ helps with images which have a lot of by-standers (who may not be doing
296
+ anything interesting). Doing this speeds up the annotation process
297
+ significantly, and allows us to use the annotation effort more effectively.
298
+ Note that we can still study the \vsrl problem in a detection setting. Given
299
+ the complete annotations in \coco, even if we don't know the action that a
300
+ non-salient person is doing, we still know its location and appropriately adjust
301
+ the training and evaluation procedures to take this into account.
302
+
303
+ \textbf{Annotating salient people with all action labels}
304
+ Given this set of images we annotate all `salient people' in these images with
305
+ a binary label for each action category. We again use \amt for obtaining these
306
+ annotations but obtain annotations from 5 different workers for each person for
307
+ each action.
308
+
309
+ \textbf{Annotating object in various roles}
310
+ Finally, we obtain annotations for objects in various roles for each action. We
311
+ first enumerate the various roles for each verb, and identify object
312
+ categories that are appropriate for these roles (see \tableref{list}). For
313
+ each positively annotated person (with 3 or more positive votes from the
314
+ previous stage) we obtain YES/NO annotation for questions of the form:
315
+ `{\small Is the \textit{person} in the blue box \textit{holding} the
316
+ \textit{banana} in the red box?}'
317
+
318
+ \textbf{Splits}
319
+ To minimize any differences in statistics between the different splits of the
320
+ data, we combined 40K images from the \coco training set with the \coco
321
+ validation set and obtained annotations on this joint set. After the
322
+ annotation, we construct 3 splits for the \vcoco dataset: the images coming
323
+ from the validation set in \coco were put into the \vcoco \textit{test} set,
324
+ the rest of the images were split into \vcoco \textit{train} and \textit{val}
325
+ sets.
326
+
327
+ \subsection{Annotation Procedure} \seclabel{amt} In this section, we describe
328
+ the Amazon Mechanical Turk (\amt) \cite{AMT} annotation procedure that we use
329
+ during the various stages of dataset annotation.
330
+
331
+ We follow insights from Zhou \etal \cite{zhou2014places} and use their
332
+ annotation interface. We frame each annotation task as a binary {\small
333
+ {YES}/{NO}} task. This has the following advantages: the user interface for
334
+ such a task is simple, it is easy to insert test questions, it is easy to
335
+ assess consensus, and such a task ends up getting done faster on \amt.
336
+
337
+ \textbf{User Interface} We use the interface from Zhou \etal
338
+ \cite{zhou2014places}. The interface is shown in \figref{interface}. We show
339
+ two images: the original image on the left, and the image highlighting the
340
+ person being annotated on the right. All images were marked with a default NO
341
+ answer and the turker flips the answer using a key press. We inserted test
342
+ questions to prevent spamming (see below) and filtered out inaccurate turkers.
343
+ We composed HITs (Human Intelligence Tasks) with 450 questions (including 50
344
+ test questions) of the form: `{\small Is the person highlighted in the blue box
345
+ \textit{holding} something?}' In a given HIT, the action was kept fixed. On
346
+ average turkers spent 15 minutes per HIT, although this varied from action to
347
+ action. For annotating the roles, an additional box was highlighted
348
+ corresponding to the object, and the question was changed appropriately.
349
+
350
+ \textbf{Test Questions} We had to insert test questions to ensure
351
+ reliability of turker annotations. We inserted two sets of test questions. The
352
+ first set was used to determine accuracy at the time of submission and this
353
+ prevented turkers from submitting answers if their accuracy was too low (below
354
+ 90\%). The second set of questions were used to guard from turkers who hacked
355
+ the client side testing to submit incorrect results (surprisingly, we did find
356
+ turkers who did this). HITs for which the average accuracy on the test set was
357
+ lower than 95\% were relaunched. The set of test questions was bootstrapped
358
+ from the annotation process: we started with a small set of hand-labeled test
359
+ questions, and enriched that set based on annotations obtained on a small set
360
+ of images. We manually inspected the annotations and augmented the test set to
361
+ penalize common mistakes (skiing \vs snowboarding) and excluded ambiguous
362
+ examples.
363
+
364
+ \subsection{Dataset Statistics}
365
+
366
+ \renewcommand{\arraystretch}{1.3}
367
+ \begin{table}
368
+ \caption{\textbf{List of actions in \vcoco.} We list the different actions, number of
369
+ semantic roles associated with the action, number of examples for each action,
370
+ the different roles associated with each action along with their counts and the
371
+ different objects that can take each role. Annotations for cells marked with * are
372
+ currently underway.} \vspace{2mm}
373
+ \tablelabel{list}
374
+ \scalebox{0.6}{
375
+ \begin{tabular}{>{\raggedright}p{0.08\textwidth}crrrp{0.35\textwidth}r} \toprule
376
+ Action & Roles & \# & Role & \# & Objects in role & \\ \midrule
377
+ carry & 1 & 970 & \object & * & & \\
378
+ catch & 1 & 559 & \object & 457 & sports ball, frisbee, & \\
379
+ cut & 2 & 569 & \instr & 477 & scissors, fork, knife, & \\
380
+ & & & \object & * & & \\
381
+ drink & 1 & 215 & \instr & 203 & wine glass, bottle, cup, bowl, & \\
382
+ eat & 2 & 1198 & \object & 737 & banana, apple, sandwich, orange, carrot, broccoli, hot dog, pizza, cake, donut, & \\
383
+ & & & \instr & * & & \\
384
+ hit & 2 & 716 & \instr & 657 & tennis racket, baseball bat, & \\
385
+ & & & \object & 454 & sports ball & \\
386
+ hold & 1 & 7609 & \object & * & & \\
387
+ jump & 1 & 1335 & \instr & 891 & snowboard, skis, skateboard, surfboard, & \\
388
+ kick & 1 & 322 & \object & 297 & sports ball, & \\
389
+ lay & 1 & 858 & \instr & 513 & bench, dining table, toilet, bed, couch, chair, & \\
390
+ look & 1 & 7172 & \object & * & & \\
391
+ point & 1 & 69 & \object & * & & \\
392
+ read & 1 & 227 & \object & 172 & book, & \\
393
+ ride & 1 & 1044 & \instr & 950 & bicycle, motorcycle, bus, truck, boat, train, airplane, car, horse, elephant, & \\
394
+ run & 0 & 1309 & - & - & & \\
395
+ sit & 1 & 3905 & \instr & 2161 & bicycle, motorcycle, horse, elephant, bench, chair, couch, bed, toilet, dining table, suitcase, handbag, backpack, & \\
396
+ skateboard & 1 & 906 & \instr & 869 & skateboard, & \\
397
+ ski & 1 & 924 & \instr & 797 & skis, & \\
398
+ smile & 0 & 2960 & - & - & & \\
399
+ snowboard & 1 & 665 & \instr & 628 & snowboard, & \\
400
+ stand & 0 & 8716 & - & - & & \\
401
+ surf & 1 & 984 & \instr & 949 & surfboard, & \\
402
+ talk on phone & 1 & 639 & \instr & 538 & cell phone, & \\
403
+ throw & 1 & 544 & \object & 475 & sports ball, frisbee, & \\
404
+ walk & 0 & 1253 & - & - & & \\
405
+ work on computer & 1 & 868 & \instr & 773 & laptop, & \\
406
+ \bottomrule
407
+ \end{tabular}}
408
+ \end{table}
409
+
410
+ \begin{table}
411
+ \setlength{\tabcolsep}{1.2pt}
412
+ \begin{center}
413
+ \caption{List and counts of actions that co-occur in \vcoco.}
414
+ \tablelabel{co-occur-stats}
415
+ \scriptsize{
416
+ \begin{tabular}{>{\textbf}lp{0.13\textwidth}lp{0.13\textwidth}lp{0.13\textwidth}} \toprule
417
+ 597 & look, stand & & & 411 & carry, hold, stand, walk \\
418
+ 340 & hold, stand, ski & & & 329 & ride, sit \\
419
+ 324 & look, sit, work on computer & 302 & look, jump, skateboard & 296 & hold, ride, sit \\
420
+ 280 & look, surf & 269 & hold, stand & 269 & hold, look, stand \\
421
+ 259 & hold, smile, stand & 253 & stand, walk & 253 & hold, sit, eat \\
422
+ 238 & smile, stand & 230 & look, run, stand & 209 & hold, look, stand, hit \\
423
+ & & 193 & hold, smile, stand, ski & 189 & look, sit \\
424
+ 189 & look, run, stand, kick & 183 & smile, sit & & \\
425
+ 160 & look, stand, surf & 159 & hold, look, sit, eat & 152 & hold, stand, talk on phone \\
426
+ 150 & stand, snowboard & 140 & hold, look, smile, stand, cut & & \\
427
+ 129 & hold, stand, throw & 128 & look, stand, jump, skateboard & 127 & hold, smile, sit, eat \\
428
+ 124 & hold, look, stand, cut & 121 & hold, look, sit, work on computer & 117 & hold, stand, hit \\
429
+ 115 & look, jump, snowboard & 115 & look, stand, skateboard & 113 & stand, surf \\
430
+ 107 & hold, look, run, stand, hit & 105 & look, stand, throw & 104 & hold, look, stand, ski \\
431
+ \bottomrule
432
+ \end{tabular}}
433
+ \end{center}
434
+ \end{table}
435
+
436
+
437
+
438
+
439
+
440
+
441
+ In this section we list statistics on the dataset. The \vcoco
442
+ dataset contains a total of 10346 images containing 16199 people
443
+ instances. Each annotated person has binary labels for 26 different actions.
444
+ The set of actions and the semantic roles associated with each action are
445
+ listed in \tableref{list}. \tableref{list} also lists the number of positive
446
+ examples for each action, the set of object categories for various roles for each
447
+ action, and the number of instances with annotations for the object of interaction.
448
+
449
+ We split the \vcoco dataset into a \train, \val and \test split. The \train and
450
+ \val splits come from the \coco \train set while the \test set comes from the
451
+ \val set. Number of images and annotated people instances in each of these
452
+ splits are tabulated in \tableref{stat-splits}.
453
+
454
+ Note that all images in \vcoco inherit all the annotations from the \coco
455
+ dataset \cite{mscoco}, including bounding boxes for non-salient people, crowd regions,
456
+ allowing us to study all tasks in a detection setting. Moreover, each image
457
+ also has annotations for 80 object categories which can be used to study the
458
+ role of context in such tasks.
459
+
460
+ \figref{hists} (left) shows the distribution of the number of people instances
461
+ in each image. Unlike past datasets which mostly have only one annotated person
462
+ per image, the \vcoco dataset has a large number of images with more than one
463
+ person. On average there are 1.57 people annotated with action labels per
464
+ image. There are about 2000 images with two, and 800 images with three people
465
+ annotated.
466
+
467
+ \figref{hists} (right) shows a distribution of the number of different actions
468
+ a person is doing in \vcoco. Unlike past datasets where each person can only be
469
+ doing one action, people in \vcoco do on average 2.87 actions at the same time.
470
+ \tableref{co-occur-stats} lists the set of actions which co-occur more than 100
471
+ times along with their counts. We also analyse human agreement for different
472
+ actions to quantify the ambiguity in labeling actions from a single image by
473
+ benchmarking annotations from one turker with annotations from the other
474
+ turkers for each HIT for each action, producing points on a precision-recall
475
+ plot. \figref{human-agreement} presents these plots for the \vb{walk}, \vb{run}
476
+ and \vb{surf} actions. We can see that there is high human agreement for
477
+ actions like \vb{surf}, whereas there is lower human agreement for verbs like
478
+ \vb{walk} and \vb{run}, as expected.
479
+
480
+ \begin{figure}
481
+ \centering
482
+ \insertA{0.23}{figures/hist_people_per_image.pdf}
483
+ \insertA{0.23}{figures/hist_actions_per_person.pdf}
484
+ \caption{\textbf{Statistics on \vcoco}: The bar plot on left shows the
485
+ distribution of the number of annotated people per image. The bar plot on right
486
+ shows the distribution of the number of actions a person is doing. Note that
487
+ X-axis is on $\log$ scale.}
488
+ \figlabel{hists}
489
+ \end{figure}
490
+
491
+ \begin{figure}
492
+ \insertA{0.1492}{figures/human_agreement/walk.pdf}
493
+ \insertA{0.1492}{figures/human_agreement/run.pdf}
494
+ \insertA{0.1492}{figures/human_agreement/surf.pdf}
495
+ \caption{Human Agreement}
496
+ \figlabel{human-agreement}
497
+ \end{figure}
498
+
499
+
500
+
501
+ \renewcommand{\arraystretch}{1.2}
502
+ \begin{table}
503
+ \caption{Statistics of various splits of \vcoco.}
504
+ \tablelabel{stat-splits}
505
+ \begin{center}
506
+ \scalebox{0.8}{
507
+ \begin{tabular}{lcccc}
508
+ \toprule
509
+ & \train & \val & \test & \textit{all} \\ \midrule
510
+ Number of Images & 2533 & 2867 & 4946 & 10346 \\
511
+ Number of People Instances & 3932 & 4499 & 7768 & 16199 \\ \bottomrule
512
+ \end{tabular}}
513
+ \end{center}
514
+ \end{table}
515
+
516
+ \subsection{Tasks and Metrics}
517
+ These annotations enable us to study a variety of new fine grained tasks about
518
+ action understanding which have not been studied before. We describe these
519
+ tasks below.
520
+
521
+ \textbf{Agent Detection} The agent detection task is to detect instances of
522
+ people engaging in a particular action. We use the standard average precision
523
+ metric as used for PASCAL VOC object detection \cite{PASCAL-ijcv} to measure
524
+ performance at this task - people labeled positively with the action category
525
+ are treated as positive, un-annotated non-salient people are marked as difficult.
526
+
527
+ \textbf{Role Detection} The role detection task is to detect the agent and
528
+ the objects in the various roles for the action. An algorithm produces as
529
+ output bounding boxes for the locations of the agent and each semantic role. A
530
+ detection is correct if the location of the agent and each role is correct
531
+ (correctness is measured using bounding box overlap as is standard). As an
532
+ example, consider the role detection task for the action class `hold'. An
533
+ algorithm will have to produce as output a bounding box for the person
534
+ `holding', and the object being `held', and both these boxes must be correct
535
+ for this detection to be correct. We follow the same precision recall
536
+ philosophy and use average precision as the metric.
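A small sketch of the correctness test behind this metric, using the standard 0.5 intersection-over-union threshold (illustrative, not the official evaluation code):

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def role_detection_correct(pred_agent, pred_roles, gt_agent, gt_roles, thresh=0.5):
    # A role detection counts as correct only if the agent box AND every role
    # box overlap their ground-truth counterparts by at least `thresh`.
    if iou(pred_agent, gt_agent) < thresh:
        return False
    return all(iou(p, g) >= thresh for p, g in zip(pred_roles, gt_roles))
```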
537
+ \section{Methods}
538
+ \seclabel{method}
539
+ In this section, we describe the baseline approaches we investigated for
540
+ studying this task. As a first step, we train object detectors for the 80
541
+ different classes in the \coco dataset. We use \rcnn \cite{girshickCVPR14} to
542
+ train these detectors and use the 16-layer CNN from Simonyan and Zisserman
543
+ \cite{simonyan2014very} (we denote this as \vgg). This \cnn has been shown to be
544
+ very effective at a variety of tasks like object detection
545
+ \cite{girshickCVPR14}, image captioning \cite{fangCVPR15}, action
546
+ classification \cite{pascal_leaderboard}. We finetune this detector using the
547
+ fast version of R-CNN \cite{fastrcnn} and train on 77K images from the \coco
548
+ train split (we hold out the 5K \vcoco \train and \val images). We use the
549
+ precomputed MCG bounding boxes from \cite{pont2015multiscale}.
550
+
551
+ \paragraph{Agent detection model} Our model for agent detection starts by
552
+ detecting people, and then classifies the detected people into different action
553
+ categories. We train this classification model using MCG bounding boxes which
554
+ have an intersection over union of more than 0.5 with the ground truth bounding
555
+ box for the person. Since each person can be doing multiple actions at the same
556
+ time, we frame this as a multi-label classification problem, and finetune the
557
+ \vgg representation for this task. We denote this model as $A$.
558
+
559
+ At test time, each person detection (after non-maximum suppression) is scored
560
+ with classifiers for different actions, to obtain a probability for each
561
+ action. These action probabilities are multiplied with the probability from the
562
+ person detector to obtain the final score for each action class.
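A minimal sketch of this scoring step; treating the multi-label action classifier outputs as independent sigmoid probabilities is our assumption, and the names are illustrative:

```python
import numpy as np

def score_agents(person_probs, action_logits):
    # person_probs: (N,) detection probabilities for N post-NMS person boxes.
    # action_logits: (N, K) raw multi-label action scores for the same boxes.
    action_probs = 1.0 / (1.0 + np.exp(-action_logits))   # independent sigmoids
    return person_probs[:, None] * action_probs           # (N, K) agent scores
```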
563
+
564
+ \paragraph{Regression to bounding box for the role} Our first attempt to
565
+ localize the object in semantic roles associated with an action involves
566
+ training a regression model to regress to the location of the semantic role.
567
+ This regression is done in the coordinate frame of the detected agent (detected
568
+ using model $A$ as described above). We use the following 4 regression targets
569
+ \cite{girshickCVPR14}. $(\bar{x}_{t}, \bar{y}_{t})$ denotes the center of the
570
+ target box $t$, $(\bar{x}_o, \bar{y}_{o})$ denotes the center of the detected
571
+ person box $o$, and $(w_{t}, h_{t})$, $(w_o, h_o)$ are the width and height of
572
+ the target and person box.
573
+ \begin{eqnarray}
574
+ \delta(t,o) = \left(\frac{\bar{x}_{t} - \bar{x}_{o}}{w_{o}}, \frac{\bar{y}_{t} -
575
+ \bar{y}_{o}}{h_{o}},
576
+ \log\left(\frac{{w}_{t}}{w_{o}}\right), \log\left(\frac{{h}_{t}}{h_{o}}\right) \right)
577
+ \eqlabel{delta}
578
+ \end{eqnarray}
579
+ We denote this model as $B$.
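For concreteness, a small sketch of how the four targets of \eqref{delta} can be computed from corner-format boxes (illustrative code, not the training implementation):

```python
import numpy as np

def regression_targets(person_box, role_box):
    # Boxes as (x1, y1, x2, y2). Returns the 4-vector delta(t, o): center
    # offsets normalized by the person box size, and log width/height ratios.
    pw, ph = person_box[2] - person_box[0], person_box[3] - person_box[1]
    tw, th = role_box[2] - role_box[0], role_box[3] - role_box[1]
    px, py = person_box[0] + 0.5 * pw, person_box[1] + 0.5 * ph
    tx, ty = role_box[0] + 0.5 * tw, role_box[1] + 0.5 * th
    return np.array([(tx - px) / pw, (ty - py) / ph,
                     np.log(tw / pw), np.log(th / ph)])
```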
580
+
581
+ \paragraph{Using Object Detectors} Our second method for localizing these
582
+ objects uses object detectors for the categories that can be a part of the
583
+ semantic role as described in \tableref{list}. We start with the detected agent
584
+ (using model $A$ above) and for each detected agent attach the highest scoring
585
+ box according to the following score function:
586
+ \begin{eqnarray}
587
+ P_D\left(\delta(t_{c},o)\right) \times sc_{c}(t_{c})
588
+ \end{eqnarray}
589
+ where $o$ refers to the box of the detected agent, box $t_c$ comes from all
590
+ detection boxes for the relevant object categories $c \in \mathcal{C}$ for that
591
+ action class, and $sc_c(t_c)$ refers to the detection probability for object
592
+ category $c$ for box $t_{c}$. $P_D$ is the probability distribution of
593
+ deformations $\delta$ computed from the training set, using the annotated agent
594
+ and role boxes. We model this probability distribution using a Gaussian. The
595
+ detection probabilities for different object categories $c \in \mathcal{C}$ are
596
+ already calibrated using the softmax in the \fastrcnn training \cite{fastrcnn}.
597
+ We denote this model as $C$.
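A sketch of this attachment rule, assuming the deformation prior is a single Gaussian fit to training-set agent/role pairs and reusing the regression-target helper from the previous sketch (the SciPy dependency and function names are our own choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_deformation_prior(training_deltas):
    # training_deltas: (num_pairs, 4) array of delta(t, o) vectors computed
    # from annotated agent/role box pairs for one action class.
    d = np.asarray(training_deltas)
    return multivariate_normal(mean=d.mean(axis=0), cov=np.cov(d, rowvar=False))

def attach_role_box(agent_box, candidate_boxes, candidate_probs, prior):
    # Score each candidate object detection by P_D(delta(t, o)) * sc_c(t)
    # and keep the highest-scoring box for this agent.
    scores = [prior.pdf(regression_targets(agent_box, t)) * p
              for t, p in zip(candidate_boxes, candidate_probs)]
    best = int(np.argmax(scores))
    return candidate_boxes[best], scores[best]
```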
598
+ \section{Experiments}
599
+ \seclabel{exp}
600
+ \renewcommand{\arraystretch}{1.3}
601
+ \begin{table}
602
+ \centering
603
+ \caption{Performance on actions in \vcoco. We report the recall for MCG
604
+ candidates for objects that are part of different semantic roles for each
605
+ action, AP for agent detection and role detection for 4 baselines using VGG CNN
606
+ (with and without finetuning for this task). See \secref{exp} for more
607
+ details.}
608
+ \vspace{2mm}
609
+ \tablelabel{ap}
610
+ \scalebox{0.6}{
611
+ \begin{tabular}{>{\raggedleft}p{0.08\textwidth}crrrrrrrrrr} \toprule
612
+ Action & Role & \multicolumn{3}{c}{MCG Recall} & & \multicolumn{5}{c}{Average Precision}\\
613
+ \cmidrule(r){3-5} \cmidrule(r){7-11}
614
+ & & & & & & $A$ & $B_0$ & $B$ & $C_0$ & $C$ & \\
615
+ & & mean & R[0.5] & R[0.7] & & agent & role & role & role & role & \\ \midrule
616
+ carry & \object* & & & & & 54.2 & & & & & \\
617
+ catch & \object & 73.7 & 91.6 & 67.2 & & 41.4 & 1.1 & 1.2 & 24.1 & 22.5 & \\
618
+ cut & \instr & 58.6 & 61.4 & 30.7 & & 44.5 & 1.6 & 2.3 & 4.6 & 3.9 & \\
619
+ & \object* & & & & & & & & & & \\
620
+ drink & \instr & 69.5 & 82.1 & 58.2 & & 25.1 & 0.3 & 0.7 & 3.1 & 6.4 & \\
621
+ eat & \object & 84.4 & 97.8 & 89.7 & & 70.2 & 8.0 & 11.0 & 37.0 & 46.2 & \\
622
+ & \instr* & & & & & & & & & & \\
623
+ hit & \instr & 72.0 & 88.1 & 60.5 & & 82.6 & 0.2 & 0.7 & 31.0 & 31.0 & \\
624
+ & \object & 62.0 & 73.8 & 53.3 & & & 11.3 & 11.8 & 41.3 & 44.6 & \\
625
+ hold & \object* & & & & & 73.4 & & & & & \\
626
+ jump & \instr & 76.0 & 88.7 & 68.7 & & 69.2 & 4.0 & 17.0 & 33.9 & 35.3 & \\
627
+ kick & \object & 82.7 & 100.0 & 94.4 & & 61.6 & 0.3 & 0.8 & 48.8 & 48.3 & \\
628
+ lay & \instr & 94.6 & 100.0 & 97.7 & & 39.3 & 19.9 & 28.0 & 32.8 & 34.3 & \\
629
+ look & \object* & & & & & 65.0 & & & & & \\
630
+ point & \object* & & & & & 1.4 & & & & & \\
631
+ read & \object & 83.5 & 96.2 & 82.7 & & 10.6 & 0.9 & 2.1 & 2.2 & 4.7 & \\
632
+ ride & \instr & 84.7 & 99.1 & 87.9 & & 45.4 & 1.2 & 9.7 & 12.5 & 27.6 & \\
633
+ run & - & & & & & 59.7 & & & & & \\
634
+ sit & \instr & 82.2 & 94.7 & 82.6 & & 64.1 & 20.0 & 22.3 & 24.3 & 29.2 & \\
635
+ skateboard & \instr & 73.2 & 87.3 & 63.7 & & 83.7 & 3.0 & 12.2 & 32.7 & 40.2 & \\
636
+ ski & \instr & 49.1 & 46.5 & 20.9 & & 81.9 & 4.9 & 5.5 & 5.9 & 8.2 & \\
637
+ smile & - & & & & & 61.9 & & & & & \\
638
+ snowboard & \instr & 67.8 & 75.1 & 51.7 & & 75.8 & 4.3 & 13.6 & 20.2 & 28.1 & \\
639
+ stand & - & & & & & 81.0 & & & & & \\
640
+ surf & \instr & 66.7 & 76.0 & 53.2 & & 94.0 & 1.5 & 4.8 & 28.1 & 27.3 & \\
641
+ talk on phone & \instr & 59.9 & 69.3 & 37.3 & & 46.6 & 1.1 & 0.6 & 5.8 & 5.8 & \\
642
+ throw & \object & 72.5 & 88.0 & 73.6 & & 50.1 & 0.4 & 0.5 & 25.7 & 25.4 & \\
643
+ walk & - & & & & & 56.3 & & & & & \\
644
+ work on computer & \object & 85.6 & 98.6 & 88.5 & & 56.9 & 1.4 & 4.9 & 29.8 & 32.3 & \\ \midrule
645
+ mean & & 73.6 & 85.0 & 66.4 & & 57.5 & 4.5 & 7.9 & 23.4 & 26.4 & \\ \bottomrule
646
+ \end{tabular}}
647
+ \end{table}
648
+
649
+ \begin{figure*}
650
+ \centering
651
+ \renewcommand{\arraystretch}{0.3}
652
+ \setlength{\tabcolsep}{1.0pt}
653
+ \begin{tabular}{p{0.02\textwidth}>{\raggedright\arraybackslash}p{0.98\textwidth}}
654
+ \vertical{Correct} &
655
+ \insertB{0.13}{figures/error_modes_ft//correct/cut_instr_tp_0041.jpg}
656
+ \insertB{0.13}{figures/error_modes_ft//correct/eat_obj_tp_0005.jpg}
657
+ \insertB{0.13}{figures/error_modes_ft//correct/hit_instr_tp_0013.jpg}
658
+ \insertB{0.13}{figures/error_modes_ft//correct/kick_obj_tp_0009.jpg}
659
+ \insertB{0.13}{figures/error_modes_ft//correct/read_obj_tp_0042.jpg}
660
+ \insertB{0.13}{figures/error_modes_ft//correct/ride_instr_tp_0025.jpg}
661
+ \insertB{0.13}{figures/error_modes_ft//correct/skateboard_instr_tp_0011.jpg}
662
+ \insertB{0.13}{figures/error_modes_ft//correct/snowboard_instr_tp_0007.jpg}
663
+ \insertB{0.13}{figures/error_modes_ft//correct/surf_instr_tp_0003.jpg}
664
+ \insertB{0.13}{figures/error_modes_ft//correct/read_obj_tp_0011.jpg}
665
+ \insertB{0.13}{figures/error_modes_ft//correct/talk_on_phone_instr_tp_0002.jpg}
666
+ \insertB{0.13}{figures/error_modes_ft//correct/throw_obj_tp_0002.jpg}
667
+ \\ \vertical{Incorrect Class} &
668
+ \insertB{0.13}{figures/error_modes_ft//bad_class/catch_obj_wrong_label_0033.jpg}
669
+ \insertB{0.13}{figures/error_modes_ft//bad_class/cut_instr_wrong_label_0036.jpg}
670
+ \insertB{0.13}{figures/error_modes_ft//bad_class/hit_instr_wrong_label_0080.jpg}
671
+ \insertB{0.13}{figures/error_modes_ft//bad_class/kick_obj_wrong_label_0040.jpg}
672
+ \insertB{0.13}{figures/error_modes_ft//bad_class/lay_instr_wrong_label_0040.jpg}
673
+ \insertB{0.13}{figures/error_modes_ft//bad_class/read_obj_wrong_label_0012.jpg}
674
+ \insertB{0.13}{figures/error_modes_ft//bad_class/ride_instr_wrong_label_0034.jpg}
675
+ \insertB{0.13}{figures/error_modes_ft//bad_class/snowboard_instr_wrong_label_0038.jpg}
676
+ \insertB{0.13}{figures/error_modes_ft//bad_class/surf_instr_wrong_label_0150.jpg}
677
+ \insertB{0.13}{figures/error_modes_ft//bad_class/talk_on_phone_instr_wrong_label_0015.jpg}
678
+ \insertB{0.13}{figures/error_modes_ft//bad_class/throw_obj_wrong_label_0008.jpg}
679
+ \insertB{0.13}{figures/error_modes_ft//bad_class/work_on_computer_instr_wrong_label_0067.jpg}
680
+ \\ \vertical{Mis-Grouping} &
681
+ \insertB{0.13}{figures/error_modes_ft//misgroup/cut_instr_mis_group_0023.jpg}
682
+ \insertB{0.13}{figures/error_modes_ft//misgroup/drink_instr_mis_group_0056.jpg}
683
+ \insertB{0.13}{figures/error_modes_ft//misgroup/eat_obj_mis_group_0031.jpg}
684
+ \insertB{0.13}{figures/error_modes_ft//misgroup/hit_instr_mis_group_0078.jpg}
685
+ \insertB{0.13}{figures/error_modes_ft//misgroup/jump_instr_mis_group_0178.jpg}
686
+ \insertB{0.13}{figures/error_modes_ft//misgroup/ride_instr_mis_group_0205.jpg}
687
+ \insertB{0.13}{figures/error_modes_ft//misgroup/sit_instr_mis_group_0097.jpg}
688
+ \insertB{0.13}{figures/error_modes_ft//misgroup/skateboard_instr_mis_group_0155.jpg}
689
+ \insertB{0.13}{figures/error_modes_ft//misgroup/ski_instr_mis_group_0051.jpg}
690
+ \insertB{0.13}{figures/error_modes_ft//misgroup/snowboard_instr_mis_group_0147.jpg}
691
+ \insertB{0.13}{figures/error_modes_ft//misgroup/work_on_computer_instr_mis_group_0015.jpg}
692
+ \insertB{0.13}{figures/error_modes_ft//misgroup/work_on_computer_instr_mis_group_0045.jpg}
693
+ \\ \vertical{Mis-Localization} &
694
+ \insertB{0.13}{figures/error_modes_ft//misloc/cut_instr_o_misloc_0019.jpg}
695
+ \insertB{0.13}{figures/error_modes_ft//misloc/drink_instr_o_misloc_0032.jpg}
696
+ \insertB{0.13}{figures/error_modes_ft//misloc/lay_instr_o_misloc_0120.jpg}
697
+ \insertB{0.13}{figures/error_modes_ft//misloc/ride_instr_o_misloc_0035.jpg}
698
+ \insertB{0.13}{figures/error_modes_ft//misloc/talk_on_phone_instr_o_misloc_0011.jpg}
699
+ \\ \vertical{Hallucination} &
700
+ \insertB{0.13}{figures/error_modes_ft//halucination/catch_obj_o_hall_0031.jpg}
701
+ \insertB{0.13}{figures/error_modes_ft//halucination/cut_instr_o_hall_0070.jpg}
702
+ \insertB{0.13}{figures/error_modes_ft//halucination/eat_obj_o_hall_0134.jpg}
703
+ \insertB{0.13}{figures/error_modes_ft//halucination/skateboard_instr_o_hall_0142.jpg}
704
+ \insertB{0.13}{figures/error_modes_ft//halucination/ski_instr_o_hall_0202.jpg}
705
+ \insertB{0.13}{figures/error_modes_ft//halucination/snowboard_instr_o_hall_0204.jpg}
706
+ \insertB{0.13}{figures/error_modes_ft//halucination/talk_on_phone_instr_o_hall_0039.jpg}
707
+ \\
708
+ \end{tabular}
709
+ \caption{Visualizations of detections from our best performing baseline
710
+ algorithm. We show the detected agent in the blue box and the detected object
711
+ in the semantic role in the red box and indicate the inferred action class in
712
+ the text at the bottom of the image. We show some correct detections in the top
713
+ two rows, and common error modes in subsequent rows. `Incorrect Class': when
714
+ the inferred action class label is wrong; `Mis-Grouping': correctly localized
715
+ and semantically feasible but incorrectly matched to the agent; and
716
+ `Mis-localization' and `Hallucination' of the object of interaction.}
717
+ \figlabel{vis-detections}
718
+ \end{figure*}
719
+
720
+
721
+ We summarize our results here. We report all results on the \vcoco \val set.
722
+ Since we use bounding box proposals, we analyze the recall for these proposals
723
+ on the objects that are part of various semantic roles. For each semantic role
724
+ for each action class, we compute the coverage (measured as the intersection
725
+ over union of the best overlapping MCG bounding box with the ground truth
726
+ bounding box for the object in the role) for each instance and report the
727
+ mean coverage, recall at 50\% overlap and recall at 70\% overlap
728
+ (\tableref{ap} columns three to five). We see reasonable recall whenever the
729
+ object in the semantic role is large (\eg bed, bench for \vb{lay}, horse,
730
+ elephant, train, buses for \vb{ride}) or small but highly distinctive (\eg
731
+ football for \vb{kick}, doughnuts, hot dogs for \vb{eat obj}) but worse when
732
+ the object can be in drastic motion (\eg tennis rackets and baseball bats for
733
+ \vb{hit instr}), or small and not distinctive (\eg tennis ball for \vb{hit
734
+ obj}, cell phone for \vb{talk on phone}, ski for \vb{ski}, scissors and knife
735
+ for \vb{cut}).
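The coverage and recall columns of \tableref{ap} can be computed in spirit as follows, reusing the iou helper from the role-detection sketch above (illustrative, not the evaluation script):

```python
import numpy as np

def proposal_coverage(gt_role_boxes, proposal_boxes):
    # For each ground-truth role box, IoU of its best-overlapping proposal.
    return np.array([max(iou(g, p) for p in proposal_boxes)
                     for g in gt_role_boxes])

def recall_at(coverages, thresh):
    # Fraction of ground-truth boxes covered at the given IoU threshold.
    return float(np.mean(coverages >= thresh))

# mean coverage: coverages.mean(); R[0.5]/R[0.7]: recall_at(coverages, 0.5 or 0.7)
```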
736
+
737
+ Given that our algorithms start with a person detection, we report the
738
+ average precision of the person detector we are using. On the \vcoco \val set
739
+ our person detector which uses the 16-layer \vgg network in the Fast R-CNN
740
+ \cite{fastrcnn} framework gives an average precision of 62.54\%.
741
+
742
+ We next report the performance at the task of agent detection (\tableref{ap})
743
+ using model $A$ as described in \secref{method}. We observe a mean average
744
+ precision of 57.5\%. Performance is high for action classes which occur in a
745
+ distinctive scene like \vb{surf} (94.0\%, occurring in water) \vb{ski},
746
+ \vb{snowboard} (81.9\% and 75.8\%, occurring in snow) and \vb{hit} (82.6\%,
747
+ occurring in sports fields). Performance is also high for classes which have a
748
+ distinctive object associated with the action like \vb{eat} (70.2\%).
749
+ Performance for classes which are identified by an object which is not easy to
750
+ identify is lower \eg wine glasses for \vb{drink} (25.1\%), books for \vb{read}
751
+ (10.6\%), and the object being cut and the instrument being used for \vb{cut}
752
+ (44.5\%). Performance is also worse for action classes which require reasoning
753
+ about large spatial relationships \eg 61.6\% for \vb{kick}, and fine grained
754
+ reasoning of human pose \eg 41.4\%, 50.1\% for \vb{catch} and \vb{throw}.
755
+ Finetuning the \vgg representation for this task improves performance
756
+ significantly; just training an SVM on the \vgg \texttt{fc7} features
757
+ (finetuned for object detection on \coco) performs much worse at 46.8\%.
758
+
759
+ We now report performance of the two baseline algorithms on the role detection
760
+ task. We first report performance of algorithm $B_0$ which simply pastes the box
761
+ at the mean deformation location and scale (determined using the mean of the
762
+ $\delta$ vector as defined in \eqref{delta} across the training set for each
763
+ action class separately). This does poorly and gives a mean average precision
764
+ for the role detection task of 4.5\%. Using the regression model $B$ as
765
+ described in \secref{method} to predict the location and scale of the
766
+ semantic role does better giving a mAP of 7.9\%, with high performing classes
767
+ being \vb{sit}, and \vb{lay} for which the object of interaction is always
768
+ below the person. Using object detector output from \vgg and using the
769
+ location of the highest scoring object detection (from the set of relevant
770
+ categories for the semantic roles for the action class) without any spatial
771
+ model (denoted as $C_0$) gives a mAP of 23.4\%. Finally, model $C$ which
772
+ also uses a spatial consistency term in addition to the score of the objects
773
+ detected in the image performs the best among these four baseline algorithms
774
+ giving a mAP of 26.4\%. Modeling the spatial relationship helps for cases
775
+ when there are multiple agents in the scene doing similar things \eg
776
+ performance for \vb{eat} goes up from 37.0\% to 46.2\%, for \vb{ride} goes up
777
+ from 12.5\% to 27.6\%.
778
+
779
+ \paragraph{Visualizations}
780
+ Finally, we visualize the output from our best performing baseline algorithm in
781
+ \figref{vis-detections}. We show some correct detections and various error
782
+ modes. One of the common error modes is incorrect labeling of the action
783
+ (\vb{ski} \vs \vb{snowboard}, \vb{catch} \vs \vb{throw}). Even with a spatial
784
+ model, the object of interaction is very often grouped with the wrong
785
+ agent. This is common when there are multiple people doing the same action in
786
+ an image \eg multiple people \vb{riding} horses, or \vb{skateboarding}, or
787
+ \vb{working} on laptops. Finally, a lot of errors are also due to
788
+ mis-localization and hallucination of the object of interaction, in particular when
789
+ the object is small \eg ski for \vb{skiing}, books for \vb{reading}.
790
+ \paragraph{Error Modes}
791
+ Having such annotations also enables us to analyze different error modes.
792
+ Following \cite{hoiem2012diagnosing}, we consider the top
793
+ $num\_inst$ detections for each class ($num\_inst$ is the number of instances
794
+ for that action class), and classify the false positives in these top
795
+ detections into the following error modes:
796
+ \begin{enumerate}
797
+ \item \textbf{bck}: when the agent is detected on the background. (IU with
798
+ any labeled person is less than 0.1).
799
+ \item \textbf{bck person}: when the agent is detected on the background, close
800
+ to people in the background. Detections on background people are not
801
+ penalized, however detections which have overlap between 0.1 and 0.5 with
802
+ people in the background are still penalized and this error mode computes
803
+ that fraction.
804
+ \item \textbf{incorrect label}: when the agent is detected around a person
805
+ that is labeled to be not doing this action.
806
+ \item \textbf{person misloc}: when the agent is detected around a person
807
+ doing the action but is not correctly localized (IU between 0.1 and 0.5) (the
808
+ object is correctly localized).
809
+ \item \textbf{obj misloc}: when the object in the semantic role is not
810
+ properly localized (IU between 0.1 and 0.5) (the agent is correctly
811
+ localized).
812
+ \item \textbf{both misloc}: when both the object and the agent are improperly
813
+ localized (IU for both is between 0.1 and 0.5).
814
+ \item \textbf{mis pairing}: when the object is of the correct semantic class
815
+ but not in the semantic role associated with this agent.
816
+ \item \textbf{obj hallucination}: when the object is detected on the
817
+ background.
818
+ \end{enumerate}
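A condensed sketch of how one false positive could be bucketed under this taxonomy; it collapses some distinctions (e.g. `both misloc') and assumes the overlaps and flags below are precomputed, so it is illustrative rather than the authors' analysis code:

```python
def classify_false_positive(agent_iou, agent_positive, obj_iou, obj_wrong_role):
    # agent_iou: best IoU of the detected agent with a labeled person.
    # agent_positive: True if that best-matching person is labeled with the action.
    # obj_iou: best IoU of the predicted role box with the ground-truth role box.
    # obj_wrong_role: True if the predicted object has the right class but fills
    #                 a different agent's role.
    if agent_iou < 0.1:
        return "bck"                 # agent fired on background
    if agent_iou < 0.5:
        return "person misloc" if agent_positive else "bck person"
    if not agent_positive:
        return "incorrect label"     # well-localized person, wrong action
    if obj_wrong_role:
        return "mis pairing"
    if obj_iou < 0.1:
        return "obj hallucination"
    if obj_iou < 0.5:
        return "obj misloc"
    return "other"
```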
819
+ \figref{fp_distr} shows the distribution of these errors for the 2 best
820
+ performing baselines that we experimented with, model $C_0$ and $C$.
821
+
822
+ The most dominant error mode for these models is incorrect classification of
823
+ the action; \figref{vis-detections} shows some examples. Another error mode is
824
+ mis-localization of the object for categories like \vb{ski}, \vb{surf},
825
+ \vb{skateboard}, and \vb{snowboard}. This is also evident from the poor recall
826
+ of the region proposals for object categories associated with these actions. A
827
+ large number of errors also come from `person misloc' for categories like
828
+ \vb{lay} which is because of unusual agent pose. We also observe that the `mis
829
+ pairing' errors decrease as we start modeling the deformation between the agent
830
+ and the object. Finally, a large number of errors for \vb{cut} and \vb{hit-obj}
831
+ come from hallucinations of the object in the background.
832
+
833
+ \begin{figure*}
834
+ \centering
835
+ \begin{subfigure}{0.48\textwidth}
836
+ \insertA{1.0}{figures/analysis_ft/snap_detections_no_deform_fp_distr.png}
837
+ \caption{Full Model without deformations ($C_0$)} \end{subfigure}
838
+ \begin{subfigure}{0.48\textwidth}
839
+ \insertA{1.0}{figures/analysis_ft/snap_detections_fp_distr.png} \caption{Full
840
+ model ($C$)} \end{subfigure}
841
+ \caption{Distribution of the false positives in the top $num\_inst$ detections
842
+ for each action class (with roles if applicable). `bck' and `bck person'
843
+ indicate when the detected agent is on background (IU with any person less
844
+ than 0.1) or around people in the background (IU with background people
845
+ between 0.1 and 0.5), `incorrect label' refers to when the detected agent is
846
+ not doing the relevant action, `person misloc' refers to when the agent
847
+ detection is mis localized, `obj misloc' refers to when the object in the
848
+ specific semantic role is mis localized, `both misloc' refers to when both
849
+ the agent and the object are mis localized (mis localization means the IU is
850
+ between 0.1 and 0.5). Finally, `mis pairing' refers to when the object of
851
+ interaction is of the correct semantic class but not in the semantic role for
852
+ the detected agent, and `obj hallucination' refers to when the object of
853
+ interaction is hallucinated. The first figure shows the distribution for the
854
+ $C_0$ model (which does not model deformation between agent and object),
855
+ and the second figure shows the distribution for model $C$ (which models
856
+ deformation between the agent and the object).}
857
+ \figlabel{fp_distr}
858
+ \end{figure*}
859
+
860
+ \paragraph{Conclusions and Future Directions}
861
+ In this work, we have proposed the task of visual semantic role labeling in
862
+ images. The goal of this task is to be able to detect people, classify what
863
+ they are doing and localize the different objects in various semantic roles
864
+ associated with the inferred action. We have collected an extensive dataset
865
+ consisting of 16K people in 10K images. Each annotated person is labeled with
866
+ 26 different action labels and has been associated with different objects in
867
+ the different semantic roles for each action. We have presented and analyzed
868
+ the performance of four simple baseline algorithms. Our analysis shows the
869
+ challenging nature of this problem and points to some natural directions of
870
+ future research. We believe our proposed dataset and tasks will enable us to
871
+ achieve a better understanding of actions and activities than is possible
872
+ with current algorithms.
873
+
874
+ \epigraph{Concepts without percepts are empty, percepts without concepts are
875
+ blind.}{Immanuel Kant}
876
+
877
+
878
+
879
+ \paragraph{Acknowledgments: }
880
+ This work was supported by {ONR SMARTS MURI N00014-09-1-1051}, and a Berkeley
881
+ Graduate Fellowship. We gratefully acknowledge {NVIDIA} corporation for the
882
+ donation of Tesla and Titan GPUs used for this research.
883
+
884
+
885
+ {\small
886
+ \bibliographystyle{ieee}
887
+ \bibliography{refs-rbg}
888
+ }
889
+
890
+ \end{document}
papers/1505/1505.05192.tex ADDED
@@ -0,0 +1,643 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{iccv}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{afterpage}
10
+ \usepackage{rotating}
11
+ \usepackage{xcolor,colortbl}
12
+ \usepackage{placeins}
13
+ \usepackage[outercaption]{sidecap}
14
+ \usepackage{makecell}
15
+ \newcommand{\todo}[1]{\textcolor{red}{TODO: #1}\PackageWarning{TODO:}{#1!}}
16
+
17
+
18
+
19
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
20
+
21
+ \iccvfinalcopy
22
+
23
+ \def\iccvPaperID{1777} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
24
+
25
+ \setcounter{page}{1}
26
+ \begin{document}
27
+
28
+ \title{Unsupervised Visual Representation Learning by
29
+ Context Prediction}
30
+
31
+
32
+ \author{
33
+ \vspace{-0.2in}
34
+ \begin{tabular}[t]{c @{\extracolsep{1em}} c @{\extracolsep{1em}} c}
35
+ Carl Doersch$^{1,2}$ &
36
+ Abhinav Gupta$^{1}$ &
37
+ Alexei A. Efros$^{2}$
38
+ \\
39
+ \end{tabular}
40
+ \cr
41
+ \cr
42
+ \small
43
+ \begin{tabular}[t]{c@{\extracolsep{4em}}c}
44
+ $^1$ School of Computer Science &
45
+ $^2$ Dept. of Electrical Engineering and Computer Science \\
46
+ Carnegie Mellon University &
47
+ University of California, Berkeley \\
48
+ \end{tabular}
49
+ }
50
+ \maketitle
51
+
52
+
53
+
54
+ \begin{abstract}
55
+ This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned
56
+ using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework~\cite{girshick2014rich} and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.
57
+
58
+ \end{abstract}
59
+
60
+ \vspace{-0.2in}
61
+ \section{Introduction}
62
+ \vspace{-0.05in}
63
+ Recently,
64
+ new computer vision methods have leveraged large datasets of millions of labeled examples to learn rich, high-performance visual representations~\cite{krizhevsky2012imagenet}.
65
+ Yet efforts to scale these methods to truly Internet-scale datasets (i.e. hundreds of {\bf b}illions of images) are hampered by the sheer expense of the human annotation required.
66
+ A natural way to address this difficulty would be to employ unsupervised learning, which aims to use data without any annotation. Unfortunately, despite several decades of sustained effort, unsupervised methods
67
+ have not yet been shown to extract useful information from
68
+ large collections of full-sized, real images.
69
+ After all, without labels, it is not even clear \textit{what} should be represented. How can one write an objective function to encourage a representation to capture, for example, objects, if none of the objects are labeled?
70
+
71
+
72
+
73
+ \begin{figure}[t]
74
+ \begin{center}
75
+
76
+ \includegraphics[width=0.9\linewidth]{figs/quiz.pdf}
77
+ \vspace{-.2cm}
78
+ \end{center}
79
+ \caption{Our task for learning patch representations involves randomly sampling a patch (blue) and then one of eight possible neighbors (red). Can you guess the spatial configuration for the two pairs of patches? Note that the task is much easier once you have recognized the object!
80
+ }
81
+ \hfill\protect\rotatebox[origin=c]{180}{\begin{small}Answer key: Q1: Bottom right Q2: Top center\end{small}}
82
+ \vspace{-.2in}
83
+ \label{fig:quiz}
84
+ \end{figure}
85
+
86
+
87
+
88
+
89
+ Interestingly, in the text domain, \textit{context} has proven to be a powerful source of automatic supervisory signal for learning representations~\cite{ando2005framework,tsujiiythu2007discriminative,collobert2008unified,mikolov2013distributed}.
90
+ Given a large text corpus, the idea is to train a model that maps each word to a feature vector, such that it is easy to predict the words in the context (i.e., a few words before and/or after) given the vector. This converts an apparently unsupervised problem (finding a good similarity metric between words) into a
91
+ ``self-supervised'' one: learning a function from a given word to the words surrounding it. Here the context prediction task is just a ``pretext'' to force the model to learn a good word embedding, which, in turn, has been shown to be useful in a number of real tasks, such as semantic word similarity~\cite{mikolov2013distributed}.
92
+
93
+
94
+
95
+
96
+
97
+
98
+
99
+
100
+
101
+ Our paper aims to provide a similar ``self-supervised'' formulation for image data: a supervised task involving predicting the context for a patch. Our task
102
+ is illustrated in Figures~\ref{fig:quiz} and~\ref{fig:task}. We sample random pairs of patches in one of eight spatial configurations, and present each pair to a machine learner, providing no information about the patches' original position within the image. The algorithm must then guess the position of one patch relative to the other. Our underlying hypothesis is that doing well on this task requires understanding scenes and objects, {\em i.e.} a good visual representation for this task will need to extract objects and their parts in order to reason about their relative spatial location. ``Objects,'' after all, consist of multiple parts that can be detected independently of one another, and which occur in a specific spatial configuration (if there is no specific configuration of the parts, then it is ``stuff''~\cite{adelson2001seeing}).
103
+ We present a ConvNet-based approach to learn a visual representation from this task. We demonstrate that the resulting visual representation is good for both object detection, providing a significant boost on PASCAL VOC 2007 compared to learning from scratch, as well as for unsupervised object discovery / visual data mining. This means, surprisingly, that our representation generalizes {\em across} images, despite being trained using an objective function that operates on a single image at a time. That is, instance-level supervision appears to improve performance on category-level tasks.
104
+
105
+
106
+
107
+
108
+ \vspace{-0.05in}
109
+ \section{Related Work}
110
+ \vspace{-0.05in}
111
+ One way to think of a good image representation is as the latent variables of an appropriate generative model.
112
+ An ideal generative model of natural images would both generate images according to their natural distribution, and be concise in the sense that it would seek common causes for different images and share information between them.
113
+ However, inferring the latent structure given an image is intractable for even relatively simple models.
114
+ To deal with these computational issues, a number of works, such as the wake-sleep algorithm~\cite{hinton1995wake}, contrastive divergence~\cite{hinton2006fast}, deep Boltzmann machines~\cite{salakhutdinov2009deep}, and variational Bayesian methods~\cite{kingma2014,rezende2014stochastic} use sampling to perform approximate inference. Generative models have shown promising performance on smaller datasets such as handwritten digits~\cite{hinton1995wake,hinton2006fast,salakhutdinov2009deep,kingma2014,rezende2014stochastic}, but none have proven effective for high-resolution natural images.
115
+
116
+
117
+
118
+
119
+
120
+
121
+
122
+
123
+ Unsupervised representation learning can also be formulated as learning an embedding (i.e. a feature vector for each image) where images that are semantically similar are close, while semantically different ones are far apart.
124
+ One way to build such a representation is to create a supervised ``pretext'' task such that an embedding which solves the task will also be useful for other real-world tasks. For example, denoising autoencoders~\cite{vincent2008extracting,bengio2013deep} use reconstruction from noisy data as a pretext task: the algorithm must connect images to other images with similar objects to tell the difference between noise and signal. Sparse autoencoders also use reconstruction as a pretext task, along with a sparsity penalty~\cite{olshausen1996emergence}, and such autoencoders may be stacked to form a deep representation~\cite{lee2006efficient,le2013building}.
125
+ (however, only~\cite{le2013building} was successfully applied to full-sized images, requiring a million CPU hours to discover just three objects).
126
+ We believe that current reconstruction-based algorithms struggle with low-level phenomena, like stochastic textures, making it hard to even measure whether a model is generating well.
127
+
128
+
129
+
130
+
131
+
132
+
133
+
134
+
135
+
136
+
137
+
138
+ Another pretext task is ``context prediction.''
139
+ A strong tradition for this kind of task already exists in the text domain, where ``skip-gram''~\cite{mikolov2013distributed} models have been shown to generate useful word representations. The idea is to train a model (e.g. a deep network) to predict, from a single word, the $n$ preceding and $n$ succeeding words. In principle, similar reasoning could be applied in the image domain, a kind of visual ``fill in the blank'' task, but, again, one runs into the problem of determining whether the predictions themselves are correct~\cite{doersch2014context},
140
+ unless one cares about predicting only very low-level features~\cite{domke2008killed,larochelle2011neural,theis2015generative}.
141
+ To address this,~\cite{malisiewicz2009beyond} predicts the appearance of an image region by consensus voting of the transitive nearest neighbors of its surrounding regions. Our previous work~\cite{doersch2014context} explicitly formulates a statistical test to determine whether
142
+ the data is better explained by a prediction or by a low-level null hypothesis model.
143
+
144
+
145
+
146
+
147
+
148
+
149
+ \begin{figure}[t]
150
+ \begin{center}
151
+ \includegraphics[width=0.9\linewidth]{figs/taskv2_jitter.pdf}
152
+ \vspace{-.2cm}
153
+ \end{center}
154
+ \caption{The algorithm receives two patches in one of these eight possible spatial arrangements, without any context, and must then classify which configuration was sampled. }
155
+ \vspace{-.2in}
156
+ \label{fig:task}
157
+ \end{figure}
158
+
159
+
160
+
161
+ The key problem that these approaches must address is that predicting pixels is much harder than predicting words, due to the huge variety of pixels that can arise from the same semantic object. In the text domain, one
162
+ interesting idea is to switch from a pure prediction task to a discrimination task~\cite{tsujiiythu2007discriminative,collobert2008unified}.
163
+ In this case, the pretext task is to discriminate true snippets of text from the same snippets where a word has been replaced at random.
164
+ A direct extension of this to 2D might be to discriminate between real images vs. images where one patch has been replaced by a random patch from elsewhere in the dataset.
165
+ However, such a task would be trivial, since discriminating low-level color statistics and lighting would be enough. To make the task harder and more high-level,
166
+ in this paper, we instead classify between multiple possible configurations of patches sampled from {\em the same image}, which means they will share lighting and color statistics, as shown on Figure~\ref{fig:task}.
167
+
168
+
169
+
170
+
171
+
172
+
173
+
174
+
175
+
176
+
177
+
178
+
179
+
180
+
181
+
182
+
183
+
184
+ Another line of work in unsupervised learning from images aims to discover object categories using hand-crafted features and various forms of clustering (e.g.~\cite{sivic2005discovering,russell2006using} learned a generative model over bags of visual words). Such representations lose shape information, and will readily discover clusters of, say, foliage. A few subsequent works have attempted to use representations more closely tied to shape \cite{lee2009foreground,payet2010set}, but relied on contour extraction, which is difficult in complex images. Many other approaches~\cite{grauman2006unsupervised,kim2008unsupervised,faktor2012clustering} focus on defining similarity metrics which can be used in more standard clustering algorithms;~\cite{RematasCVPR15}, for instance, re-casts the problem as frequent itemset mining. Geometry may also be used for verifying links between images~\cite{quack2008world,Chum09,heath2010image}, although this can fail for deformable objects.
185
+
186
+ Video can provide another cue for representation learning. For most scenes, the identity of objects remains unchanged even as appearance changes with time. This kind of temporal coherence has a long history in visual learning literature~\cite{foldiak1991learning,wiskott02}, and contemporaneous work shows strong improvements on modern detection datasets~\cite{wang2015unsupervised}.
187
+
188
+ Finally, our work is related to a line of research on discriminative patch
189
+ mining~\cite{doersch2012makes,singh2012unsupervised,juneja13blocks,li2013harvesting,sun2013learning,doersch2013mid}, which has emphasized weak supervision as a means of object discovery. Like the current work, they emphasize the utility of learning representations of patches (i.e. object parts) before learning full objects and scenes, and argue that scene-level labels can serve as a pretext task. For example, \cite{doersch2012makes} trains detectors to be sensitive to different geographic locales, but the actual goal is to discover specific elements of architectural style.
190
+
191
+
192
+
193
+
194
+
195
+
196
+
197
+
198
+ \vspace{-0.05in}
199
+ \section{Learning Visual Context Prediction}\label{sec:learning}
200
+ \vspace{-0.05in}
201
+ We aim to learn an image representation for our pretext task, i.e., predicting the relative position of patches within an image. We employ Convolutional Neural Networks (ConvNets), which are well known to learn complex image representations with minimal human feature design.
202
+ Building a ConvNet that can predict a relative offset for a pair of patches is, in principle, straightforward: the network must feed the two input patches through several convolution layers, and produce an output that assigns a probability to each of the eight spatial configurations (Figure~\ref{fig:task}) that might have been sampled (i.e. a softmax output). Note, however, that we ultimately wish to learn a feature embedding for {\em individual} patches, such that patches which are visually similar (across different images) would be close in the
203
+ embedding space.
204
+
205
+
206
+
207
+ To achieve this, we use a late-fusion architecture shown in Figure~\ref{fig:arch}: a pair of AlexNet-style architectures~\cite{krizhevsky2012imagenet} that process each patch separately, until a depth analogous to fc6 in AlexNet, after which point the representations are fused. For the layers that process only one of the patches, weights are tied between both sides of the network, such that the same fc6-level embedding function is computed for both patches.
208
+ Because there is limited capacity for joint reasoning---i.e., only two layers receive input from both patches---we expect the network to perform the bulk of the semantic reasoning for each patch separately.
209
+ When designing the network, we followed AlexNet where possible.
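+ As a rough illustration of this late-fusion design, a PyTorch-style sketch is
+ given below. The tied per-patch tower, the fusion of the two embeddings, and
+ the 8-way output follow the description above; the specific layer sizes are
+ placeholders rather than the exact values used in the paper.
\begin{verbatim}
import torch
import torch.nn as nn

class PatchPairNet(nn.Module):
    def __init__(self, embed_dim=4096, num_configs=8):
        super().__init__()
        # One tower applied to both patches: sharing the module shares weights,
        # so the same "fc6"-level embedding is computed for each patch.
        self.tower = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(3), nn.Flatten(),
            nn.Linear(384 * 3 * 3, embed_dim), nn.ReLU(),
        )
        # Joint reasoning happens only after fusion.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 4096), nn.ReLU(),
            nn.Linear(4096, num_configs),   # logits over the 8 configurations
        )

    def forward(self, patch_a, patch_b):
        f_a, f_b = self.tower(patch_a), self.tower(patch_b)
        return self.head(torch.cat([f_a, f_b], dim=1))
\end{verbatim}
+ Training then minimizes a standard cross-entropy loss over the eight
+ configuration labels.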
210
+
211
+ To obtain training examples given an image, we sample the first patch uniformly, without any reference to image content.
212
+ Given the position of the first patch, we sample the second patch randomly from the eight possible neighboring locations as in Figure~\ref{fig:task}.
213
+
214
+ \vspace{-0.05in}
215
+ \subsection{Avoiding ``trivial'' solutions}
216
+ \vspace{-0.05in}
217
+
218
+ When designing a pretext task, care must be taken to ensure that the task forces the network to extract the desired information (high-level semantics, in our case),
219
+ without taking ``trivial'' shortcuts.
220
+ In our case, low-level cues like boundary patterns or textures continuing between patches could potentially serve as such a shortcut.
221
+ Hence, for the relative prediction task, it was important to include a gap between patches (in our case, approximately half the patch width).
222
+ Even with the gap, it is possible that long lines spanning neighboring patches could give away the correct answer.
223
+ Therefore, we also randomly jitter each patch location by up to 7 pixels (see Figure~\ref{fig:task}).
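+ A minimal sketch of this sampling scheme (with the half-patch gap and the
+ per-patch jitter; the coordinate conventions and bounds are simplified
+ assumptions, not the exact pipeline) could look like:
\begin{verbatim}
import random

PATCH, GAP, JITTER = 96, 48, 7
# Eight neighbor directions; the list index doubles as the class label.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def sample_pair(img_h, img_w):
    """Return top-left corners of two patches and the relative-position label.
    Assumes the image is large enough to contain both patches plus jitter."""
    step = PATCH + GAP
    lo = step + JITTER
    y = random.randint(lo, img_h - PATCH - step - JITTER)
    x = random.randint(lo, img_w - PATCH - step - JITTER)
    label = random.randrange(8)
    dy, dx = OFFSETS[label]

    def jit(v):
        return v + random.randint(-JITTER, JITTER)

    return (jit(y), jit(x)), (jit(y + dy * step), jit(x + dx * step)), label
\end{verbatim}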
224
+
225
+
226
+
227
+
228
+
229
+
230
+
231
+
232
+
233
+
234
+
235
+
236
+
237
+
238
+
239
+
240
+ \begin{figure}[t]
241
+ \begin{center}
242
+
243
+ \includegraphics[width=0.8\linewidth]{figs/arch_shrunk.pdf}
244
+ \vspace{-.1in}
245
+ \end{center}
246
+ \caption{Our architecture for pair classification. Dotted lines indicate shared weights. `conv' stands for a convolution layer, `fc' stands for a fully-connected one, `pool' is a max-pooling layer, and `LRN' is a local response normalization layer. Numbers in parentheses are kernel size, number of outputs, and stride (fc layers have only a number of outputs). The LRN parameters follow~\cite{krizhevsky2012imagenet}. All conv and fc layers are followed by ReLU nonlinearities, except fc9 which feeds into a softmax classifier. }
247
+ \vspace{-.2in}
248
+ \label{fig:arch}
249
+ \end{figure}
250
+
251
+ \begin{figure*}[t]
252
+ \begin{center}
253
+
254
+ \includegraphics[width=0.85\linewidth]{figs/nnsv3.pdf}
255
+ \vspace{-.1in}
256
+ \end{center}
257
+ \caption{Examples of patch clusters obtained by nearest neighbors.
258
+ The query patch is shown on the far left. Matches are for three different features: fc6 features from a random initialization of our architecture, AlexNet fc7 after training on labeled ImageNet, and the fc6 features learned from our method. Queries were chosen from 1000 randomly-sampled patches. The top group shows examples where our algorithm performs well; in the middle group, AlexNet outperforms our approach; and in the bottom group, all three features work well.}
259
+ \vspace{-.2in}
260
+ \label{fig:nns}
261
+ \end{figure*}
262
+
263
+ However, even these precautions are not enough:
264
+ we were surprised to find that, for some images, another trivial solution exists. We traced the problem to an unexpected culprit: chromatic aberration.
265
+ Chromatic aberration arises from differences in the way the lens focuses light at different wavelengths.
266
+ In some cameras, one color channel (commonly green) is shrunk toward the image center relative to the others~\cite[p.~76]{brewster1854treatise}.
267
+ A ConvNet, it turns out, can learn to localize a patch relative to the lens itself (see Section~\ref{sec:aberration}) simply by detecting the separation between green and magenta (red + blue). Once the network learns the absolute location on the lens, solving the relative location task becomes trivial.
268
+ To deal with this problem, we experimented with two types of pre-processing. One is to shift green and magenta toward gray (`projection'). Specifically, let $a=[-1,2,-1]$ (the `green-magenta color axis' in RGB space). We then define $B=I-a^{T}a/(aa^{T})$, which is a matrix that subtracts the projection of a color onto the green-magenta color axis. We multiply every pixel value by $B$.
269
+ An alternative approach is to randomly drop 2 of the 3 color channels from each patch (`color dropping'), replacing the dropped colors with Gaussian noise (standard deviation $\sim 1/100$ the standard deviation of the remaining channel). For qualitative results, we show the `color-dropping' approach, but found both performed similarly; for the object detection results, we show both results.
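+ For concreteness, both pre-processing variants can be sketched in a few lines
+ of numpy. This is an interpretation of the text, assuming float images laid
+ out as H x W x RGB; centering the noise at zero is our assumption.
\begin{verbatim}
import numpy as np

a = np.array([[-1.0, 2.0, -1.0]])        # green-magenta color axis (row vector)
B = np.eye(3) - a.T @ a / (a @ a.T)      # subtracts the green-magenta component

def project_colors(img):
    """`projection': multiply every RGB pixel by B."""
    return img @ B                        # B is symmetric

def drop_colors(img, rng=np.random):
    """`color dropping': keep one random channel, replace the other two with
    Gaussian noise whose std is ~1/100 of the kept channel's std."""
    keep = rng.randint(3)
    sigma = img[..., keep].std() / 100.0
    out = np.empty_like(img)
    for c in range(3):
        if c == keep:
            out[..., c] = img[..., c]
        else:
            out[..., c] = rng.normal(0.0, sigma, img.shape[:2])
    return out
\end{verbatim}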
270
+
271
+ \begin{figure*}[t]
272
+ \begin{center}
273
+ \includegraphics[width=0.99\linewidth]{figs/aberration.pdf}
274
+ \end{center}
275
+ \vspace{-0.2in}
276
+ \caption{
277
+ We trained a network to predict the absolute $(x,y)$ coordinates of randomly sampled patches.
278
+ Far left: input image. Center left: extracted patches.
279
+ Center right: the location the trained network predicts for each patch shown on the left. Far right: the same result after our color projection scheme. Note that the far right patches are shown \textit{after} color projection; the operation's effect is almost unnoticeable.
280
+ }
281
+ \vspace{-0.2in}
282
+ \label{fig:aberration}
283
+ \end{figure*}
284
+
285
+ \noindent {\bf Implementation Details:} We use Caffe~\cite{jia2014caffe}, and train on the ImageNet~\cite{deng2009imagenet} 2012 training set ($\sim$1.3M images), using only the images and discarding the labels. First, we resize each image to between 150K and 450K total pixels, preserving the aspect ratio. From these images, we sample patches at resolution 96-by-96. For computational efficiency, we only sample the patches from a grid-like pattern, such that each sampled patch can participate in as many as 8 separate pairings. We allow a gap of 48 pixels between the sampled patches in the grid, but also jitter the location of each patch in the grid by $-7$ to $7$ pixels in each direction.
286
+ We preprocess patches by (1) mean subtraction
287
+ (2) projecting or dropping colors (see above), and (3) randomly downsampling some patches to as little as 100 total pixels, and then upsampling them, to build robustness to pixelation. When applying simple SGD to train the network, we found that the network predictions would degenerate to a uniform prediction over the 8 categories, with all activations for fc6 and fc7 collapsing to 0. This meant that the optimization became permanently stuck in a saddle point where it ignored the input from the lower layers (which helped minimize the variance of the final output), and therefore that the net could not tune the lower-level features and escape the saddle point. Hence, our final implementation employs batch normalization~\cite{ioffe2015batch}, without the scale and shift ($\gamma$ and $\beta$), which forces the network activations to vary across examples. We also found that high momentum values (e.g. $.999$) accelerated learning. For experiments, we use a ConvNet trained on a K40 GPU for approximately four weeks.
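+ In modern PyTorch terms, the batch-normalization, momentum, and pixelation
+ details above might be expressed roughly as follows. This is an
+ interpretation, not the original Caffe configuration; the learning rate and
+ channel count are placeholders.
\begin{verbatim}
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Batch normalization without the learned scale/shift (gamma, beta).
bn = nn.BatchNorm2d(384, affine=False)
model = nn.Sequential(nn.Conv2d(3, 384, kernel_size=3, padding=1), bn, nn.ReLU())
opt = optim.SGD(model.parameters(), lr=1e-4, momentum=0.999)

def pixelate(patch, min_pixels=100):
    """Randomly downsample a 1x3x96x96 patch to as few as ~min_pixels total
    pixels, then upsample back, for robustness to pixelation."""
    side = random.randint(int(min_pixels ** 0.5), 96)
    small = F.interpolate(patch, size=(side, side), mode='bilinear',
                          align_corners=False)
    return F.interpolate(small, size=(96, 96), mode='bilinear',
                         align_corners=False)
\end{verbatim}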
288
+
289
+
290
+
291
+
292
+ \vspace{-0.1in}
293
+ \section{Experiments}
294
+ \vspace{-0.05in}
295
+
296
+
297
+
298
+ We first demonstrate the network has learned to associate semantically similar patches, using simple nearest-neighbor matching. We then apply the trained network in two domains.
299
+ First, we use the model as ``pre-training'' for a standard vision task with only limited training data: specifically, we use the VOC 2007 object detection task. Second, we evaluate visual data mining, where the goal is to start with an unlabeled image collection and discover object classes.
300
+ Finally, we analyze the performance on the layout prediction ``pretext task'' to see how much is left to learn from this supervisory signal.
301
+
302
+ \vspace{-0.05in}
303
+ \subsection{Nearest Neighbors}\label{sec:nns}
304
+ \vspace{-0.05in}
305
+ Recall our intuition that training should assign similar representations to semantically similar patches. In this section, our goal is to understand which patches our network considers similar.
306
+ We begin by sampling random 96x96 patches, which we represent using fc6 features (i.e. we remove fc7 and higher shown in Figure~\ref{fig:arch}, and use only one of the two stacks). We find nearest neighbors using normalized correlation of these features. Results for some patches (selected out of 1000 random queries) are shown in Figure~\ref{fig:nns}.
307
+ For comparison, we repeated the experiment using fc7 features from AlexNet trained on ImageNet (obtained by upsampling the patches), and using fc6 features from our architecture but without any training (random weights initialization). As shown in Figure~\ref{fig:nns}, the matches returned by our feature often capture the semantic information that we are after, matching AlexNet in terms of semantic content (in some cases, e.g. the car wheel, our matches capture pose better). Interestingly, in a few cases, random (untrained) ConvNet also does reasonably well.
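+ The retrieval step itself is simple; a sketch of nearest-neighbor search under
+ normalized correlation (implemented here as cosine similarity of mean-centered
+ fc6 vectors; the N x D feature-matrix layout is an assumption) is:
\begin{verbatim}
import numpy as np

def nearest_neighbors(query, feats, k=5):
    """query: (D,) fc6 descriptor; feats: (N, D) descriptors of all patches."""
    def normalize(x):
        x = x - x.mean(axis=-1, keepdims=True)
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    sims = normalize(feats) @ normalize(query)
    return np.argsort(-sims)[:k]          # indices of the k closest patches
\end{verbatim}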
308
+
309
+
310
+
311
+
312
+
313
+
314
+ \vspace{-0.05in}
315
+ \subsection{Aside: Learnability of Chromatic Aberration}\label{sec:aberration}
316
+ \vspace{-0.05in}
317
+ We noticed in early nearest-neighbor experiments that some patches retrieved matches from the same absolute location in the image, regardless of content, because those patches displayed similar aberration. To further demonstrate this phenomenon, we trained a network
318
+ to predict the absolute $(x,y)$ coordinates of patches sampled from ImageNet. While the overall accuracy of this regressor is not very high, it does surprisingly well for some images: for the top 10\% of images, the average (root-mean-square) error is .255, while chance performance (always predicting the image center) yields a RMSE of .371. Figure~\ref{fig:aberration} shows one such result. Applying the proposed ``projection'' scheme increases the error on the top 10\% of images to .321.
319
+
320
+
321
+
322
+
323
+
324
+
325
+ \begin{table*}
326
+ \scriptsize{
327
+ \setlength{\tabcolsep}{3pt}
328
+ \center
329
+ \definecolor{LightRed}{rgb}{1,.5,.5}
330
+ \begin{tabular}{c|c c c c c c c c c c c c c c c c c c c c|c}
331
+ \hline
332
+ \hline
333
+ \textbf{VOC-2007 Test}&
334
+ aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mAP\\
335
+ \hline
336
+ \hline
337
+ \textbf{DPM-v5}\cite{felzenszwalb2010}&
338
+ 33.2 & 60.3 & 10.2 & 16.1 & 27.3 & 54.3 & 58.2 & 23.0 & 20.0 & 24.1 & 26.7 & 12.7 & 58.1 & 48.2 & 43.2 & 12.0 & 21.1 & 36.1 & 46.0 & 43.5 & 33.7\\
339
+ \hline
340
+ \textbf{\cite{cinbis2013segmentation}} w/o context &
341
+ 52.6 & 52.6 & 19.2 & 25.4 & 18.7 & 47.3 & 56.9 & 42.1 & 16.6 & 41.4 & 41.9 & 27.7 & 47.9 & 51.5 & 29.9 & 20.0 & 41.1 & 36.4 & 48.6 & 53.2 & 38.5\\
342
+ \hline
343
+ \textbf{Regionlets\cite{WangRegionlets}}&
344
+ 54.2 & 52.0 & 20.3 & 24.0 & 20.1 & 55.5 & 68.7 & 42.6 & 19.2 & 44.2 & 49.1 & 26.6 & 57.0 & 54.5 & 43.4 & 16.4 & 36.6 & 37.7 & 59.4 & 52.3 & 41.7 \\
345
+ \hline
346
+ \textbf{Scratch-R-CNN\cite{agrawal2014analyzing}}&
347
+ 49.9 & 60.6 & 24.7 & 23.7 & 20.3 & 52.5 & 64.8 & 32.9 & 20.4 & 43.5 & 34.2 & 29.9 & 49.0 & 60.4 & 47.5 & 28.0 & 42.3 & 28.6 & 51.2 & 50.0 & 40.7\\
348
+ \hline
349
+ \textbf{Scratch-Ours}&
350
+ 52.6 & 60.5 & 23.8 & 24.3 & 18.1 & 50.6 & 65.9 & 29.2 & 19.5 & 43.5 & 35.2 & 27.6 & 46.5 & 59.4 & 46.5 & 25.6 & 42.4 & 23.5 & 50.0 & 50.6 & 39.8\\
351
+ \hline
352
+ \textbf{Ours-projection}&
353
+ 58.4 & 62.8 & 33.5 & 27.7 & 24.4 & 58.5 & 68.5 & 41.2 & 26.3 & 49.5 & 42.6 & 37.3 & 55.7 & 62.5 & 49.4 & 29.0 & 47.5 & 28.4 & 54.7 & 56.8 & 45.7\\
354
+ \hline
355
+ \textbf{Ours-color-dropping}&
356
+ 60.5 & 66.5 & 29.6 & 28.5 & 26.3 & 56.1 & 70.4 & 44.8 & 24.6 & 45.5 & 45.4 & 35.1 & 52.2 & 60.2 & 50.0 & 28.1 & 46.7 & 42.6 & 54.8 & 58.6 & 46.3\\
357
+ \hline
358
+ \textbf{Ours-Yahoo100m}&
359
+ 56.2 & 63.9 & 29.8 & 27.8 & 23.9 & 57.4 & 69.8 & 35.6 & 23.7 & 47.4 & 43.0 & 29.5 & 52.9 & 62.0 & 48.7 & 28.4 & 45.1 & 33.6 & 49.0 & 55.5 & 44.2\\
360
+ \hline
361
+ \hline
362
+ \textbf{ImageNet-R-CNN\cite{girshick2014rich}}&
363
+ 64.2 & 69.7 & 50 & 41.9 & 32.0 & 62.6 & 71.0 & 60.7 & 32.7 & 58.5 & 46.5 & 56.1 & 60.6 & 66.8 & 54.2 & 31.5 & 52.8 & 48.9 & 57.9 & 64.7 & 54.2\\
364
+ \hline
365
+ \hline
366
+ \textbf{K-means-rescale~\cite{krahenbuhl2015data}}&
367
+ 55.7 & 60.9 & 27.9 & 30.9 & 12.0 & 59.1 & 63.7 & 47.0 & 21.4 & 45.2 & 55.8 & 40.3 & 67.5 & 61.2 & 48.3 & 21.9 & 32.8 & 46.9 & 61.6 & 51.7 & 45.6 \\
368
+ \hline
369
+ \textbf{Ours-rescale~\cite{krahenbuhl2015data}}&
370
+ 61.9 & 63.3 & 35.8 & 32.6 & 17.2 & 68.0 & 67.9 & 54.8 & 29.6 & 52.4 & 62.9 & 51.3 & 67.1 & 64.3 & 50.5 & 24.4 & 43.7 & 54.9 & 67.1 & 52.7 & 51.1\\
371
+ \hline
372
+ \textbf{ImageNet-rescale~\cite{krahenbuhl2015data}}&
373
+ 64.0 & 69.6 & 53.2 & 44.4 & 24.9 & 65.7 & 69.6 & 69.2 & 28.9 & 63.6 & 62.8 & 63.9 & 73.3 & 64.6 & 55.8 & 25.7 & 50.5 & 55.4 & 69.3 & 56.4 & 56.5 \\
374
+ \hline
375
+ \hline
376
+ \textbf{VGG-K-means-rescale}&
377
+ 56.1 & 58.6 & 23.3 & 25.7 & 12.8 & 57.8 & 61.2 & 45.2 & 21.4 & 47.1 & 39.5 & 35.6 & 60.1 & 61.4 & 44.9 & 17.3 & 37.7 & 33.2 & 57.9 & 51.2 & 42.4 \\
378
+ \hline
379
+ \textbf{VGG-Ours-rescale}&
380
+ 71.1 & 72.4 & 54.1 & 48.2 & 29.9 & 75.2 & 78.0 & 71.9 & 38.3 & 60.5 & 62.3 & 68.1 & 74.3 & 74.2 & 64.8 & 32.6 & 56.5 & 66.4 & 74.0 & 60.3 & 61.7\\
381
+ \hline
382
+ \textbf{VGG-ImageNet-rescale}&
383
+ 76.6 & 79.6 & 68.5 & 57.4 & 40.8 & 79.9 & 78.4 & 85.4 & 41.7 & 77.0 & 69.3 & 80.1 & 78.6 & 74.6 & 70.1 & 37.5 & 66.0 & 67.5 & 77.4 & 64.9 & 68.6 \\
384
+ \hline
385
+ \hline
386
+
387
+ \end{tabular}
388
+ \caption{Mean Average Precision on VOC-2007.}
389
+ \label{tab:voc_2007}
390
+ \vspace{-.25cm}
391
+ }
392
+ \end{table*}
393
+
394
+
395
+ \vspace{-0.05in}
396
+ \subsection{Object Detection}
397
+ \vspace{-0.05in}
398
+ \label{sec:obj_det}
399
+
400
+
401
+ Previous work
402
+ on the Pascal VOC challenge~\cite{everingham2010pascal}
403
+ has shown that pre-training on ImageNet (i.e., training a ConvNet to solve the ImageNet challenge) and then ``fine-tuning'' the network (i.e. re-training the ImageNet model for PASCAL data) provides a substantial boost over training on the Pascal training set alone~\cite{girshick2014rich,agrawal2014analyzing}. However, as far as we are aware, no works have shown that \textit{unsupervised} pre-training on images can provide such a performance boost, no matter how much data is used.
404
+
405
+
406
+ \begin{figure}
407
+ \begin{minipage}[c]{0.4\linewidth}
408
+ \includegraphics[width=\textwidth]{figs/fcarch_pool.pdf}
409
+ \end{minipage}\hfill
410
+ \begin{minipage}[c]{0.57\linewidth}
411
+ \caption{Our architecture for Pascal VOC detection. Layers from conv1 through pool5 are copied from our patch-based network (Figure~\ref{fig:arch}). The new 'conv6' layer is created by converting the fc6 layer into a convolution layer. Kernel sizes, output units, and stride are given in parentheses, as in Figure~\ref{fig:arch}.} \label{fig:fcarch}
412
+ \end{minipage}
413
+ \vspace{-.6cm}
414
+ \end{figure}
415
+
416
+ Since we are already using a ConvNet, we adopt the current state-of-the-art R-CNN pipeline~\cite{girshick2014rich}. R-CNN works on object proposals that have been resized to 227x227. Our algorithm, however, is aimed at 96x96 patches. We find that downsampling the proposals to 96x96 loses too much detail.
417
+ Instead, we adopt the architecture shown in Figure~\ref{fig:fcarch}. First, as above, we use only one stack from Figure~\ref{fig:arch}.
418
+ Second, we resize the convolution layers to operate on inputs of 227x227. This results in a pool5 that is 7x7 spatially, so we must convert the previous fc6 layer into a convolution layer (which we call conv6) following~\cite{long2014fully}. Note our conv6 layer has 4096 channels, where each unit connects to a 3x3 region of pool5. A conv layer with 4096 channels would be quite expensive to connect directly to a 4096-dimensional fully-connected layer. Hence, we add another layer after conv6 (called conv6b), using a 1x1 kernel, which reduces the dimensionality to 1024 channels (and adds a nonlinearity). Finally, we feed the outputs through a pooling layer to a fully connected layer (fc7) which in turn connects to a final fc8 layer which feeds into the softmax. We fine-tune this network according to the procedure described in~\cite{girshick2014rich} (conv6b, fc7, and fc8 start with random weights), and use fc7 as the final representation. We do not use bounding-box regression, and take the appropriate results from~\cite{girshick2014rich} and~\cite{agrawal2014analyzing}.
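+ A rough PyTorch sketch of this detection head is shown below. The pool5
+ channel count (256, as in AlexNet), the padding, the pooling choice, and the
+ 21-way output (20 VOC classes plus background) are our assumptions where the
+ text does not pin them down.
\begin{verbatim}
import torch.nn as nn

detection_head = nn.Sequential(
    # fc6 re-cast as a convolution over the 7x7 pool5 map ("conv6"):
    nn.Conv2d(256, 4096, kernel_size=3, padding=1), nn.ReLU(),
    # 1x1 bottleneck down to 1024 channels ("conv6b"):
    nn.Conv2d(4096, 1024, kernel_size=1), nn.ReLU(),
    # pooling, then the new fully-connected layers:
    nn.AdaptiveMaxPool2d(3), nn.Flatten(),
    nn.Linear(1024 * 3 * 3, 4096), nn.ReLU(),   # fc7
    nn.Linear(4096, 21),                        # fc8, fed to a softmax
)
\end{verbatim}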
419
+
420
+
421
+
422
+ Table~\ref{tab:voc_2007} shows our results. Our architecture trained from scratch (random initialization) performs slightly worse than AlexNet trained from scratch. However, our pre-training makes up for this, boosting the from-scratch number by 6\% mAP, and outperforms an AlexNet-style model trained from scratch on Pascal by over 5\%. This puts us about 8\% behind
423
+ the performance of R-CNN pre-trained with ImageNet labels~\cite{girshick2014rich}. This is the best result we are aware of on VOC 2007 without using labels outside the dataset. We ran additional baselines initialized with batch normalization, but found they performed worse than the ones shown.
424
+
425
+ To understand the effect of various dataset biases~\cite{torralba11}, we also performed a preliminary experiment pre-training on a randomly-selected 2M subset of the Yahoo/Flickr 100-million Dataset~\cite{thomee2015yfcc100m}, which was collected entirely automatically. The performance after fine-tuning is slightly worse than ImageNet, but there is still a considerable boost over the from-scratch model.
426
+
427
+
428
+ In the above fine-tuning experiments, we removed the batch normalization layers by estimating the
429
+ mean and variance of the conv- and fc- layers, and then rescaling the weights and biases such that the outputs of the conv and fc layers have mean 0 and variance 1 for each channel.
430
+ Recent work~\cite{krahenbuhl2015data},
431
+ however, has shown empirically that the scaling of the weights prior to finetuning can have a
432
+ strong impact on test-time performance, and argues that our previous method of
433
+ removing batch normalization leads to poorly scaled weights. They propose a simple way to
434
+ rescale the network's weights without changing the function that the network computes, such that
435
+ the network behaves better during finetuning. Results using this technique are shown
436
+ in Table~\ref{tab:voc_2007}.
437
+ Their approach gives a boost to all methods, but gives less of a boost to the
438
+ already-well-scaled ImageNet-category model. Note that for this comparison, we
439
+ used fast-rcnn~\cite{girshickICCV15fastrcnn} to save compute time, and we discarded all
440
+ pre-trained fc-layers from our model, re-initializing them with the K-means procedure
441
+ of~\cite{krahenbuhl2015data}
442
+ (which was used to initialize all layers in the ``K-means-rescale'' row).
443
+ Hence, the structure of the network during
444
+ fine-tuning and testing was the
445
+ same for all models.
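+ The batch-normalization removal mentioned at the start of this paragraph
+ amounts to folding the estimated per-channel statistics into each layer's own
+ parameters. A sketch of that folding for a convolution layer (numpy; shapes
+ W: (C_out, C_in, kH, kW), b: (C_out,)) is:
\begin{verbatim}
import numpy as np

def fold_stats_into_conv(W, b, channel_mean, channel_var, eps=1e-5):
    """Rescale weights/biases so the layer's outputs have per-channel
    mean 0 and variance 1 (given statistics estimated on sample data)."""
    scale = 1.0 / np.sqrt(channel_var + eps)
    W_new = W * scale[:, None, None, None]
    b_new = (b - channel_mean) * scale
    return W_new, b_new
\end{verbatim}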
446
+
447
+
448
+
449
+ Considering that we have essentially infinite data to train our model, we might expect
450
+ that our algorithm should also provide a large boost to higher-capacity models such as
451
+ VGG~\cite{Simonyan14c}. To test this, we trained a model following the 16-layer
452
+ structure of~\cite{Simonyan14c} for the convolutional layers on each side of the network
453
+ (the final fc6-fc9 layers were the same as in Figure~\ref{fig:arch}).
454
+ We again
455
+ fine-tuned the representation on Pascal VOC using fast-rcnn, by transferring only the
456
+ conv layers, again following Kr{\"a}henb{\"u}hl et al.~\cite{krahenbuhl2015data} to
457
+ re-scale the transferred weights and
458
+ initialize the rest. As a baseline, we performed a similar experiment with the ImageNet-pretrained
459
+ 16-layer model of~\cite{Simonyan14c} (though we kept pre-trained fc layers
460
+ rather than re-initializing them),
461
+ and also by initializing the entire network with
462
+ K-means~\cite{krahenbuhl2015data}. Training time was considerably longer---about 8 weeks
463
+ on a Titan X GPU---but the network outperformed the AlexNet-style model by a considerable
464
+ margin. Note the model initialized with K-means performed roughly on par with the analogous
465
+ AlexNet model, suggesting that most of the boost came from the unsupervised pre-training.
466
+
467
+
468
+
469
+
470
+ \begin{figure*}
471
+ \begin{center}
472
+ \includegraphics[width=0.95\linewidth]{figs/discoveredv6.pdf}
473
+ \end{center}
474
+ \vspace{-0.1in}
475
+ \caption{Object clusters discovered by our algorithm. The number beside each cluster indicates its ranking, determined by the fraction of the top matches that geometrically verified. For all clusters, we show the raw top 7 matches that verified geometrically.
476
+ The full ranking is available on our project webpage.}
477
+ \vspace{-0.2in}
478
+ \label{fig:discovered}
479
+ \end{figure*}
480
+
481
+ \begin{table}
482
+
483
+ \setlength{\tabcolsep}{3pt}
484
+ \center
485
+ \definecolor{LightRed}{rgb}{1,.5,.5}
486
+ \begin{tabular}{l c c c c c}
487
+ \Xhline{2\arrayrulewidth}
488
+ &\multicolumn{2}{c}{Lower Better} & \multicolumn{3}{c}{Higher Better}\\
489
+ & Mean & Median & $11.25^{\circ}$ & $22.5^{\circ}$ & $30^{\circ}$\\
490
+ \hline
491
+ Scratch & 38.6 & 26.5 & 33.1 & 46.8 & 52.5 \\
492
+ Unsup. Tracking~\cite{wang2015unsupervised} & 34.2 & 21.9 & 35.7 & 50.6 & 57.0 \\
493
+ Ours & \textbf{33.2} & 21.3 & 36.0 & 51.2 & 57.8 \\
494
+ ImageNet Labels & 33.3 & \textbf{20.8} & \textbf{36.7} & \textbf{51.7} & \textbf{58.1} \\
495
+ \Xhline{2\arrayrulewidth}
496
+ \end{tabular}
497
+ \vspace{.05cm}
498
+ \caption{Accuracy on NYUv2.}
499
+ \label{tab:surf_norm}
500
+ \vspace{-.25cm}
501
+
502
+ \end{table}
503
+
504
+ \vspace{-0.05in}
505
+ \subsection{Geometry Estimation}
506
+ \vspace{-0.05in}
507
+
508
+ The results of Section~\ref{sec:obj_det} suggest that our representation is sensitive
509
+ to objects, even though it was not originally trained to find them. This raises the question:
510
+ Does our representation extract information that is useful for other, non-object-based tasks?
511
+ To find out, we fine-tuned our network to perform the surface normal estimation task on NYUv2 proposed in Fouhey et al.~\cite{Fouhey13a}, following the finetuning procedure of Wang et al.~\cite{wang2015unsupervised} (hence, we compare directly to the unsupervised pretraining results
512
+ reported there). We used the color-dropping network, restructuring the fully-connected
513
+ layers as in Section~\ref{sec:obj_det}. Surprisingly, our results are almost equivalent to
514
+ those obtained using a fully-labeled ImageNet model.
515
+ One possible explanation for this is that the ImageNet categorization task does relatively
516
+ little to encourage a network to pay attention to geometry, since the geometry is largely
517
+ irrelevant once an object is identified. Further evidence of this can be seen in the seventh row of
518
+ Figure~\ref{fig:nns}: the nearest neighbors for ImageNet AlexNet are all car wheels, but they are
519
+ not aligned well with the query patch.
520
+
521
+
522
+ \vspace{-0.05in}
523
+ \subsection{Visual Data Mining}
524
+ \label{sec:datamining}
525
+ \vspace{-0.05in}
526
+ Visual data mining~\cite{quack2008world,doersch2012makes,singh2012unsupervised,RematasCVPR15}, or unsupervised object discovery~\cite{sivic2005discovering,russell2006using,grauman2006unsupervised},
527
+ aims to use a large image collection to discover image fragments which happen to depict the same semantic objects.
528
+ Applications include dataset visualization, content-based retrieval, and tasks that require relating visual data to other unstructured information (e.g. GPS coordinates~\cite{doersch2012makes}).
529
+ For automatic data mining, our approach from Section~\ref{sec:nns} is inadequate:
530
+ although object patches match to similar objects, textures match just as readily to similar textures. Suppose, however, that we sampled two non-overlapping patches from the same object. Not only would the nearest neighbor lists for both patches share many images, but within those images, the nearest neighbors would be in roughly the same spatial configuration. For texture regions, on the other hand, the spatial configurations of the neighbors would be random, because the texture has no global layout.
531
+
532
+ To implement this, we first sample a constellation of four adjacent patches from an image (we use four to reduce the likelihood of a matching spatial arrangement happening by chance).
533
+ We find the top 100 images which have the strongest matches for all four patches, ignoring spatial layout.
534
+ We then use a type of geometric verification~\cite{chum2007total} to filter away the images where the four matches are not geometrically consistent.
535
+ Because our features are more semantically-tuned, we can use a much weaker type of geometric verification than~\cite{chum2007total}.
536
+ Finally, we rank the different constellations by counting the number of times the top 100 matches geometrically verify.
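+ At a high level, the mining loop can be sketched as follows; here `retrieve`
+ and `verified_fn` are hypothetical stand-ins for the nearest-neighbor index
+ and the geometric verification test (a sketch of the underlying square-fitting
+ test follows the implementation details below):
\begin{verbatim}
def rank_constellations(constellations, retrieve, verified_fn, top_k=100):
    """constellations: iterable of 4-patch groups sampled from single images.
    retrieve(patches, top_k) is assumed to return, for each of the top images,
    the matched locations of the four patches (layout ignored)."""
    scored = []
    for patches in constellations:
        candidates = retrieve(patches, top_k)
        votes = sum(1 for matches in candidates
                    if verified_fn(patches, matches))
        scored.append((votes, patches))    # rank by #verified among the top 100
    return sorted(scored, key=lambda item: item[0], reverse=True)
\end{verbatim}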
537
+
538
+
539
+
540
+
541
+
542
+
543
+
544
+
545
+
546
+
547
+
548
+
549
+
550
+ \noindent {\bf Implementation Details:} To compute whether a set of four matched patches geometrically verifies, we first compute the best-fitting square $S$ to the patch centers (via least-squares), while constraining the side of $S$ to be between $2/3$ and $4/3$ of the average side of the patches. We then compute the squared error of the patch centers relative to $S$ (normalized by dividing the sum-of-squared-errors by the square of the side of $S$). The set is geometrically verified if this normalized squared error is less than $1$. When sampling patches, we do not use any of the data augmentation preprocessing steps (e.g. downsampling). We use the color-dropping version of our network.
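+ A sketch of this verification test is given below; it assumes an axis-aligned
+ square and a fixed correspondence between the four patch centers and the
+ square's corners, details the text leaves open:
\begin{verbatim}
import numpy as np

def square_fit_verified(centers, patch_sides):
    """centers: 4x2 array of patch centers (TL, TR, BL, BR order assumed);
    patch_sides: the four patch side lengths."""
    corners = np.array([[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5], [0.5, 0.5]])
    d = centers - centers.mean(axis=0)             # least-squares square center
    s = float((d * corners).sum() / (corners ** 2).sum())  # least-squares side
    avg = float(np.mean(patch_sides))
    s = float(np.clip(s, 2.0 / 3.0 * avg, 4.0 / 3.0 * avg))  # constrain side
    err = float(((d - s * corners) ** 2).sum()) / s ** 2     # normalized SSE
    return err < 1.0
\end{verbatim}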
551
+
552
+
553
+ We applied the described mining algorithm to Pascal VOC 2011, with no pre-filtering of images and no additional labels.
554
+ We show some of the resulting patch clusters in Figure~\ref{fig:discovered}. The results are visually comparable to our previous work~\cite{doersch2014context}, although we discover a few objects that were not found in~\cite{doersch2014context}, such as monitors, birds, torsos, and plates of food. The discovery of birds and torsos---which are notoriously deformable---provides further evidence for the invariances our algorithm has learned.
555
+ We believe we have covered all objects discovered in~\cite{doersch2014context}, with the exception of (1) trusses and (2) railroad tracks without trains (though we do discover them with trains). For some objects like dogs, we discover more variety and rank the best ones higher. Furthermore, many of the clusters shown in~\cite{doersch2014context} depict gratings (14 out of the top 100), whereas none of ours do (though two of our top hundred depict diffuse gradients). As in~\cite{doersch2014context}, we often re-discover the same object multiple times with different viewpoints, which accounts for most of the gaps between ranks in Figure~\ref{fig:discovered}. The main disadvantages of our algorithm relative to~\cite{doersch2014context} are 1) some loss of purity, and 2) that we cannot currently determine an object mask automatically (although one could imagine dynamically adding more sub-patches to each proposed object).
556
+
557
+ \begin{figure}[t]
558
+ \begin{center}
559
+
560
+ \includegraphics[width=0.9\linewidth]{figs/paris.pdf}
561
+ \vspace{-.2cm}
562
+ \end{center}
563
+ \vspace{-0.05in}
564
+ \caption{Clusters discovered and automatically ranked via our algorithm (\S~\ref{sec:datamining}) from the Paris Street View dataset. }
565
+ \vspace{-0.1in}
566
+ \label{fig:paris}
567
+ \end{figure}
568
+
569
+
570
+ To ensure that our algorithm has not simply learned an object-centric representation due to the various biases~\cite{torralba11} in ImageNet, we also applied our algorithm to 15,000 Street View images from Paris (following~\cite{doersch2012makes}). The results in Figure~\ref{fig:paris} show that our representation captures scene layout and architectural elements. For this experiment, to rank clusters, we use the de-duplication procedure originally proposed in~\cite{doersch2012makes}.
571
+
572
+
573
+
574
+
575
+
576
+
577
+ \vspace{-0.15in}
578
+ \subsubsection{Quantitative Results} \label{quantitative}\vspace{-0.05in}
579
+ As part of the quantitative evaluation, we applied our algorithm to the subset of Pascal VOC 2007 selected in~\cite{singh2012unsupervised}: specifically, those containing at least one instance of \textit{bus}, \textit{dining table}, \textit{motorbike}, \textit{horse}, \textit{sofa}, or \textit{train}, and evaluated via a purity-coverage curve following~\cite{doersch2014context}. We select 1000 sets of 10 images each for evaluation. The evaluation then sorts the sets by \textit{purity}: the fraction of images in the cluster containing the same category. We generate the curve by walking down the ranking. For each point on the curve, we plot the average purity of all sets up to a given point in the ranking against \textit{coverage}: the fraction of images in the dataset that are contained in at least one of the sets up to that point. As shown in Figure~\ref{fig:purcov}, we have gained substantially in terms of coverage, suggesting increased invariance for our learned feature. However, we have also lost some highly-pure clusters compared to~\cite{doersch2014context}---which is not very surprising considering that our validation procedure is considerably simpler.
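+ The curve itself is straightforward to compute; the sketch below follows the
+ description above (each discovered set is assumed to be a list of
+ (image_id, category) pairs; this is illustrative, not the exact evaluation
+ script):
\begin{verbatim}
import numpy as np

def purity_coverage_curve(sets, num_images):
    def purity(s):
        cats = [cat for _, cat in s]
        return max(cats.count(c) for c in set(cats)) / len(cats)
    ranked = sorted(sets, key=purity, reverse=True)   # sort sets by purity
    covered, purities, curve = set(), [], []
    for s in ranked:                                  # walk down the ranking
        purities.append(purity(s))
        covered.update(img for img, _ in s)
        curve.append((len(covered) / num_images,      # coverage so far
                      float(np.mean(purities))))      # avg purity up to here
    return curve
\end{verbatim}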
580
+
581
+
582
+ \noindent {\bf Implementation Details:} We initialize 16,384 clusters by sampling patches, mining nearest neighbors, and geometric verification ranking as described above. The resulting clusters are highly redundant.
583
+ The cluster selection procedure of~\cite{doersch2014context} relies on a likelihood ratio score that is calibrated across clusters, which is not available to us.
584
+ To select clusters, we first select the top 10 geometrically-verified neighbors for each cluster. Then we iteratively select the highest-ranked cluster that contributes at least one image to our coverage score. When we run out of images that aren't included in the coverage score, we choose clusters to cover each image at least twice, and then three times, and so on.
585
+
586
+
587
+ \begin{figure}[t]
588
+ \begin{center}
589
+ \includegraphics[width=0.8\linewidth]{figs/purity_coverage.pdf}
590
+ \vspace{-.2in}
591
+ \end{center}
592
+ \caption{Purity vs coverage for objects discovered on a subset of Pascal VOC 2007. The numbers in the legend indicate area under the curve (AUC). In parentheses is the AUC up to a coverage of .5.}
593
+ \vspace{-.2in}
594
+ \label{fig:purcov}
595
+ \end{figure}
596
+
597
+
598
+
599
+
600
+
601
+
602
+
603
+
604
+
605
+
606
+
607
+
608
+
609
+
610
+
611
+
612
+
613
+
614
+
615
+
616
+
617
+
618
+
619
+
620
+ \subsection{Accuracy on the Relative Prediction Task}\label{pretext}
621
+ Can we improve the representation by further training on our relative prediction pretext task? To find out, we briefly analyze classification performance on the pretext task itself.
622
+ We sampled 500 random images from Pascal VOC 2007, sampled 256 pairs of patches from each, and classified them into the eight relative-position categories from Figure~\ref{fig:task}. This gave an accuracy of 38.4\%, where chance performance is 12.5\%, suggesting that the pretext task is quite hard (indeed, human performance on the task is similar).
623
+ To measure possible overfitting, we also ran the same experiment on ImageNet, which is the dataset we used for training. The network was 39.5\% accurate on the training set, and 40.3\% accurate on the validation set (which the network never saw during training), suggesting that little overfitting has occurred.
624
+
625
+ One possible reason why the pretext task is so difficult is because, for a large fraction of patches within each image, the task is almost impossible. Might the task be easiest for image regions corresponding to objects? To test this hypothesis, we repeated our experiment using only patches sampled from within Pascal object ground-truth bounding boxes. We select only those boxes that are at least 240 pixels on each side, and which are not labeled as truncated, occluded, or difficult. Surprisingly, this gave essentially the same accuracy of 39.2\%, and a similar experiment only on cars yielded 45.6\% accuracy. So, while our algorithm is sensitive to objects, it is almost as sensitive to the layout of the rest of the image.
626
+
627
+
628
+ \footnotesize \noindent {\bf Acknowledgements} We thank Xiaolong Wang and Pulkit Agrawal for help with baselines,
629
+ Berkeley and CMU vision group members for many fruitful discussions, and Jitendra Malik for putting gelato on the line. This work was partially supported by Google Graduate Fellowship to CD, ONR MURI N000141010934, Intel research grant, an NVidia hardware grant, and an Amazon Web Services grant.
630
+
631
+
632
+
633
+
634
+
635
+
636
+
637
+
638
+ {\footnotesize
639
+ \bibliographystyle{ieee}
640
+ \bibliography{egbib}
641
+ }
642
+
643
+ \end{document}
papers/1506/1506.00019.tex ADDED
@@ -0,0 +1,1412 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ \documentclass[]{article}
2
+
3
+
4
+ \usepackage{url}
5
+ \usepackage{graphicx} \usepackage{algorithm}
6
+ \usepackage{algorithmic}
7
+ \usepackage{amsthm}
8
+ \usepackage{amsmath}
9
+ \usepackage{amsfonts}
10
+ \usepackage{bbm}
11
+ \usepackage{titlesec}
12
+ \usepackage{caption}
13
+ \usepackage{subcaption}
14
+ \usepackage{natbib}
15
+ \bibliographystyle{plainnat}
16
+
17
+ \DeclareMathOperator{\sgn}{sgn}
18
+ \DeclareMathOperator{\argmin}{argmin}
19
+ \DeclareMathOperator{\softmax}{softmax}
20
+
21
+
22
+ \begin{document}
23
+
24
+ \title{A Critical Review of Recurrent Neural Networks for Sequence Learning}
25
+
26
+
27
+ \author{
28
+ Zachary C. Lipton\\
29
+ \texttt{zlipton@cs.ucsd.edu}
30
+ \and
31
+ John Berkowitz\\
32
+ \texttt{jaberkow@physics.ucsd.edu}
33
+ \and
34
+ Charles Elkan\\
35
+ \texttt{elkan@cs.ucsd.edu}}
36
+
37
+
38
+
39
+
40
+ \date{June 5th, 2015}
41
+ \maketitle
42
+
43
+
44
+
45
+ \begin{abstract}
46
+ Countless learning tasks require dealing with sequential data.
47
+ Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences.
48
+ In other domains, such as time series prediction, video analysis, and musical information retrieval,
49
+ a model must learn from inputs that are sequences.
50
+ Interactive tasks, such as translating natural language,
51
+ engaging in dialogue, and controlling a robot,
52
+ often demand both capabilities.
53
+ Recurrent neural networks (RNNs) are connectionist models
54
+ that capture the dynamics of sequences via cycles in the network of nodes.
55
+ Unlike standard feedforward neural networks, recurrent networks retain a state
56
+ that can represent information from an arbitrarily long context window.
57
+ Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters,
58
+ recent advances in network architectures, optimization techniques, and parallel computation
59
+ have enabled successful large-scale learning with them.
60
+ In recent years, systems based on long short-term memory (LSTM)
61
+ and bidirectional (BRNN) architectures
62
+ have demonstrated ground-breaking performance on tasks as varied
63
+ as image captioning, language translation, and handwriting recognition.
64
+ In this survey, we review and synthesize the research
65
+ that over the past three decades first yielded and then made practical these powerful learning models.
66
+ When appropriate, we reconcile conflicting notation and nomenclature.
67
+ Our goal is to provide a self-contained explication of the state of the art
68
+ together with a historical perspective and references to primary research.
69
+ \end{abstract}
70
+
71
+
72
+
73
+ \section{Introduction}
74
+ Neural networks are powerful learning models
75
+ that achieve state-of-the-art results in a wide range of supervised and unsupervised machine learning tasks.
76
+ They are suited especially well for machine perception tasks,
77
+ where the raw underlying features are not individually interpretable.
78
+ This success is attributed to their ability to learn hierarchical representations,
79
+ unlike traditional methods that rely upon hand-engineered features \citep{farabet2013learning}.
80
+ Over the past several years, storage has become more affordable, datasets have grown far larger,
81
+ and the field of parallel computing has advanced considerably.
82
+ In the setting of large datasets, simple linear models tend to under-fit,
83
+ and often under-utilize computing resources.
84
+ Deep learning methods, in particular those based on {deep belief networks} (DBNs),
85
+ which are greedily built by stacking restricted Boltzmann machines,
86
+ and convolutional neural networks,
87
+ which exploit the local dependency of visual information,
88
+ have demonstrated record-setting results on many important applications.
89
+
90
+ However, despite their power,
91
+ standard neural networks have limitations.
92
+ Most notably, they rely on the assumption of independence among the training and test examples.
93
+ After each example (data point) is processed, the entire state of the network is lost.
94
+ If each example is generated independently, this presents no problem.
95
+ But if data points are related in time or space, this is unacceptable.
96
+ Frames from video, snippets of audio, and words pulled from sentences,
97
+ represent settings where the independence assumption fails.
98
+ Additionally, standard networks generally rely on examples being vectors of fixed length.
99
+ Thus it is desirable to extend these powerful learning tools
100
+ to model data with temporal or sequential structure and varying length inputs and outputs,
101
+ especially in the many domains where neural networks are already the state of the art.
102
+ Recurrent neural networks (RNNs) are connectionist models
103
+ with the ability to selectively pass information across sequence steps,
104
+ while processing sequential data one element at a time.
105
+ Thus they can model input and/or output
106
+ consisting of sequences of elements that are not independent.
107
+ Further, recurrent neural networks can simultaneously model sequential and time dependencies
108
+ on multiple scales.
109
+
110
+ In the following subsections, we explain the fundamental reasons
111
+ why recurrent neural networks are worth investigating.
112
+ To be clear, we are motivated by a desire to achieve empirical results.
113
+ This motivation warrants clarification because recurrent networks have roots
114
+ in both cognitive modeling and supervised machine learning.
115
+ Owing to this difference of perspectives, many published papers have different aims and priorities.
116
+ In many foundational papers, generally published in cognitive science and computational neuroscience journals,
117
+ such as \citep{hopfield1982neural, jordan1997serial, elman1990finding},
118
+ biologically plausible mechanisms are emphasized.
119
+ In other papers \citep{schuster1997bidirectional, socher2014grounded, karpathy2014deep},
120
+ biological inspiration is downplayed in favor of achieving empirical results
121
+ on important tasks and datasets.
122
+ This review is motivated by practical results rather than biological plausibility,
123
+ but where appropriate, we draw connections to relevant concepts in neuroscience.
124
+ Given the empirical aim, we now address three significant questions
125
+ that one might reasonably want answered before reading further.
126
+
127
+ \subsection{Why model sequentiality explicitly?}
128
+
129
+ In light of the practical success and economic value of sequence-agnostic models,
130
+ this is a fair question.
131
+ Support vector machines, logistic regression, and feedforward networks
132
+ have proved immensely useful without explicitly modeling time.
133
+ Arguably, it is precisely the assumption of independence
134
+ that has led to much recent progress in machine learning.
135
+ Further, many models implicitly capture time by concatenating each input
136
+ with some number of its immediate predecessors and successors,
137
+ presenting the machine learning model
138
+ with a sliding window of context about each point of interest.
139
+ This approach has been used with deep belief nets for speech modeling by \citet{maas2012recurrent}.
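+
+ To make the sliding-window construction concrete, the following short NumPy sketch
+ (our own illustration, not taken from any of the cited papers) builds a fixed-length
+ context window around each point of a univariate series; the window radius and the
+ zero-padding at the boundaries are arbitrary choices.
+ \begin{verbatim}
+ import numpy as np
+
+ def sliding_windows(x, radius=2):
+     """Concatenate each point with `radius` predecessors and successors.
+
+     x is a 1-D array of length T. Returns a (T, 2*radius + 1) feature
+     matrix, zero-padding the boundaries so every step gets a full window.
+     """
+     T = len(x)
+     padded = np.concatenate([np.zeros(radius), x, np.zeros(radius)])
+     return np.stack([padded[t:t + 2 * radius + 1] for t in range(T)])
+
+ x = np.arange(6, dtype=float)        # a toy series: 0, 1, 2, 3, 4, 5
+ print(sliding_windows(x, radius=2))  # each row is a length-5 context window
+ \end{verbatim}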
140
+
141
+ Unfortunately, despite the usefulness of the independence assumption,
142
+ it precludes modeling long-range dependencies.
143
+ For example, a model trained using a finite-length context window of length $5$
144
+ could never be trained to answer the simple question,
145
+ \emph{``what was the data point seen six time steps ago?"}
146
+ For a practical application such as call center automation,
147
+ such a limited system might learn to route calls,
148
+ but could never participate with complete success in an extended dialogue.
149
+ Since the earliest conception of artificial intelligence,
150
+ researchers have sought to build systems that interact with humans in time.
151
+ In Alan Turing's groundbreaking paper \emph{Computing Machinery and Intelligence},
152
+ he proposes an ``imitation game" which judges a machine's intelligence by
153
+ its ability to convincingly engage in dialogue \citep{turing1950computing}.
154
+ Besides dialogue systems, modern interactive systems of economic importance include
155
+ self-driving cars and robotic surgery, among others.
156
+ Without an explicit model of sequentiality or time,
157
+ it seems unlikely that any combination of classifiers or regressors
158
+ can be cobbled together to provide this functionality.
159
+
160
+
161
+ \subsection{Why not use Markov models?}
162
+
163
+ Recurrent neural networks are not the only models capable of representing time dependencies.
164
+ Markov chains, which model transitions between states in an observed sequence,
165
+ were first described by the mathematician Andrey Markov in 1906.
166
+ Hidden Markov models (HMMs), which model an observed sequence
167
+ as probabilistically dependent upon a sequence of unobserved states,
168
+ were described in the 1950s and have been widely studied since the 1960s \citep{stratonovich1960conditional}.
169
+ However, traditional Markov model approaches are limited because their states
170
+ must be drawn from a modestly sized discrete state space $S$.
171
+ The dynamic programming algorithm that is used to perform efficient inference with hidden Markov models
172
+ scales in time $O(|S|^2)$ \citep{viterbi1967error}.
173
+ Further, the transition table capturing the probability of moving between any two time-adjacent states is of size $|S|^2$.
174
+ Thus, standard operations become infeasible with an HMM
175
+ when the set of possible hidden states grows large.
176
+ Further, each hidden state can depend only on the immediately previous state.
177
+ While it is possible to extend a Markov model
178
+ to account for a larger context window
179
+ by creating a new state space equal to the cross product
180
+ of the possible states at each time in the window,
181
+ this procedure grows the state space exponentially with the size of the window,
182
+ rendering Markov models computationally impractical for modeling long-range dependencies \citep{graves2014neural}.
183
+
184
+ Given the limitations of Markov models,
185
+ we ought to explain why it is reasonable that connectionist models,
186
+ i.e., artificial neural networks, should fare better.
187
+ First, recurrent neural networks can capture long-range time dependencies,
188
+ overcoming the chief limitation of Markov models.
189
+ This point requires a careful explanation.
190
+ As in Markov models, any state in a traditional RNN depends only on the current input
191
+ as well as on the state of the network at the previous time step.\footnote{
192
+ While traditional RNNs only model the dependence of the current state on the previous state,
193
+ bidirectional recurrent neural networks (BRNNs) \citep{schuster1997bidirectional} extend RNNs
194
+ to model dependence on both past states and future states.
195
+ }
196
+ However, the hidden state at any time step can contain information
197
+ from a nearly arbitrarily long context window.
198
+ This is possible because the number of distinct states
199
+ that can be represented in a hidden layer of nodes
200
+ grows exponentially with the number of nodes in the layer.
201
+ Even if each node took only binary values,
202
+ the network could represent $2^N$ states
203
+ where $N$ is the number of nodes in the hidden layer.
204
+ When the value of each node is a real number,
205
+ a network can represent even more
206
+ distinct states.
207
+ While the potential expressive power of a network grows exponentially with the number of nodes,
208
+ the complexity of both inference and training grows at most quadratically.
209
+
210
+
211
+ \subsection{Are RNNs too expressive?}
212
+
213
+ Finite-sized RNNs with nonlinear activations are a rich family of models,
214
+ capable of nearly arbitrary computation.
215
+ A well-known result is that a finite-sized recurrent neural network with sigmoidal activation functions
216
+ can simulate a universal Turing machine \citep{siegelmann1991turing}.
217
+ The capability of RNNs to perform arbitrary computation
218
+ demonstrates their expressive power,
219
+ but one could argue that the C programming language
220
+ is equally capable of expressing arbitrary programs.
221
+ And yet there are no papers claiming that the invention of C
222
+ represents a panacea for machine learning.
223
+ A fundamental reason is there is no simple way of efficiently exploring the space of C programs.
224
+ In particular, there is no general way to calculate the gradient of an arbitrary C program
225
+ to minimize a chosen loss function.
226
+ Moreover, given any finite dataset,
227
+ there exist countless programs which overfit the dataset,
228
+ generating desired training output but failing to generalize to test examples.
229
+
230
+ Why then should RNNs suffer less from similar problems?
231
+ First, given any fixed architecture (set of nodes, edges, and activation functions),
232
+ the recurrent neural networks with this architecture are differentiable end to end.
233
+ The derivative of the loss function can be calculated with respect
234
+ to each of the parameters (weights) in the model.
235
+ Thus, RNNs are amenable to gradient-based training.
236
+ Second, while the Turing-completeness of RNNs is an impressive property,
237
+ given a fixed-size RNN with a specific architecture,
238
+ it is not actually possible to reproduce any arbitrary program.
239
+ Further, unlike a program composed in C,
240
+ a recurrent neural network can be regularized via standard techniques
241
+ that help prevent overfitting,
242
+ such as weight decay, dropout, and limiting the degrees of freedom.
243
+
244
+ \subsection{Comparison to prior literature}
245
+
246
+ The literature on recurrent neural networks can seem impenetrable to the uninitiated.
247
+ Shorter papers assume familiarity with a large body of background literature,
248
+ while diagrams are frequently underspecified,
249
+ failing to indicate which edges span time steps and which do not.
250
+ Jargon abounds, and notation is inconsistent across papers
251
+ or overloaded within one paper.
252
+ Readers are frequently in the unenviable position
253
+ of having to synthesize conflicting information across many papers in order to understand just one.
254
+ For example, in many papers subscripts index both nodes and time steps.
255
+ In others, $h$ simultaneously stands for a link function and a layer of hidden nodes.
256
+ The variable $t$ simultaneously stands for both time indices and targets, sometimes in the same equation.
257
+ Many excellent research papers have appeared recently,
258
+ but clear reviews of the recurrent neural network literature are rare.
259
+
260
+ Among the most useful resources are
261
+ a recent book on supervised sequence labeling
262
+ with recurrent neural networks \citep{graves2012supervised}
263
+ and an earlier doctoral thesis \citep{gers2001long}.
264
+ A recent survey covers recurrent neural nets for language modeling \citep{de2015survey}.
265
+ Various authors focus on specific technical aspects;
266
+ for example \citet{pearlmutter1995gradient} surveys gradient calculations in continuous time recurrent neural networks.
267
+ In the present review paper, we aim to provide a readable, intuitive, consistently notated,
268
+ and reasonably comprehensive but selective survey of research
269
+ on recurrent neural networks for learning with sequences.
270
+ We emphasize architectures, algorithms, and results,
271
+ but we aim also to distill the intuitions
272
+ that have guided this largely heuristic and empirical field.
273
+ In addition to concrete modeling details,
274
+ we offer qualitative arguments, a historical perspective,
275
+ and comparisons to alternative methodologies where appropriate.
276
+
277
+
278
+ \section{Background}
279
+
280
+ This section introduces formal notation and provides a brief background on neural networks in general.
281
+
282
+ \subsection{Sequences}
283
+
284
+ The input to an RNN is a sequence, and/or its target is a sequence.
285
+ An input sequence can be denoted
286
+ $(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, ... , \boldsymbol{x}^{(T)})$
287
+ where each data point $\boldsymbol{x}^{(t)}$ is a real-valued vector.
288
+ Similarly, a target sequence can be denoted $(\boldsymbol{y}^{(1)}, \boldsymbol{y}^{(2)}, ... , \boldsymbol{y}^{(T)})$.
289
+ A training set typically is a set of examples where each example is an (input sequence, target sequence) pair,
290
+ although commonly either the input or the output may be a single data point. Sequences may be of finite or countably infinite length.
291
+ When they are finite, the maximum time index of the sequence is called $T$.
292
+ RNNs are not limited to time-based sequences.
293
+ They have been used successfully on non-temporal sequence data,
294
+ including genetic data \citep{baldi2003principled}.
295
+ However, in many important applications of RNNs,
296
+ the sequences have an explicit or implicit temporal aspect.
297
+ While we often refer to time in this survey,
298
+ the methods described here are applicable to non-temporal as well as to temporal tasks.
299
+
300
+ Using temporal terminology, an input sequence consists of
301
+ data points $\boldsymbol{x}^{(t)}$ that arrive
302
+ in a discrete {sequence} of \emph{time steps} indexed by $t$.
303
+ A target sequence consists of data points $\boldsymbol{y}^{(t)}$.
304
+ We use superscripts with parentheses for time, and not subscripts,
305
+ to prevent confusion between sequence steps and indices of nodes in a network.
306
+ When a model produces predicted data points, these are labeled $\hat{\boldsymbol{y}}^{(t)}$.
307
+
308
+ The time-indexed data points may be equally spaced samples from a continuous real-world process.
309
+ Examples include the still images that comprise the frames of videos
310
+ or the discrete amplitudes sampled at fixed intervals that comprise audio recordings.
311
+ The time steps may also be ordinal, with no exact correspondence to durations.
312
+ In fact, RNNs are frequently applied to domains
313
+ where sequences have a defined order but no explicit notion of time.
314
+ This is the case with natural language.
315
+ In the word sequence ``John Coltrane plays the saxophone",
316
+ $\boldsymbol{x}^{(1)} = \text{John}$, $\boldsymbol{x}^{(2)} = \text{Coltrane}$, etc.
317
+
318
+
319
+ \subsection{Neural networks}
320
+
321
+ Neural networks are biologically inspired models of computation.
322
+ Generally, a neural network consists of a set of \emph{artificial neurons},
323
+ commonly referred to as \emph{nodes} or \emph{units},
324
+ and a set of directed edges between them,
325
+ which intuitively represent the \emph{synapses}
326
+ in a biological neural network.
327
+ Associated with each neuron $j$ is an activation function $l_j(\cdot)$,
328
+ which is sometimes called a link function.
329
+ We use the notation $l_j$ and not $h_j$, unlike some other papers,
330
+ to distinguish the activation function from the values of the hidden nodes in a network,
331
+ which, as a vector, is commonly notated $\boldsymbol{h}$ in the literature.
332
+
333
+ Associated with each edge from node $j'$ to $j$ is a weight $w_{jj'}$.
334
+ Following the convention adopted in several foundational papers
335
+ \citep{hochreiter1997long, gers2000learning, gers2001long, sutskever2011generating},
336
+ we index neurons with $j$ and $j'$,
337
+ and $w_{jj'}$ denotes the ``to-from" weight corresponding to the directed edge to node $j$ from node $j'$.
338
+ It is important to note that in many references
339
+ the indices are flipped and $w_{j'j} \neq w_{jj'}$
340
+ denotes the ``from-to" weight on the directed edge from the node $j'$ to the node $j$,
341
+ as in lecture notes by \citet{elkanlearningmeaning} and in \citet{wiki:backpropagation}.
342
+
343
+ \begin{figure}
344
+ \centering
345
+ \includegraphics[width=.6\linewidth]{artificial-neuron.png}
346
+ \caption{An artificial neuron computes a nonlinear function of a weighted sum of its inputs.}
347
+ \label{fig:artificial-neuron}
348
+ \end{figure}
349
+
350
+ The value $v_j$ of each neuron $j$ is calculated
351
+ by applying its activation function to a weighted sum of the values of its input nodes (Figure~\ref{fig:artificial-neuron}):
352
+ $$v_j = l_j \left( \sum_{j'} w_{jj'} \cdot v_{j'} \right) .$$
353
+ For convenience, we term the weighted sum inside the parentheses
354
+ the \emph{incoming activation} and notate it as $a_j$.
355
+ We represent this computation in diagrams by depicting
356
+ neurons as circles and edges as arrows connecting them.
357
+ When appropriate, we indicate the exact activation function with a symbol, e.g., $\sigma$ for sigmoid.
358
+
359
+ Common choices for the activation function include the sigmoid $\sigma(z) = 1/(1 + e^{-z})$
360
+ and the $\textit{tanh}$ function $\phi(z) = (e^{z} - e^{-z})/(e^{z} + e^{-z})$.
361
+ The latter has become common in feedforward neural nets
362
+ and was applied to recurrent nets by \citet{sutskever2011generating}.
363
+ Another activation function which has become prominent in deep learning research
364
+ is the rectified linear unit (ReLU) whose formula is $l_j(z) = \max(0,z)$.
365
+ This type of unit has been demonstrated to improve the performance of many deep neural networks
366
+ \citep{nair2010rectified, maas2012recurrent, zeiler2013rectified}
367
+ on tasks as varied as speech processing and object recognition,
368
+ and has been used in recurrent neural networks by \citet{bengio2013advances}.
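+
+ As a minimal sketch (with made-up weights and inputs), the activation functions above
+ and the computation of a single neuron's value can be written in NumPy as follows.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ def tanh(z):
+     return np.tanh(z)            # (e^z - e^-z) / (e^z + e^-z)
+
+ def relu(z):
+     return np.maximum(0.0, z)    # rectified linear unit
+
+ # Value of one neuron j: activation applied to the weighted sum of inputs.
+ v_in = np.array([0.5, -1.0, 2.0])   # values v_{j'} of the input nodes
+ w = np.array([0.1, 0.4, -0.3])      # "to-from" weights w_{jj'}
+ a_j = w @ v_in                      # incoming activation a_j
+ print(sigmoid(a_j), tanh(a_j), relu(a_j))
+ \end{verbatim}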
369
+
370
+ The activation function at the output nodes depends upon the task.
371
+ For multiclass classification with $K$ alternative classes,
372
+ we apply a $\softmax$ nonlinearity in an output layer of $K$ nodes.
373
+ The $\softmax$ function calculates
374
+ $$
375
+ \hat{y}_k = \frac{e^{a_k} }{ \sum_{k'=1}^{K} e^{ {a_{k'}} }} \mbox{~for~} k=1 \mbox{~to~} k=K.
376
+ $$
377
+ The denominator is a normalizing term consisting of the sum of the numerators,
378
+ ensuring that the outputs of all nodes sum to one.
379
+ For multilabel classification the activation function is simply a point-wise sigmoid,
380
+ and for regression we typically have linear output.
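+
+ For concreteness, here is a small NumPy sketch of the softmax output layer; the
+ max-subtraction inside the exponent is a standard numerical-stability trick that
+ leaves the result unchanged.
+ \begin{verbatim}
+ import numpy as np
+
+ def softmax(a):
+     """Map incoming activations a_1..a_K to probabilities summing to one."""
+     e = np.exp(a - np.max(a))    # subtracting the max avoids overflow
+     return e / e.sum()
+
+ a = np.array([2.0, 1.0, 0.1])    # activations at K = 3 output nodes
+ y_hat = softmax(a)
+ print(y_hat, y_hat.sum())        # probabilities that sum to 1.0
+ \end{verbatim}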
381
+
382
+
383
+ \begin{figure}
384
+ \centering
385
+ \includegraphics[width=.7\linewidth]{feed-forward.png}
386
+ \caption{A feedforward neural network.
387
+ An example is presented to the network by setting the values of the blue (bottom) nodes.
388
+ The values of the nodes in each layer are computed successively as a function of the prior layers
389
+ until output is produced at the topmost layer.}
390
+ \label{fig:feedforward-neural-network}
391
+ \end{figure}
392
+
393
+ \subsection{Feedforward networks and backpropagation}
394
+
395
+ With a neural model of computation, one must determine the order in which computation should proceed.
396
+ Should nodes be sampled one at a time and updated,
397
+ or should the value of all nodes be calculated at once and then all updates applied simultaneously?
398
+ Feedforward networks (Figure~\ref{fig:feedforward-neural-network}) are a restricted class of networks which
399
+ deal with this problem by forbidding cycles in the directed graph of nodes.
400
+ Given the absence of cycles, all nodes can be arranged into layers,
401
+ and the outputs in each layer can be calculated given the outputs from the lower layers.
402
+
403
+ The input $\boldsymbol{x}$ to a feedforward network is provided
404
+ by setting the values of the lowest layer.
405
+ Each higher layer is then successively computed
406
+ until output is generated at the topmost layer $\boldsymbol{\hat{y}}$.
407
+ Feedforward networks are frequently used for supervised learning tasks
408
+ such as classification and regression.
409
+ Learning is accomplished by iteratively updating each of the weights to minimize a loss function,
410
+ $\mathcal{L}(\boldsymbol{\hat{y}},\boldsymbol{y})$,
411
+ which penalizes the distance between the output $\boldsymbol{\hat{y}}$
412
+ and the target $\boldsymbol{y}$.
413
+
414
+
415
+
416
+ The most successful algorithm for training neural networks is backpropagation,
417
+ introduced for this purpose by \citet{rumelhart1985learning}.
418
+ Backpropagation uses the chain rule to calculate the derivative of the loss function $\mathcal{L}$
419
+ with respect to each parameter in the network.
420
+ The weights are then adjusted by gradient descent.
421
+ Because the loss surface is non-convex, there is no assurance
422
+ that backpropagation will reach a global minimum.
423
+ Moreover, exact optimization is known to be an NP-hard problem.
424
+ However, a large body of work on heuristic pre-training and optimization techniques
425
+ has led to impressive empirical success on many supervised learning tasks.
426
+ In particular, convolutional neural networks, popularized by \citet{le1990handwritten},
427
+ are a variant of the feedforward neural network that has held records since 2012
428
+ in many computer vision tasks such as object detection \citep{krizhevsky2012imagenet}.
429
+
430
+ Nowadays, neural networks are usually trained with stochastic gradient descent (SGD) using mini-batches.
431
+ With batch size equal to one,
432
+ the stochastic gradient update equation is
433
+ $$\boldsymbol{w} \gets \boldsymbol{w} - \eta \nabla_{\boldsymbol{w}} F_i$$
434
+ where $\eta$ is the learning rate
435
+ and $\nabla_{\boldsymbol{w}} F_i$ is the gradient of the objective function
436
+ with respect to the parameters $\boldsymbol{w}$ as calculated on a single example $(x_i, y_i)$.
437
+ Many variants of SGD are used to accelerate learning.
438
+ Some popular heuristics, such as AdaGrad \citep{duchi2011adaptive}, AdaDelta \citep{zeiler2012adadelta},
439
+ and RMSprop \citep{rmsprop}, tune the learning rate adaptively for each feature.
440
+ AdaGrad, arguably the most popular,
441
+ adapts the learning rate by caching the sum of squared gradients
442
+ with respect to each parameter at each time step.
443
+ The step size for each feature is multiplied by the inverse of the square root of this cached value.
444
+ AdaGrad leads to fast convergence on convex error surfaces,
445
+ but because the cached sum is monotonically increasing, the method has a monotonically decreasing learning rate,
446
+ which may be undesirable on highly non-convex loss surfaces.
447
+ RMSprop modifies AdaGrad by introducing a decay factor in the cache,
448
+ changing the monotonically growing value into a moving average.
449
+ Momentum methods are another common SGD variant used to train neural networks.
450
+ These methods add to each update a decaying sum of the previous updates.
451
+ When the momentum parameter is tuned well and the network is initialized well,
452
+ momentum methods can train deep nets and recurrent nets
453
+ competitively with more computationally expensive methods like the Hessian-free optimizer of \citet{sutskever2013importance}.
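+
+ The update rules sketched above fit in a few lines of NumPy. The snippet below
+ (an illustration under arbitrary hyperparameter values, with \texttt{grad} standing
+ in for the gradient computed on one example or mini-batch) contrasts plain SGD,
+ AdaGrad, and RMSprop.
+ \begin{verbatim}
+ import numpy as np
+
+ eta, eps, decay = 0.01, 1e-8, 0.9      # illustrative hyperparameters
+ w = np.zeros(5)                        # parameters
+ cache = np.zeros_like(w)               # per-parameter squared-gradient cache
+
+ def sgd_step(w, grad):
+     return w - eta * grad
+
+ def adagrad_step(w, grad, cache):
+     cache = cache + grad ** 2          # monotonically growing sum
+     return w - eta * grad / (np.sqrt(cache) + eps), cache
+
+ def rmsprop_step(w, grad, cache):
+     cache = decay * cache + (1 - decay) * grad ** 2   # moving average instead
+     return w - eta * grad / (np.sqrt(cache) + eps), cache
+
+ grad = np.random.randn(5)              # stands in for a real gradient
+ w, cache = adagrad_step(w, grad, cache)
+ \end{verbatim}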
454
+
455
+ To calculate the gradient in a feedforward neural network, backpropagation proceeds as follows.
456
+ First, an example is propagated forward through the network to produce a
457
+ value $v_j$ at each node and outputs $\boldsymbol{\hat{y}}$ at the topmost layer.
458
+ Then, a loss function value $\mathcal{L}(\hat{y}_k, y_k)$ is computed at each output node $k$.
459
+ Subsequently, for each output node $k$, we calculate
460
+ $$\delta_k = \frac{\partial \mathcal{L}(\hat{y}_k, y_k)}{\partial \hat{y}_k} \cdot l_k'(a_k).$$
461
+ Given these values $\delta_k$, for each node in the immediately prior layer we calculate
462
+ $$\delta_j = l_j'(a_j) \sum_{k} \delta_k \cdot w_{kj}.$$
463
+ This calculation is performed successively for each lower layer to yield
464
+ $\delta_j$ for every node $j$ given the $\delta$ values for each node connected to $j$ by an outgoing edge.
465
+ Each value $\delta_j$ represents the derivative $\partial \mathcal{L}/{\partial a_j}$ of the total loss function
466
+ with respect to that node's {incoming activation}.
467
+ Given the values $v_j$ calculated during the forward pass,
468
+ and the values $\delta_j$ calculated during the backward pass,
469
+ the derivative of the loss $\mathcal{L}$ with respect to a given parameter $w_{jj'}$ is
470
+ $$\frac{\partial \mathcal{L}}{\partial w_{jj'}}= \delta_j v_{j'}.$$
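+
+ To make the recursion concrete, here is a minimal NumPy sketch of backpropagation
+ for a network with one hidden layer, sigmoid activations, and a squared-error loss;
+ the shapes, the loss, and the random weights are our own illustrative choices, and
+ only the structure of the computation mirrors the equations above.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ rng = np.random.default_rng(0)
+ x, y = rng.normal(size=3), rng.normal(size=2)   # one example and its target
+ W1 = rng.normal(size=(4, 3))                    # "to-from" weights, layer 1
+ W2 = rng.normal(size=(2, 4))                    # "to-from" weights, layer 2
+
+ # Forward pass: incoming activations a, node values v.
+ a1 = W1 @ x;  v1 = sigmoid(a1)
+ a2 = W2 @ v1; y_hat = sigmoid(a2)
+
+ # Backward pass: each delta is dL/da at its layer (squared-error loss).
+ delta2 = (y_hat - y) * y_hat * (1 - y_hat)      # dL/dy_hat times l'(a_k)
+ delta1 = (W2.T @ delta2) * v1 * (1 - v1)        # sum_k delta_k w_kj times l'(a_j)
+
+ # Gradients: dL/dw_{jj'} = delta_j * v_{j'}.
+ grad_W2 = np.outer(delta2, v1)
+ grad_W1 = np.outer(delta1, x)
+ \end{verbatim}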
471
+
472
+
473
+ Other methods have been explored for learning the weights in a neural network.
474
+ A number of papers from the 1990s \citep{belew1990evolving, gruau1994neural}
475
+ championed the idea of learning neural networks with genetic algorithms,
476
+ with some even claiming that achieving success on real-world problems
477
+ by applying many small changes to the weights of a network was impossible.
478
+ Despite the subsequent success of backpropagation, interest in genetic algorithms continues.
479
+ Several recent papers explore genetic algorithms for neural networks,
480
+ especially as a means of learning the architecture of neural networks,
481
+ a problem not addressed by backpropagation \citep{bayer2009evolving, harp2013optimizing}.
482
+ By \emph{architecture} we mean the number of layers,
483
+ the number of nodes in each, the connectivity pattern among the layers, the choice of activation functions, etc.
484
+
485
+ One open question in neural network research is how to exploit sparsity in training.
486
+ In a neural network with sigmoidal or $\textit{tanh}$ activation functions,
487
+ the nodes in each layer never take value exactly zero.
488
+ Thus, even if the inputs are sparse, the nodes at each hidden layer are not.
489
+ However, rectified linear units (ReLUs) introduce sparsity to hidden layers \citep{glorot2011deep}.
490
+ In this setting, a promising path may be to store the sparsity pattern when computing each layer's values
491
+ and use it to speed up computation of the next layer in the network.
492
+ Some recent work
493
+ shows that given sparse inputs to a linear model with a standard regularizer,
494
+ sparsity can be fully exploited even when regularization makes the gradient non-sparse
495
+ \citep{carpenter2008lazy, langford2009sparse, singer2009efficient, lipton2015efficient}.
496
+
497
+
498
+ \section{Recurrent neural networks}
499
+
500
+ \begin{figure}[t]
501
+ \centering
502
+ \includegraphics[width=.7\linewidth]{simple-recurrent.png}
503
+ \caption{A simple recurrent network.
504
+ At each time step $t$, activation is passed along solid edges as in a feedforward network.
505
+ Dashed edges connect a source node at each time $t$ to a target node at each following time $t+1$.}
506
+ \label{fig:simple-recurrent}
507
+ \end{figure}
508
+
509
+ Recurrent neural networks are feedforward neural networks augmented by the inclusion of edges that span adjacent time steps,
510
+ introducing a notion of time to the model.
511
+ Like feedforward networks, RNNs may not have cycles among conventional edges.
512
+ However, edges that connect adjacent time steps,
513
+ called recurrent edges, may form cycles, including cycles of length one
514
+ that are self-connections from a node to itself across time.
515
+ At time $t$, nodes with recurrent edges
516
+ receive {input} from the current data point $\boldsymbol{x}^{(t)}$
517
+ and also from hidden node values $\boldsymbol{h}^{(t-1)}$ in the network's previous state.
518
+ The output $\boldsymbol{\hat{y}}^{(t)}$ at each time $t$ is calculated
519
+ given the hidden node values $\boldsymbol{h}^{(t)}$ at time $t$.
520
+ Input $\boldsymbol{x}^{(t-1)}$ at time $t-1$
521
+ can influence the output $\boldsymbol{\hat{y}}^{(t)}$ at time $t$ and later
522
+ by way of the recurrent connections.
523
+
524
+ Two equations specify all calculations necessary for computation at each time step on the forward pass
525
+ in a simple recurrent neural network as in Figure~\ref{fig:simple-recurrent}:
526
+ $$
527
+ \boldsymbol{h}^{(t)} = \sigma(W^{\mbox{hx}} \boldsymbol{x}^{(t)} + W^{\mbox{hh}} \boldsymbol{h}^{(t-1)} +\boldsymbol{b}_h )
528
+ $$
529
+ $$
530
+ \hat{\boldsymbol{y}}^{(t)} = \softmax ( W^{\mbox{yh}} \boldsymbol{h}^{(t)} + \boldsymbol{b}_y).
531
+ $$
532
+ Here $W^{\mbox{hx}}$ is the matrix of conventional weights between the input and the hidden layer
533
+ and $W^{\mbox{hh}}$ is the matrix of recurrent weights between the hidden layer and itself at adjacent time steps.
534
+ The vectors $\boldsymbol{b}_h$ and $\boldsymbol{b}_y$ are bias parameters which allow each node to learn an offset.
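+
+ A direct NumPy transcription of these two equations, run over a short input sequence,
+ might look as follows; the dimensions, the random initialization, and names such as
+ \texttt{W\_hx} are illustrative only.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ def softmax(a):
+     e = np.exp(a - np.max(a))
+     return e / e.sum()
+
+ rng = np.random.default_rng(0)
+ d_in, d_h, d_out, T = 4, 8, 3, 5               # illustrative sizes
+ W_hx = rng.normal(scale=0.1, size=(d_h, d_in))
+ W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
+ W_yh = rng.normal(scale=0.1, size=(d_out, d_h))
+ b_h, b_y = np.zeros(d_h), np.zeros(d_out)
+
+ xs = rng.normal(size=(T, d_in))    # an input sequence x^(1), ..., x^(T)
+ h = np.zeros(d_h)                  # hidden state, h^(0) = 0
+ for x_t in xs:
+     h = sigmoid(W_hx @ x_t + W_hh @ h + b_h)
+     y_hat = softmax(W_yh @ h + b_y)            # prediction at time t
+ \end{verbatim}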
535
+
536
+
537
+
538
+ \begin{figure}[t]
539
+ \centering
540
+ \includegraphics[width=.7\linewidth]{unfolded-rnn.png}
541
+ \caption{The recurrent network of Figure~\ref{fig:simple-recurrent} unfolded across time steps.}
542
+ \label{fig:unfolded-rnn}
543
+ \end{figure}
544
+
545
+
546
+
547
+ The dynamics of the network depicted in Figure~\ref{fig:simple-recurrent} across time steps can be visualized
548
+ by \emph{unfolding} it as in Figure~\ref{fig:unfolded-rnn}.
549
+ Given this picture, the network can be interpreted not as cyclic, but rather as a deep network
550
+ with one layer per time step and shared weights across time steps.
551
+ It is then clear that the unfolded network can be trained across many time steps using backpropagation.
552
+ This algorithm, called \emph{backpropagation through time} (BPTT),
553
+ was introduced by \citet{werbos1990backpropagation}.
554
+ All recurrent networks in common use today apply it.
555
+
556
+ \begin{figure}[t]
557
+ \centering
558
+ \includegraphics[width=.7\linewidth]{jordan-network.png}
559
+ \caption{A recurrent neural network as proposed by \citet{jordan1997serial}.
560
+ Output units are connected to special units that at the next time step
561
+ feed into themselves and into hidden units.}
562
+ \label{fig:jordan-network}
563
+ \end{figure}
564
+
565
+ \begin{figure}[t]
566
+ \centering
567
+ \includegraphics[width=.7\linewidth]{elman-network.png}
568
+ \caption{A recurrent neural network as described by \citet{elman1990finding}.
569
+ Hidden units are connected to context units,
570
+ which feed back into the hidden units at the next time step.}
571
+ \label{fig:elman-network}
572
+ \end{figure}
573
+
574
+ \subsection{Early recurrent network designs}
575
+
576
+ The foundational research on recurrent networks took place in the 1980s.
577
+ In 1982, Hopfield introduced a family of recurrent neural networks
578
+ that have pattern recognition capabilities \citep{hopfield1982neural}.
579
+ They are defined by the values of the weights between nodes
580
+ and the link functions are simple thresholding at zero.
581
+ In these nets, a pattern is placed in the network by setting the values of the nodes.
582
+ The network then runs for some time according to its update rules,
583
+ and eventually another pattern is read out.
584
+ Hopfield networks are useful for recovering a stored pattern from a corrupted version
585
+ and are the forerunners of Boltzmann machines and auto-encoders.
586
+
587
+
588
+ An early architecture for supervised learning on sequences was introduced by \citet{jordan1997serial}.
589
+ Such a network (Figure~\ref{fig:jordan-network}) is a feedforward network with a single hidden layer
590
+ that is extended with {special units}.\footnote{
591
+ \citet{jordan1997serial} calls the special units ``state units"
592
+ while \citet{elman1990finding} calls a corresponding structure ``context units."
593
+ In this paper we simplify terminology by using only ``context units".
594
+ }
595
+ Output node values are fed to the special units,
596
+ which then feed these values to the hidden nodes at the following time step.
597
+ If the output values are actions, the special units allow the network
598
+ to {remember} actions taken at previous time steps.
599
+ Several modern architectures use a related form of direct transfer from output nodes;
600
+ \citet{sutskever2014sequence} translates sentences between natural languages,
601
+ and when generating a text sequence, the word chosen at each time step
602
+ is fed into the network as input at the following time step.
603
+ Additionally, the special units in a Jordan network are self-connected.
604
+ Intuitively, these edges allow sending information across multiple time steps
605
+ without perturbing the output at each intermediate time step.
606
+
607
+ The architecture introduced by \citet{elman1990finding}
608
+ is simpler than the earlier Jordan architecture.
609
+ Associated with each unit in the hidden layer is a context unit.
610
+ Each such unit $j'$ takes as input the state
611
+ of the corresponding hidden node $j$ at the previous time step,
612
+ along an edge of fixed weight $w_{j'j} = 1$.
613
+ This value then feeds back into the same hidden node $j$ along a standard edge.
614
+ This architecture is equivalent to a simple RNN in which each hidden node
615
+ has a single self-connected recurrent edge.
616
+ The idea of fixed-weight recurrent edges that make hidden nodes self-connected
617
+ is fundamental in subsequent work on LSTM networks \citep{hochreiter1997long}.
618
+
619
+ \citet{elman1990finding} trains the network using backpropagation
620
+ and demonstrates that the network can learn time dependencies.
621
+ The paper features two sets of experiments.
622
+ The first extends the logical operation \emph{exclusive or} (XOR)
623
+ to the time domain by concatenating sequences of three tokens.
624
+ For each three-token segment, e.g.~``011", the first two tokens (``01") are chosen randomly
625
+ and the third (``1") is set by performing xor on the first two.
626
+ Random guessing should achieve accuracy of $50\%$.
627
+ A perfect system should perform the same as random for the first two tokens,
628
+ but guess the third token perfectly, achieving accuracy of $66.7\%$.
629
+ The simple network of \citet{elman1990finding} does in fact approach this maximum achievable score.
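+
+ For readers who wish to reproduce the setup, the data generation can be sketched as
+ follows; this is our own reconstruction from the description above, not code from
+ the original paper.
+ \begin{verbatim}
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ def temporal_xor_sequence(n_segments=1000):
+     """Concatenate three-token segments: two random bits, then their XOR."""
+     tokens = []
+     for _ in range(n_segments):
+         a, b = rng.integers(0, 2, size=2)
+         tokens.extend([int(a), int(b), int(a ^ b)])
+     return np.array(tokens)
+
+ seq = temporal_xor_sequence()
+ # A predictor sees the tokens one at a time and guesses the next one.
+ # Only every third token is predictable, so the accuracy ceiling is 2/3.
+ \end{verbatim}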
630
+
631
+
632
+ \begin{figure}
633
+ \center
634
+ \includegraphics[width=.7\linewidth]{vanishing1.png}
635
+ \caption{A simple recurrent net with one input unit, one output unit, and one recurrent hidden unit.}
636
+ \label{fig:vanishing1}
637
+ \end{figure}
638
+
639
+ \subsection{Training recurrent networks}
640
+
641
+ Learning with recurrent networks has long been considered to be difficult.
642
+ Even for standard feedforward networks, the optimization task is NP-complete \citep{blum1993training}.
643
+ But learning with recurrent networks can be especially challenging
644
+ due to the difficulty of learning long-range dependencies, as
645
+ described by \citet{bengio1994learning} and
646
+ expanded upon by \citet{hochreiter2001gradient}.
647
+ The problems of \emph{vanishing} and \emph{exploding} gradients
648
+ occur when backpropagating errors across many time steps.
649
+ As a toy example, consider a network with a single input node,
650
+ a single output node, and a single recurrent hidden node (Figure~\ref{fig:vanishing1}).
651
+ Now consider an input passed to the network at time $\tau$ and an error calculated at time $t$,
652
+ assuming input of zero in the intervening time steps.
653
+ The tying of weights across time steps means that the recurrent edge at the hidden node $j$ always has the same weight.
654
+ Therefore, the contribution of the input at time $\tau$ to the output at time $t$
655
+ will either explode or approach zero, exponentially fast, as $t - \tau$ grows large.
656
+ Hence the derivative of the error with respect to the input
657
+ will either explode or vanish.
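+
+ The effect is easy to see numerically: ignoring the derivatives of the activation
+ function, the sensitivity of the output at time $t$ to the input at time $\tau$
+ carries a factor of roughly $w_{jj}^{\,t-\tau}$, as in this small sketch.
+ \begin{verbatim}
+ # Rough contribution factor after k intervening time steps: w_jj ** k
+ # (activation-function derivatives are ignored in this toy calculation).
+ for w_jj in (0.9, 1.1):
+     for k in (1, 10, 50, 100):
+         print(f"w = {w_jj}, k = {k:3d}: {w_jj ** k:.3e}")
+ # 0.9 ** 100 is about 2.7e-05 (vanishing); 1.1 ** 100 is about 1.4e+04 (exploding).
+ \end{verbatim}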
658
+
659
+ Which of the two phenomena occurs
660
+ depends on whether the weight of the recurrent edge $|w_{jj}| > 1$ or $|w_{jj}| < 1$
661
+ and on the activation function in the hidden node (Figure~\ref{fig:vanishing2}).
662
+ Given a sigmoid activation function,
663
+ the vanishing gradient problem is more pressing,
664
+ but with a rectified linear unit $\max(0,x)$,
665
+ it is easier to imagine the exploding gradient.
666
+ \citet{pascanu2012difficulty} give a thorough mathematical treatment
667
+ of the vanishing and exploding gradient problems,
668
+ characterizing exact conditions under which these problems may occur.
669
+ Given these conditions, they suggest an approach to training via a regularization term
670
+ that forces the weights to values where the gradient neither vanishes nor explodes.
671
+
672
+ Truncated backpropagation through time (TBPTT) is one solution to the exploding gradient problem
673
+ for continuously running networks \citep{williams1989learning}.
674
+ With TBPTT, some maximum number of time steps is set along which error can be propagated.
675
+ While TBPTT with a small cutoff can be used to alleviate the exploding gradient problem,
676
+ it requires that one sacrifice the ability to learn long-range dependencies.
677
+ The LSTM architecture described below
678
+ uses carefully designed nodes with recurrent edges with fixed unit weight
679
+ as a solution to the vanishing gradient problem.
680
+
681
+ \begin{figure}
682
+ \center
683
+ \includegraphics[width=.7\linewidth]{vanishing2.png}
684
+ \caption{A visualization of the vanishing gradient problem,
685
+ using the network depicted in Figure~\ref{fig:vanishing1}, adapted from \citet{graves2012supervised}.
686
+ If the weight along the recurrent edge is less than one,
687
+ the contribution of the input at the first time step
688
+ to the output at the final time step
689
+ will decrease exponentially fast as a function of the length of the time interval in between.
690
+ }
691
+ \label{fig:vanishing2}
692
+ \end{figure}
693
+
694
+ The issue of local optima is an obstacle to effective training
695
+ that cannot be dealt with simply by modifying the network architecture.
696
+ Optimizing even a single hidden-layer feedforward network is
697
+ an NP-complete problem \citep{blum1993training}.
698
+ However, recent empirical and theoretical studies suggest that
699
+ in practice, the issue may not be as important as once thought.
700
+ \citet{dauphin2014identifying} show that while many critical points exist
701
+ on the error surfaces of large neural networks,
702
+ the ratio of saddle points to true local minima increases exponentially with the size of the network,
703
+ and algorithms can be designed to escape from saddle points.
704
+
705
+ Overall, along with the improved architectures explained below,
706
+ fast implementations and better gradient-following heuristics
707
+ have rendered RNN training feasible.
708
+ Implementations of forward and backward propagation using GPUs,
709
+ such as the Theano \citep{bergstra2010theano} and Torch \citep{collobert2011torch7} packages,
710
+ have made it straightforward to implement fast training algorithms.
711
+ In 1996, prior to the introduction of the LSTM, attempts to train recurrent nets to bridge long time gaps
712
+ were shown to perform no better than random guessing \citep{hochreiter1996bridging}.
713
+ However, RNNs are now frequently trained successfully.
714
+
715
+ For some tasks, freely available software can be run on a single GPU
716
+ and produce compelling results in hours \citep{karpathyunreasonable}.
717
+ \citet{martens2011learning} reported success training recurrent neural networks
718
+ with a Hessian-free truncated Newton approach,
719
+ and applied the method to a network which learns to generate text one character at a time in \citep{sutskever2011generating}.
720
+ In the paper that describes the abundance of saddle points
721
+ on the error surfaces of neural networks \citep{dauphin2014identifying},
722
+ the authors present a saddle-free version of Newton's method.
723
+ Unlike Newton's method, which is attracted to critical points, including saddle points,
724
+ this variant is specially designed to escape from them.
725
+ Experimental results include a demonstration
726
+ of improved performance on recurrent networks.
727
+ Newton's method requires computing the Hessian,
728
+ which is prohibitively expensive for large networks,
729
+ scaling quadratically with the number of parameters.
730
+ While their algorithm only approximates the Hessian,
731
+ it is still computationally expensive compared to SGD.
732
+ Thus the authors describe a hybrid approach
733
+ in which the saddle-free Newton method is applied only
734
+ in places where SGD appears to be {stuck}.
735
+
736
+ \section{Modern RNN architectures}
737
+
738
+ The most successful RNN architectures for sequence learning stem from two papers published in 1997.
739
+ The first paper, \emph{Long Short-Term Memory} by \citet{hochreiter1997long},
740
+ introduces the memory cell, a unit of computation that replaces
741
+ traditional nodes in the hidden layer of a network.
742
+ With these memory cells, networks are able to overcome difficulties with training
743
+ encountered by earlier recurrent networks.
744
+ The second paper, \emph{Bidirectional Recurrent Neural Networks} by \citet{schuster1997bidirectional},
745
+ introduces an architecture in which information from both the future and the past
746
+ are used to determine the output at any point in the sequence.
747
+ This is in contrast to previous networks, in which only past input can affect the output,
748
+ and has been used successfully for sequence labeling tasks in natural language processing, among others.
749
+ Fortunately, the two innovations are not mutually exclusive,
750
+ and have been successfully combined for phoneme classification \citep{graves2005framewise}
751
+ and handwriting recognition \citep{graves2009novel}.
752
+ In this section we explain the LSTM and BRNN
753
+ and we describe the \emph{neural Turing machine} (NTM),
754
+ which extends RNNs with an addressable external memory \citep{graves2014neural}.
755
+
756
+ \subsection{Long short-term memory (LSTM)}
757
+
758
+ \citet{hochreiter1997long} introduced the LSTM model
759
+ primarily in order to overcome the problem of vanishing gradients.
760
+ This model resembles a standard recurrent neural network with a hidden layer,
761
+ but each ordinary node (Figure~\ref{fig:artificial-neuron}) in the hidden layer
762
+ is replaced by a \emph{memory cell} (Figure~\ref{fig:lstm}).
763
+ Each memory cell contains a node with a self-connected recurrent edge of fixed weight one,
764
+ ensuring that the gradient can pass across many time steps without vanishing or exploding.
765
+ To distinguish references to a memory cell
766
+ from references to an ordinary node, we use the subscript $c$.
767
+
768
+ \begin{figure}[t]
769
+ \center
770
+ \includegraphics[width=.8\linewidth]{lstm.png}
771
+ \caption{One LSTM memory cell as proposed by \citet{hochreiter1997long}.
772
+ The self-connected node is the internal state $s$.
773
+ The diagonal line indicates that it is linear, i.e.~the identity link function is applied.
774
+ The blue dashed line is the recurrent edge, which has fixed unit weight.
775
+ Nodes marked $\Pi$ output the product of their inputs.
776
+ All edges into and from $\Pi$ nodes also have fixed unit weight.}
777
+ \label{fig:lstm}
778
+ \end{figure}
779
+
780
+ The term ``long short-term memory" comes from the following intuition.
781
+ Simple recurrent neural networks have \emph{long-term memory} in the form of weights.
782
+ The weights change slowly during training, encoding general knowledge about the data.
783
+ They also have \emph{short-term memory} in the form of ephemeral activations,
784
+ which pass from each node to successive nodes.
785
+ The LSTM model introduces an intermediate type of storage via the memory cell.
786
+ A memory cell is a composite unit, built from simpler nodes in a specific connectivity pattern,
787
+ with the novel inclusion of multiplicative nodes, represented in diagrams by the letter $\Pi$.
788
+ All elements of the LSTM cell are enumerated and described below.
789
+ Note that when we use vector notation,
790
+ we are referring to the values of the nodes in an entire layer of cells.
791
+ For example, $\boldsymbol{s}$ is a vector containing the value of $s_c$ at each memory cell $c$ in a layer.
792
+ When the subscript $c$ is used, it is to index an individual memory cell.
793
+
794
+ \begin{itemize}
795
+ \item \emph{Input node:}
796
+ This unit, labeled $g_c$, is a node that takes activation in the standard way
797
+ from the input layer $\boldsymbol{x^{(t)}}$ at the current time step
798
+ and (along recurrent edges) from the hidden layer at the previous time step $\boldsymbol{h}^{(t-1)}$.
799
+ Typically, the summed weighted input is run through a $\textit{tanh}$ activation function,
800
+ although in the original LSTM paper, the activation function is a $\textit{sigmoid}$.
801
+
802
+
803
+ \item \emph{Input gate:}
804
+ Gates are a distinctive feature of the LSTM approach.
805
+ A gate is a sigmoidal unit that, like the {input node}, takes
806
+ activation from the current data point $\boldsymbol{x}^{(t)}$
807
+ as well as from the hidden layer at the previous time step.
808
+ A gate is so-called because its value is used to multiply the value of another node.
809
+ It is a \emph{gate} in the sense that if its value is zero, then flow from the other node is cut off.
810
+ If the value of the gate is one, all flow is passed through.
811
+ The value of the \emph{input gate} $i_c$ multiplies the value of the \emph{input node}.
812
+
813
+ \item \emph{Internal state:}
814
+ At the heart of each memory cell is a node $s_c$ with linear activation,
815
+ which is referred to in the original paper as the ``internal state" of the cell.
816
+ The internal state $s_c$ has a self-connected recurrent edge with fixed unit weight.
817
+ Because this edge spans adjacent time steps with constant weight,
818
+ error can flow across time steps without vanishing or exploding.
819
+ This edge is often called the \emph{constant error carousel}.
820
+ In vector notation, the update for the internal state is
821
+ $\boldsymbol{s}^{(t)} = \boldsymbol{g}^{(t)} \odot \boldsymbol{i}^{(t)} + \boldsymbol{s}^{(t-1)}$
822
+ where $\odot$ is pointwise multiplication.
823
+
824
+ \item \emph{Forget gate:}
825
+ {These gates} $f_c$ were introduced by \citet{gers2000learning}.
826
+ They provide a method by which the network can learn to flush the contents of the internal state.
827
+ This is especially useful in continuously running networks.
828
+ With forget gates, the equation to calculate the internal state on the forward pass is
829
+ $$
830
+ \boldsymbol{s}^{(t)} = \boldsymbol{g}^{(t)} \odot \boldsymbol{i}^{(t)}
831
+ + \boldsymbol{f}^{(t)} \odot \boldsymbol{s}^{(t-1)}.
832
+ $$
833
+
834
+ \item \emph{Output gate:}
835
+ The value $v_c$ ultimately produced by a memory cell
836
+ is the value of the internal state $s_c$ multiplied by the value of the \emph{output gate} $o_c$.
837
+ It is customary that the internal state first be run through a \textit{tanh} activation function,
838
+ as this gives the output of each cell the same dynamic range as an ordinary \textit{tanh} hidden unit.
839
+ However, in other neural network research, rectified linear units,
840
+ which have a greater dynamic range, have proven easier to train.
841
+ Thus it seems plausible that the nonlinear function on the internal state might be omitted.
842
+ \end{itemize}
843
+
844
+ In the original paper and in most subsequent work, the input node is labeled $g$.
845
+ We adhere to this convention but note that it may be confusing as $g$ does not stand for \emph{gate}.
846
+ In the original paper, the gates are called $y_{in}$ and $y_{out}$ but this is confusing
847
+ because $y$ generally stands for output in the machine learning literature.
848
+ Seeking comprehensibility, we break with this convention and use $i$, $f$, and $o$ to refer to
849
+ input, forget and output gates respectively, as in \citet{sutskever2014sequence}.
850
+
851
+ Since the original LSTM was introduced, several variations have been proposed.
852
+ {Forget gates}, described above, were proposed in 2000 and were not part of the original LSTM design.
853
+ However, they have proven effective and are standard in most modern implementations.
854
+ That same year, \citet{gers2000recurrent} proposed peephole connections
855
+ that pass from the internal state directly to the input and output gates of that same node
856
+ without first having to be modulated by the output gate.
857
+ They report that these connections improve performance
858
+ on timing tasks where the network must learn
859
+ to measure precise intervals between events.
860
+ The intuition of the peephole connection can be captured by the following example.
861
+ Consider a network which must learn to count objects and emit some desired output
862
+ when $n$ objects have been seen.
863
+ The network might learn to let some fixed amount of activation into the internal state after each object is seen.
864
+ This activation is trapped in the internal state $s_c$ by the constant error carousel,
865
+ and is incremented iteratively each time another object is seen.
866
+ When the $n$th object is seen,
867
+ the network needs to know to let out content from the internal state so that it can affect the output.
868
+ To accomplish this, the output gate $o_c$ must know the content of the internal state $s_c$.
869
+ Thus $s_c$ should be an input to $o_c$.
870
+
871
+ \begin{figure}
872
+ \center
873
+ \includegraphics[width=.8\linewidth]{lstm-forget.png}
874
+ \caption{LSTM memory cell with a forget gate as described by \citet{gers2000learning}.}
875
+ \label{fig:lstm-forget}
876
+ \end{figure}
877
+
878
+ Put formally, computation in the LSTM model proceeds according to the following calculations,
879
+ which are performed at each time step.
880
+ These equations give the full algorithm for a modern LSTM with forget gates:
881
+ $$ \boldsymbol{g}^{(t)} = \phi( W^{\mbox{gx}} \boldsymbol{x}^{(t)} + W^{\mbox{gh}} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}_g)$$
882
+ $$ \boldsymbol{i}^{(t)} = \sigma( W^{\mbox{ix}} \boldsymbol{x}^{(t)} + W^{\mbox{ih}} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}_i) $$
883
+ $$ \boldsymbol{f}^{(t)} = \sigma( W^{\mbox{fx}} \boldsymbol{x}^{(t)} + W^{\mbox{fh}} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}_f) $$
884
+ $$ \boldsymbol{o}^{(t)} = \sigma( W^{\mbox{ox}} \boldsymbol{x}^{(t)} + W^{\mbox{oh}} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}_o) $$
885
+ $$ \boldsymbol{s}^{(t)} = \boldsymbol{g}^{(t)} \odot \boldsymbol{i}^{(t)} + \boldsymbol{s}^{(t-1)} \odot \boldsymbol{f}^{(t)}$$
886
+ $$ \boldsymbol{h}^{(t)} = \phi ( \boldsymbol{s}^{(t)}) \odot \boldsymbol{o}^{(t)}. $$
887
+ The value of the hidden layer of the LSTM at time $t$ is the vector $\boldsymbol{h}^{(t)}$,
888
+ while $\boldsymbol{h}^{(t-1)}$ is the values output by each memory cell in the hidden layer at the previous time.
889
+ Note that these equations include the forget gate, but not peephole connections.
890
+ The calculations for the simpler LSTM without forget gates
891
+ are obtained by setting $\boldsymbol{f}^{(t)} = 1$ for all $t$.
892
+ We use the $\textit{tanh}$ function $\phi$ for the input node $g$
893
+ following the state-of-the-art design of \citet{zaremba2014learning}.
894
+ However, in the original LSTM paper, the activation function for $g$ is the sigmoid $\sigma$.
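+
+ Transcribed into NumPy, the six equations above give the following forward step for
+ a layer of memory cells; the dimensions, the initialization, and names such as
+ \texttt{W\_gx} are our own illustrative choices.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ rng = np.random.default_rng(0)
+ d_in, d_h = 4, 8                       # illustrative sizes
+ # One (W_*x, W_*h, b_*) triple per component: input node g and gates i, f, o.
+ params = {name: (rng.normal(scale=0.1, size=(d_h, d_in)),
+                  rng.normal(scale=0.1, size=(d_h, d_h)),
+                  np.zeros(d_h))
+           for name in ("g", "i", "f", "o")}
+
+ def lstm_step(x_t, h_prev, s_prev):
+     def pre(name):                     # W^{*x} x_t + W^{*h} h_prev + b_*
+         Wx, Wh, b = params[name]
+         return Wx @ x_t + Wh @ h_prev + b
+     g = np.tanh(pre("g"))              # input node
+     i = sigmoid(pre("i"))              # input gate
+     f = sigmoid(pre("f"))              # forget gate
+     o = sigmoid(pre("o"))              # output gate
+     s = g * i + s_prev * f             # internal state (constant error carousel)
+     h = np.tanh(s) * o                 # value output by the memory cells
+     return h, s
+
+ h, s = np.zeros(d_h), np.zeros(d_h)
+ for x_t in rng.normal(size=(5, d_in)):   # a length-5 input sequence
+     h, s = lstm_step(x_t, h, s)
+ \end{verbatim}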
895
+
896
+ \begin{figure}[t]
897
+ \center
898
+ \includegraphics[width=.8\linewidth]{lstm-network.png}
899
+ \caption{A recurrent neural network with a hidden layer consisting of two memory cells.
900
+ The network is shown unfolded across two time steps.}
901
+ \label{fig:lstm-network}
902
+ \end{figure}
903
+
904
+ Intuitively, in terms of the forward pass, the LSTM can learn when to let activation into the internal state.
905
+ As long as the input gate takes value zero, no activation can get in.
906
+ Similarly, the output gate learns when to let the value out.
907
+ When both gates are \emph{closed}, the activation is trapped in the memory cell,
908
+ neither growing nor shrinking, nor affecting the output at intermediate time steps.
909
+ In terms of the backwards pass,
910
+ the constant error carousel enables the gradient to propagate back across many time steps,
911
+ neither exploding nor vanishing.
912
+ In this sense, the gates are learning when to let {error} in, and when to let it out.
913
+ In practice, the LSTM has shown a superior ability
914
+ to learn long-range dependencies as compared to simple RNNs.
915
+ Consequently, the majority of state-of-the-art application papers covered in this review use the LSTM model.
916
+
917
+ One frequent point of confusion is the manner in which multiple memory cells
918
+ are used together to comprise the hidden layer of a working neural network.
919
+ To alleviate this confusion, we depict in Figure~\ref{fig:lstm-network}
920
+ a simple network with two memory cells, analogous to Figure~\ref{fig:unfolded-rnn}.
921
+ The output from each memory cell flows in the subsequent time step
922
+ to the input node and all gates of each memory cell.
923
+ It is common to include multiple layers of memory cells \citep{sutskever2014sequence}.
924
+ Typically, in these architectures each layer takes input from the layer below at the same time step
925
+ and from the same layer in the previous time step.
926
+
927
+
928
+ \begin{figure}[]
929
+ \center
930
+ \includegraphics[width=.8\linewidth]{brnn.png}
931
+ \caption{A bidirectional recurrent neural network as described by \citet{schuster1997bidirectional},
932
+ unfolded in time.}
933
+ \label{fig:brnn}
934
+ \end{figure}
935
+
936
+ \subsection{Bidirectional recurrent neural networks (BRNNs)}
937
+
938
+ Along with the LSTM, one of the most used RNN architectures
939
+ is the bidirectional recurrent neural network (BRNN) (Figure~\ref{fig:brnn})
940
+ first described by \citet{schuster1997bidirectional}.
941
+ In this architecture, there are two layers of hidden nodes.
942
+ Both hidden layers are connected to input and output.
943
+ The two hidden layers are differentiated in that the first has recurrent connections
944
+ from the past time steps
945
+ while in the second the direction of the recurrent connections is flipped,
946
+ passing activation backwards along the sequence.
947
+ Given an input sequence and a target sequence,
948
+ the BRNN can be trained by ordinary backpropagation after unfolding across time.
949
+ The following three equations describe a BRNN:
950
+ $$\boldsymbol{h}^{(t)} = \sigma(W^{\mbox{h}\mbox{x}} \boldsymbol{x}^{(t)} + W^{\mbox{h}\mbox{h}} \boldsymbol{h}^{(t-1)} +\boldsymbol{b}_{h} ) $$
951
+ $$\boldsymbol{z}^{(t)} = \sigma(W^{\mbox{z}\mbox{x}} \boldsymbol{x}^{(t)} + W^{\mbox{z}\mbox{z}} \boldsymbol{z}^{(t+1)} +\boldsymbol{b}_{z} ) $$
952
+ $$ \hat{\boldsymbol{y}}^{(t)} = \softmax ( W^{\mbox{yh}} \boldsymbol{h}^{(t)} + W^{\mbox{yz}} \boldsymbol{z}^{(t)} + \boldsymbol{b}_y)$$
953
+ where $\boldsymbol{h}^{(t)}$ and $\boldsymbol{z}^{(t)}$
954
+ are the values of the hidden layers in the forwards and backwards directions respectively.
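+
+ For concreteness, a minimal NumPy transcription of the three equations above (an illustrative sketch, not code from the cited work) might look as follows.
+ \begin{verbatim}
+ import numpy as np
+
+ def sigmoid(a):
+     return 1.0 / (1.0 + np.exp(-a))
+
+ def softmax(a):
+     e = np.exp(a - a.max())
+     return e / e.sum()
+
+ def brnn_forward(xs, Whx, Whh, bh, Wzx, Wzz, bz, Wyh, Wyz, by):
+     """Forward pass of a bidirectional RNN over a whole sequence xs."""
+     T = len(xs)
+     h, z = [None] * T, [None] * T
+     h_prev = np.zeros_like(bh)
+     for t in range(T):                    # forward-in-time hidden layer
+         h[t] = sigmoid(Whx @ xs[t] + Whh @ h_prev + bh)
+         h_prev = h[t]
+     z_next = np.zeros_like(bz)
+     for t in reversed(range(T)):          # backward-in-time hidden layer
+         z[t] = sigmoid(Wzx @ xs[t] + Wzz @ z_next + bz)
+         z_next = z[t]
+     # Each output depends on both hidden layers at the same time step.
+     return [softmax(Wyh @ h[t] + Wyz @ z[t] + by) for t in range(T)]
+ \end{verbatim}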
955
+
956
+ One limitation of the BRNN is that it cannot run continuously, as it requires a fixed endpoint in both the future and the past.
957
+ Further, it is not an appropriate machine learning algorithm for the online setting,
958
+ as it is implausible to receive information from the future, i.e., to know sequence elements that have not been observed.
959
+ But for prediction over a sequence of fixed length, it is often sensible to take into account both past and future sequence elements.
960
+ Consider the natural language task of {part-of-speech tagging}.
961
+ Given any word in a sentence, information about both
962
+ the words which precede and those which follow it
963
+ is useful for predicting that word's part-of-speech.
964
+
965
+
966
+ The LSTM and BRNN are in fact compatible ideas.
967
+ The former introduces a new basic unit from which to compose a hidden layer,
968
+ while the latter concerns the wiring of the hidden layers, regardless of what nodes they contain.
969
+ Such an approach, termed a BLSTM, has been used to achieve state-of-the-art results
970
+ on handwriting recognition and phoneme classification \citep{graves2005framewise, graves2009novel}.
971
+
972
+
973
+ \subsection{Neural Turing machines}
974
+
975
+ The neural Turing machine (NTM) extends recurrent neural networks
976
+ with an addressable external memory \citep{graves2014neural}.
977
+ This work improves upon the ability of RNNs
978
+ to perform complex algorithmic tasks such as sorting.
979
+ The authors take inspiration from theories in cognitive science,
980
+ which suggest humans possess a ``central executive" that interacts with a memory buffer \citep{baddeley1996working}.
981
+ By analogy to a Turing machine, in which a program directs \emph{read heads} and \emph{write heads}
982
+ to interact with external memory in the form of a tape, the model is named a Neural Turing Machine.
983
+ While technical details of the read/write heads are beyond the scope of this review,
984
+ we aim to convey a high-level sense of the model and its applications.
985
+
986
+ The two primary components of an NTM are a \emph{controller} and \emph{memory matrix}.
987
+ The controller, which may be a recurrent or feedforward neural network,
988
+ takes input and returns output to the outside world,
989
+ as well as passing instructions to and reading from the memory.
990
+ The memory is represented by a large matrix of $N$ memory locations,
991
+ each of which is a vector of dimension $M$.
992
+ Additionally, a number of read and write heads facilitate the interaction
993
+ between the {controller} and the {memory matrix}.
994
+ Despite these additional capabilities, the NTM is differentiable end-to-end
995
+ and can be trained by variants of stochastic gradient descent using BPTT.
996
+
997
+ \citet{graves2014neural} select five algorithmic tasks
998
+ to test the performance of the NTM model.
999
+ By \emph{algorithmic} we mean that for each task,
1000
+ the target output for a given input
1001
+ can be calculated by following a simple program,
1002
+ as might be easily implemented in any universal programming language.
1003
+ One example is the \emph{copy} task,
1004
+ where the input is a sequence of fixed length binary vectors followed by a delimiter symbol.
1005
+ The target output is a copy of the input sequence.
1006
+ In another task, \emph{priority sort}, an input consists of a sequence of binary vectors
1007
+ together with a distinct scalar priority value for each vector.
1008
+ The target output is the sequence of vectors sorted by priority.
1009
+ The experiments test whether an NTM can be trained via supervised learning
1010
+ to implement these common algorithms correctly and efficiently.
1011
+ Interestingly, solutions found in this way generalize reasonably well
1012
+ to inputs longer than those presented in the training set.
1013
+ In contrast, the LSTM without external memory
1014
+ does not generalize well to longer inputs.
1015
+ The authors compare three different architectures, namely
1016
+ an LSTM RNN, an NTM with a feedforward controller,
1017
+ and an NTM with an LSTM controller.
1018
+ On each task, both NTM architectures significantly outperform
1019
+ the LSTM RNN both in training set performance and in generalization to test data.
1020
+
1021
+
1022
+ \section{Applications of LSTMs and BRNNs}
1023
+
1024
+ The previous sections introduced the building blocks
1025
+ from which nearly all state-of-the-art recurrent neural networks are composed.
1026
+ This section looks at several application areas where recurrent networks
1027
+ have been employed successfully.
1028
+ Before describing state of the art results in detail,
1029
+ it is appropriate to convey a concrete sense of the precise architectures with which
1030
+ many important tasks can be expressed as sequence learning problems.
1031
+ Figure \ref{fig:rnn-types} demonstrates several common RNN architectures
1032
+ and associates each with corresponding well-documented tasks.
1033
+
1034
+ \begin{figure}
1035
+ \centering
1036
+ \includegraphics[width=.6\linewidth]{rnn-types.pdf}
1037
+ \caption{Recurrent neural networks have been used successfully to model both sequential inputs and sequential outputs
1038
+ as well as mappings between single data points and sequences (in both directions).
1039
+ This figure, based on a similar figure in \citet{karpathyunreasonable}
1040
+ shows how numerous tasks can be modeled with RNNs with sequential inputs and/or sequential outputs.
1041
+ In each subfigure, blue rectangles correspond to inputs,
1042
+ red rectangles to outputs and green rectangles to the entire hidden state of the neural network.
1043
+ (a) This is the conventional independent case, as assumed by standard feedforward networks.
1044
+ (b) Text and video classification are tasks in which a sequence is mapped to one fixed length vector.
1045
+ (c) Image captioning presents the converse case,
1046
+ where the input image is a single non-sequential data point.
1047
+ (d) This architecture has been used for natural language translation,
1048
+ a sequence-to-sequence task in which the two sequences may have varying and different lengths.
1049
+ (e) This architecture has been used to learn a generative model for text, predicting at each step the following character.}
1050
+ \label{fig:rnn-types}
1051
+ \end{figure}
1052
+
1053
+ In the following subsections, we first introduce the representations of natural language
1054
+ used for input and output to recurrent neural networks
1055
+ and the commonly used performance metrics for sequence prediction tasks.
1056
+ Then we survey state-of-the-art results in machine translation,
1057
+ image captioning, video captioning, and handwriting recognition.
1058
+ Many applications of RNNs involve processing written language.
1059
+ Some applications, such as image captioning,
1060
+ involve generating strings of text.
1061
+ Others, such as machine translation and dialogue systems,
1062
+ require both inputting and outputting text.
1063
+ Systems which output text are more difficult to evaluate empirically than those which
1064
+ produce binary predictions or numerical output.
1065
+ As a result several methods have been developed to assess the quality of translations and captions.
1066
+ In the next subsection, we provide the background necessary
1067
+ to understand how text is represented in most modern recurrent net applications.
1068
+ We then explain the commonly reported evaluation metrics.
1069
+
1070
+ \subsection{Representations of natural language inputs and outputs}
1071
+
1072
+ When words are output at each time step,
1073
+ generally the output consists of a softmax vector $\boldsymbol{y}^{(t)} \in \mathbbm{R}^{K}$
1074
+ where $K$ is the size of the vocabulary.
1075
+ A softmax layer is an element-wise logistic function
1076
+ that is normalized so that all of its components sum to one.
1077
+ Intuitively, these outputs correspond to the probabilities
1078
+ that each word is the correct output at that time step.
1079
+
1080
+ For applications where the input consists of a sequence of words,
1081
+ typically the words are fed to the network one at a time in consecutive time steps.
1082
+ In these cases, the simplest way to represent words is a \emph{one-hot} encoding,
1083
+ using binary vectors with a length equal to the size of the vocabulary,
1084
+ so ``1000" and ``0100" would represent the first and second words in the vocabulary respectively.
1085
+ Such an encoding is discussed by \citet{elman1990finding} among others.
1086
+ However, this encoding is inefficient, requiring as many bits as the vocabulary is large.
1087
+ Further, it offers no direct way to capture different aspects
1088
+ of similarity between words in the encoding itself.
1089
+ Thus it is common now to model words with a
1090
+ distributed representation using a \emph{meaning vector}.
1091
+ In some cases, these meanings for words are learned given a large corpus of supervised data,
1092
+ but it is more usual to initialize the \emph{meaning vectors}
1093
+ using an embedding based on word co-occurrence statistics.
1094
+ Freely available implementations that produce word vectors from these statistics include
1095
+ \emph{GloVe} \citep{pennington2014glove},
1096
+ and \emph{word2vec} \citep{goldberg2014word2vec},
1097
+ which implements a word embedding algorithm from \citet{mikolov2013efficient}.
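+
+ As a small illustration of the two input representations just discussed, consider the following sketch with a toy vocabulary; the embedding matrix here is random, standing in for vectors initialized from co-occurrence statistics (e.g. GloVe or word2vec) or learned from data.
+ \begin{verbatim}
+ import numpy as np
+
+ vocab = ["the", "cat", "sat", "on", "mat"]        # toy vocabulary, K = 5
+ word_to_index = {w: i for i, w in enumerate(vocab)}
+
+ def one_hot(word, K=len(vocab)):
+     """Binary vector with a single 1 at the word's index."""
+     v = np.zeros(K)
+     v[word_to_index[word]] = 1.0
+     return v
+
+ # Distributed representation: each word maps to a dense "meaning vector",
+ # stored as one row of an embedding matrix.
+ embedding_dim = 8
+ E = np.random.randn(len(vocab), embedding_dim) * 0.01
+
+ print(one_hot("cat"))                 # [0. 1. 0. 0. 0.]
+ print(E[word_to_index["cat"]])        # dense 8-dimensional vector
+ \end{verbatim}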
1098
+
1099
+ Distributed representations for textual data were described by \citet{hinton1986learning},
1100
+ used extensively for natural language by \citet{bengio2003neural},
1101
+ and more recently brought to wider attention in the deep learning community
1102
+ in a number of papers describing recursive auto-encoder (RAE) networks
1103
+ \citep{socher2010learning, socher2011dynamic, socher2011parsing, socher2011semi}.
1104
+ For clarity we point out that these \emph{recursive} networks
1105
+ are \emph{not} recurrent neural networks,
1106
+ and in the present survey the abbreviation RNN always means {recurrent neural network}.
1107
+ While they are distinct approaches, recurrent and recursive neural networks have important features in common,
1108
+ namely that they both involve extensive weight tying and are both trained end-to-end via backpropagation.
1109
+
1110
+ In many experiments with recurrent neural networks
1111
+ \citep{elman1990finding, sutskever2011generating, zaremba2014learning},
1112
+ input is fed in one character at a time, and output generated one character at a time,
1113
+ as opposed to one word at a time.
1114
+ While the output is nearly always a softmax layer,
1115
+ many papers omit details of how they represent single-character inputs.
1116
+ It seems reasonable to infer that characters are encoded with a one-hot encoding.
1117
+ We know of no papers using a distributed representation at the single-character level.
1118
+
1119
+ \subsection{Evaluation methodology}
1120
+
1121
+ A serious obstacle to training systems to output variable-length sequences of words
1122
+ is the weakness of the available performance metrics.
1123
+ In the case of captioning or translation,
1124
+ there may be multiple correct translations.
1125
+ Further, a labeled dataset may contain multiple \emph{reference translations}
1126
+ for each example.
1127
+ Comparing against such a gold standard is more fraught
1128
+ than applying standard performance measures to binary classification problems.
1129
+
1130
+ One commonly used metric for structured natural language output with multiple references is $\textit{BLEU}$ score.
1131
+ Developed in 2002,
1132
+ $\textit{BLEU}$ score is related to modified unigram precision \citep{papineni2002bleu}.
1133
+ It is the geometric mean of the $n$-gram precisions
1134
+ for all values of $n$ between $1$ and some upper limit $N$.
1135
+ In practice, $4$ is a typical value for $N$, shown to maximize agreement with human raters.
1136
+ Because precision can be made high by offering excessively short translations,
1137
+ the $\textit{BLEU}$ score includes a brevity penalty $B$.
1138
+ Where $c$ is the average length of the candidate translations and $r$ is the average length of the reference translations,
1139
+ the brevity penalty is
1140
+ $$
1141
+ B =
1142
+ \begin{cases}
1143
+ 1 &\mbox{if } c > r \\
1144
+ e^{(1-r/c)} & \mbox{if } c \leq r
1145
+ \end{cases}.
1146
+ $$
1147
+ Then the $\textit{BLEU}$ score is
1148
+ $$
1149
+ \textit{BLEU} = B \cdot \exp \left( \frac{1}{N} \sum_{n=1}^{N} \log p_n \right)
1150
+ $$
1151
+ where $p_n$ is the modified $n$-gram precision,
1152
+ which is the number of $n$-grams in the candidate translation
1153
+ that occur in any of the reference translations,
1154
+ divided by the total number of $n$-grams in the candidate translation.
1155
+ This is called \emph{modified} precision because it is an adaptation of precision to the case of multiple references.
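+
+ The computation can be summarized in a short sketch (a simplified illustration of the formulas above rather than the reference implementation; in particular, the per-reference clipping of $n$-gram counts used in the official scorer is omitted, and a single candidate sentence is assumed).
+ \begin{verbatim}
+ import math
+
+ def ngrams(tokens, n):
+     return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
+
+ def modified_precision(candidate, references, n):
+     """n-grams of the candidate that occur in any reference, divided by
+     the total number of n-grams in the candidate (as described above)."""
+     cand = ngrams(candidate, n)
+     ref_ngrams = set(g for ref in references for g in ngrams(ref, n))
+     if not cand:
+         return 0.0
+     return sum(1 for g in cand if g in ref_ngrams) / len(cand)
+
+ def bleu(candidate, references, N=4):
+     c = len(candidate)                        # candidate length
+     # mean reference length
+     r = sum(len(ref) for ref in references) / len(references)
+     B = 1.0 if c > r else math.exp(1 - r / c)   # brevity penalty
+     ps = [modified_precision(candidate, references, n)
+           for n in range(1, N + 1)]
+     if min(ps) == 0.0:    # log(0) undefined; real scorers apply smoothing
+         return 0.0
+     return B * math.exp(sum(math.log(p) for p in ps) / N)
+ \end{verbatim}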
1156
+
1157
+ $\textit{BLEU}$ scores are commonly used in recent papers to evaluate both translation and captioning systems.
1158
+ While $\textit{BLEU}$ score does appear highly correlated with human judgments,
1159
+ there is no guarantee that any given translation with a higher $\textit{BLEU}$ score
1160
+ is superior to another which receives a lower $\textit{BLEU}$ score.
1161
+ In fact, while $\textit{BLEU}$ scores tend to be correlated with human judgement across large sets of translations,
1162
+ they are not accurate predictors of human judgement at the single sentence level.
1163
+
1164
+ $\textit{METEOR}$ is an alternative metric intended to overcome the weaknesses
1165
+ of the $\textit{BLEU}$ score \citep{banerjee2005meteor}.
1166
+ $\textit{METEOR}$ is based on explicit word to word matches
1167
+ between candidates and reference sentences.
1168
+ When multiple references exist, the best score is used.
1169
+ Unlike $\textit{BLEU}$, $\textit{METEOR}$ exploits known synonyms and stemming.
1170
+ The first step is to compute an F-score
1171
+ $$
1172
+ F_{\alpha} = \frac{P \cdot R}{\alpha \cdot P + (1-\alpha) \cdot R}
1173
+ $$
1174
+ based on single word matches where $P$ is the precision and $R$ is the recall.
1175
+ The next step is to calculate a fragmentation penalty $M \propto c/m$
1176
+ where $c$ is the smallest number of \emph{chunks} of consecutive words
1177
+ such that the words are adjacent in both the candidate and the reference,
1178
+ and $m$ is the total number of matched unigrams yielding the score.
1179
+ Finally, the score is
1180
+ $$
1181
+ \textit{METEOR} = (1 - M) \cdot F_\alpha.
1182
+ $$
1183
+ Empirically, this metric has been found to agree with human raters more than $\textit{BLEU}$ score.
1184
+ However, $\textit{METEOR}$ is less straightforward to calculate than $\textit{BLEU}$.
1185
+ To replicate the $\textit{METEOR}$ score reported by another party,
1186
+ one must exactly replicate their stemming and synonym matching,
1187
+ as well as the calculations.
1188
+ Both metrics rely upon having the exact same set of reference translations.
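+
+ The final combination of these quantities can be sketched as follows (only the scoring arithmetic is shown; the matching, stemming, and synonym stages that produce $P$, $R$, $c$, and $m$ are omitted, and since only $M \propto c/m$ is specified above, the constant of proportionality is an illustrative choice).
+ \begin{verbatim}
+ def meteor_score(P, R, c, m, alpha=0.9, k=0.5):
+     """Combine unigram precision P and recall R with a fragmentation
+     penalty; c is the number of chunks, m the number of matched unigrams.
+     The constant k is illustrative, since only M proportional to c/m is
+     specified in the text."""
+     if m == 0 or (alpha * P + (1 - alpha) * R) == 0.0:
+         return 0.0
+     F = (P * R) / (alpha * P + (1 - alpha) * R)   # weighted harmonic mean
+     M = k * (c / m)                               # fragmentation penalty
+     return (1 - M) * F
+ \end{verbatim}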
1189
+
1190
+ Even in the straightforward case of binary classification, without sequential dependencies,
1191
+ commonly used performance metrics like F1
1192
+ give rise to optimal thresholding strategies
1193
+ which may not accord with intuition about what should constitute good performance \citep{lipton2014optimal}.
1194
+ Along the same lines, given that performance metrics such as the ones above
1195
+ are weak proxies for true objectives, it may be difficult to distinguish between systems which are truly stronger
1196
+ and those which most overfit the performance metrics in use.
1197
+
1198
+
1199
+ \subsection{Natural language translation}
1200
+
1201
+ Translation of text is a fundamental problem in machine learning
1202
+ that resists solutions with shallow methods.
1203
+ Some tasks, like document classification,
1204
+ can be performed successfully with a bag-of-words representation that ignores word order.
1205
+ But word order is essential in translation.
1206
+ The sentences {``Scientist killed by raging virus"}
1207
+ and {``Virus killed by raging scientist"} have identical bag-of-words representations.
1208
+
1209
+ \begin{figure}
1210
+ \center
1211
+ \includegraphics[width=1\linewidth]{sequence-to-sequence.png}
1212
+ \caption{Sequence to sequence LSTM model of \citet{sutskever2014sequence}.
1213
+ The network consists of an encoding model (first LSTM) and a decoding model (second LSTM).
1214
+ The input blocks (blue and purple) correspond to word vectors,
1215
+ which are fully connected to the corresponding hidden state.
1216
+ Red nodes are softmax outputs. Weights are tied among all encoding steps and among all decoding time steps. }
1217
+ \label{fig:sequence-to-sequence}
1218
+ \end{figure}
1219
+
1220
+ \citet{sutskever2014sequence} present a translation model using two multilayered LSTMs
1221
+ that demonstrates impressive performance translating from English to French.
1222
+ The first LSTM is used for \emph{encoding} an input phrase from the source language
1223
+ and the second LSTM for \emph{decoding} the output phrase in the target language.
1224
+ The model works according to the following procedure (Figure~\ref{fig:sequence-to-sequence}):
1225
+ \begin{itemize}
1226
+ \item
1227
+ The source phrase is fed to the {encoding} LSTM one word at a time,
1228
+ which does not output anything.
1229
+ The authors found that significantly better results are achieved
1230
+ when the input sentence is fed into the network in reverse order.
1231
+ \item
1232
+ When the end of the phrase is reached, a special symbol that indicates
1233
+ the beginning of the output sentence is sent to the {decoding} LSTM.
1234
+ Additionally, the {decoding} LSTM receives as input the final state of the first LSTM.
1235
+ The second LSTM outputs softmax probabilities over the vocabulary at each time step.
1236
+ \item
1237
+ At inference time, beam search is used to choose the most likely words from the distribution at each time step,
1238
+ running the second LSTM until the end-of-sentence (\emph{EOS}) token is reached.
1239
+ \end{itemize}
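+
+ The control flow of this procedure can be sketched in a few lines (high-level Python pseudocode in which words are represented by vocabulary indices; the step functions are schematic stand-ins, and greedy decoding is shown in place of the beam search used in the actual system).
+ \begin{verbatim}
+ import numpy as np
+
+ def translate(source, encoder_step, decoder_step, output_layer,
+               bos, eos, max_length=100):
+     """Schematic encoder-decoder loop with greedy decoding."""
+     # Encoding: feed the source phrase one word at a time, in reverse
+     # order, producing no output; only the final state is kept.
+     state = None
+     for word in reversed(source):
+         state = encoder_step(word, state)
+     # Decoding: start from the encoder's final state and a special
+     # begin-of-sentence symbol, emitting one word per step until EOS.
+     output, word = [], bos
+     for _ in range(max_length):
+         state = decoder_step(word, state)
+         probs = output_layer(state)    # softmax over the target vocabulary
+         word = int(np.argmax(probs))   # greedy choice; beam search in practice
+         if word == eos:
+             break
+         output.append(word)
+     return output
+ \end{verbatim}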
1240
+
1241
+ For training, the true inputs are fed to the encoder,
1242
+ the true translation is fed to the decoder,
1243
+ and loss is propagated back from the outputs of the decoder across the entire sequence to sequence model.
1244
+ The network is trained to maximize the likelihood of the correct translation of each sentence in the training set.
1245
+ At inference time, a left to right beam search is used to determine which words to output.
1246
+ A few among the most likely next words are chosen for expansion after each time step.
1247
+ The beam search ends when the network outputs an end-of-sentence (\emph{EOS}) token.
1248
+ \citet{sutskever2014sequence} train the model using stochastic gradient descent without momentum,
1249
+ halving the learning rate twice per epoch, after the first five epochs.
1250
+ The approach achieves a $\textit{BLEU}$ score of 34.81,
1251
+ outperforming the best previous neural network NLP systems,
1252
+ and matching the best published results for non-neural network approaches,
1253
+ including systems that have explicitly programmed domain expertise.
1254
+ When their system is used to rerank candidate translations from another system,
1255
+ it achieves a $\textit{BLEU}$ score of 36.5.
1256
+
1257
+ The implementation which achieved these results involved eight GPUs.
1258
+ Nevertheless, training took 10 days to complete.
1259
+ One GPU was assigned to each layer of the LSTM,
1260
+ and an additional four GPUs were used simply to calculate softmax.
1261
+ The implementation was coded in C++,
1262
+ and each hidden layer of the LSTM contained 1000 nodes.
1263
+ The input vocabulary contained 160,000 words and the output vocabulary contained 80,000 words.
1264
+ Weights were initialized uniformly randomly in the range between $-0.08$ and $0.08$.
1265
+
1266
+ Another RNN approach to language translation is presented by \citet{auli2013joint}.
1267
+ Their RNN model uses the word embeddings of Mikolov
1268
+ and a lattice representation of the decoder output
1269
+ to facilitate search over the space of possible translations.
1270
+ In the lattice, each node corresponds to a sequence of words.
1271
+ They report a $\textit{BLEU}$ score of 28.5 on French-English translation tasks.
1272
+ Both papers provide results on similar datasets but
1273
+ \citet{sutskever2014sequence} only report on English to French translation
1274
+ while \citet{auli2013joint} only report on French to English translation,
1275
+ so it is not possible to compare the performance of the two models.
1276
+
1277
+ \subsection{Image captioning}
1278
+
1279
+ Recently, recurrent neural networks have been used successfully
1280
+ for generating sentences that describe photographs
1281
+ \citep{vinyals2015show, karpathy2014deep, mao2014deep}.
1282
+ In this task, a training set consists of input images $\boldsymbol{x}$ and target captions $\boldsymbol{y}$.
1283
+ Given a large set of image-caption pairs, a model is trained to predict the appropriate caption for an image.
1284
+
1285
+ \citet{vinyals2015show} follow up on the success in language to language translation
1286
+ by considering captioning as a case of image to language translation.
1287
+ Instead of both encoding and decoding with LSTMs,
1288
+ they introduce the idea of encoding an image with a convolutional neural network,
1289
+ and then decoding it with an LSTM.
1290
+ \citet{mao2014deep} independently developed a similar RNN image captioning network,
1291
+ and achieved what were then state-of-the-art results
1292
+ on the Pascal, Flickr30K, and COCO datasets.
1293
+
1294
+ \citet{karpathy2014deep} follow up on this work,
1295
+ using a convolutional network to encode images
1296
+ together with a bidirectional network attention mechanism
1297
+ and standard RNN to decode captions,
1298
+ using word2vec embeddings as word representations.
1299
+ They consider both full-image captioning and a model
1300
+ that captures correspondences between image regions and text snippets.
1301
+ At inference time, their procedure resembles the one described by \citet{sutskever2014sequence},
1302
+ where sentences are decoded one word at a time.
1303
+ The most probable word is chosen and fed to the network at the next time step.
1304
+ This process is repeated until an \emph{EOS} token is produced.
1305
+
1306
+ To convey a sense of the scale of these problems,
1307
+ \citet{karpathy2014deep} focus on three datasets of captioned images:
1308
+ Flickr8K, Flickr30K, and COCO,
1309
+ of size 50MB (8000 images), 200MB (30,000 images), and 750MB (328,000 images) respectively.
1310
+ The implementation uses the Caffe library \citep{jia2014caffe},
1311
+ and the convolutional network is pretrained on ImageNet data.
1312
+ In a revised version, the authors report that LSTMs outperform simpler RNNs
1313
+ and that learning word representations from random initializations
1314
+ is often preferable to {word2vec} embeddings.
1315
+ As an explanation, they say that {word2vec} embeddings
1316
+ may cluster words like colors together in the embedding space,
1317
+ which may not be suitable for visual descriptions of images.
1318
+
1319
+
1320
+ \subsection{Further applications}
1321
+
1322
+ Handwriting recognition is an application area
1323
+ where bidirectional LSTMs have been used to achieve state of the art results.
1324
+ In work by \citet{liwicki2007novel} and \citet{graves2009novel},
1325
+ data is collected from an interactive whiteboard.
1326
+ Sensors record the $(x,y)$ coordinates of the pen at regularly sampled time steps.
1327
+ In the more recent paper, they use a bidirectional LSTM model,
1328
+ outperforming an HMM model by achieving $81.5\%$
1329
+ word-level accuracy, compared to $70.1\%$ for the HMM.
1330
+
1331
+ In the last year, a number of papers have emerged
1332
+ that extend the success of recurrent networks
1333
+ for translation and image captioning to new domains.
1334
+ Among the most interesting of these applications
1335
+ are unsupervised video encoding \citep{srivastava2015unsupervised},
1336
+ video captioning \citep{venugopalan2015sequence},
1337
+ and program execution \citep{zaremba2014learning}.
1338
+ \citet{venugopalan2015sequence}~demonstrate a sequence to sequence architecture
1339
+ that encodes frames from a video and decodes words.
1340
+ At each time step the input to the encoding LSTM
1341
+ is the topmost hidden layer of a convolutional neural network.
1342
+ At decoding time, the network outputs probabilities over the vocabulary at each time step.
1343
+
1344
+ \citet{zaremba2014learning} experiment with networks which read
1345
+ computer programs one character at a time and predict their output.
1346
+ They focus on programs which output integers
1347
+ and find that for simple programs,
1348
+ including adding two nine-digit numbers,
1349
+ their network, which uses LSTM cells in several stacked hidden layers
1350
+ and makes a single left to right pass through the program,
1351
+ can predict the output with 99\% accuracy.
1352
+
1353
+ \section{Discussion}
1354
+
1355
+ Over the past thirty years, recurrent neural networks have gone from
1356
+ models primarily of interest for cognitive modeling and computational neuroscience,
1357
+ to powerful and practical tools for large-scale supervised learning from sequences.
1358
+ This progress owes to advances in model architectures,
1359
+ training algorithms, and parallel computing.
1360
+ Recurrent networks are especially interesting because they overcome
1361
+ many of the restrictions placed on input and output data by traditional machine learning approaches.
1362
+ With recurrent networks, the assumption of independence between consecutive examples is broken,
1363
+ and hence also the assumption of fixed-dimension inputs and outputs.
1364
+
1365
+ While LSTMs and BRNNs have set records in accuracy on many tasks in recent years,
1366
+ it is noteworthy that advances come from novel architectures
1367
+ rather than from fundamentally novel algorithms.
1368
+ Therefore, automating exploration of the space of possible models,
1369
+ for example via genetic algorithms or a Markov chain Monte Carlo approach,
1370
+ could be promising.
1371
+ Neural networks offer a wide range of transferable and combinable techniques.
1372
+ New activation functions, training procedures, initialization procedures, etc.~are generally transferable across networks and tasks,
1373
+ often conferring similar benefits.
1374
+ As the number of such techniques grows, the practicality of testing all combinations diminishes.
1375
+ It seems reasonable to infer that as a community,
1376
+ neural network researchers are exploring the space of model architectures and configurations
1377
+ much as a genetic algorithm might,
1378
+ mixing and matching techniques,
1379
+ with a fitness function in the form of evaluation metrics on major datasets of interest.
1380
+
1381
+ This inference suggests two corollaries.
1382
+ First, as just stated,
1383
+ this body of research could benefit from automated procedures
1384
+ to explore the space of models.
1385
+ Second, as we build systems designed to perform more complex tasks,
1386
+ we would benefit from improved fitness functions.
1387
+ A \textit{BLEU} score inspires less confidence than the accuracy reported on a binary classification task.
1388
+ To this end, when possible, it seems prudent to individually test techniques
1389
+ first with classic feedforward networks on datasets with established benchmarks
1390
+ before applying them to recurrent networks in settings with less reliable evaluation criteria.
1391
+
1392
+ Lastly, the rapid success of recurrent neural networks
1393
+ on natural language tasks leads us to believe that extensions of this work to longer texts would be fruitful.
1394
+ Additionally, we imagine that dialogue systems could be built
1395
+ along similar principles to the architectures used for translation,
1396
+ encoding prompts and generating responses,
1397
+ while retaining the entirety of conversation history as contextual information.
1398
+
1399
+ \section{Acknowledgements}
1400
+ The first author's research is funded by generous support from the Division of Biomedical Informatics at UCSD,
1401
+ via a training grant from the National Library of Medicine.
1402
+ This review has benefited from insightful comments from
1403
+ Vineet Bafna, Julian McAuley, Balakrishnan Narayanaswamy,
1404
+ Stefanos Poulis, Lawrence Saul, Zhuowen Tu, and Sharad Vikram.
1405
+
1406
+
1407
+
1408
+
1409
+
1410
+
1411
+ \bibliography{rnn_jmlr}
1412
+ \end{document}
papers/1506/1506.02078.tex ADDED
@@ -0,0 +1,607 @@
1
+ \documentclass{article} \usepackage{iclr2016_conference,times}
2
+ \usepackage{hyperref}
3
+ \usepackage{url}
4
+
5
+ \usepackage{graphicx}
6
+ \usepackage{tabulary}
7
+ \usepackage{amsmath}
8
+ \usepackage{amsfonts}
9
+ \usepackage{epstopdf}
10
+
11
+ \usepackage{array,multirow,graphicx}
12
+
13
+ \usepackage{slashbox}
14
+ \usepackage{natbib}
15
+ \setlength{\bibsep}{18pt}
16
+
17
+ \title{Visualizing and Understanding Recurrent Networks}
18
+
19
+ \author{
20
+ Andrej Karpathy\thanks{Both authors contributed equally to this work.} \hspace{0.4in} Justin Johnson\footnotemark[1] \hspace{0.4in} Li Fei-Fei\\
21
+ Department of Computer Science, Stanford University\\
22
+ \texttt{\small \{karpathy,jcjohns,feifeili\}@cs.stanford.edu}\\
23
+ }
24
+
25
+ \newcommand{\fix}{\marginpar{FIX}}
26
+ \newcommand{\new}{\marginpar{NEW}}
27
+
28
+
29
+
30
+ \begin{document}
31
+
32
+ \maketitle
33
+
34
+ \begin{abstract}
35
+ Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM),
36
+ are enjoying renewed interest as a result of
37
+ successful applications in a wide range of machine learning problems that involve sequential data. However,
38
+ while LSTMs provide exceptional results in practice, the source of their performance and their limitations
39
+ remain rather poorly understood. Using character-level language models as an interpretable testbed,
40
+ we aim to bridge this gap by providing an analysis of their representations, predictions and error types.
41
+ In particular, our experiments reveal the existence of interpretable cells that keep track of long-range
42
+ dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon
43
+ $n$-gram models traces the source of the LSTM improvements to long-range structural dependencies.
44
+ Finally, we provide an analysis of the remaining errors and suggest areas for further study.
45
+ \end{abstract}
46
+
47
+ \section{Introduction}
48
+
49
+ Recurrent Neural Networks, and specifically a variant with Long Short-Term Memory (LSTM) \cite{lstm},
50
+ have recently emerged as an effective model in a wide variety of applications that involve sequential data.
51
+ These include language modeling \cite{mikolov}, handwriting recognition and generation \cite{graves},
52
+ machine translation \cite{sutskever2014sequence,bahdanau2014neural}, speech recognition \cite{graves2013speech},
53
+ video analysis \cite{donahue2014long} and image captioning \cite{vinyals2014show,karpathy2014deep}.
54
+
55
+ However, both the source of their impressive performance and their shortcomings remain poorly understood. This raises
56
+ concerns about the lack of interpretability and limits our ability to design better architectures. A few recent ablation studies
57
+ analyzed the effects on performance as various gates and connections are removed \cite{odyssey,chung2014empirical}.
58
+ However, while this analysis illuminates the performance-critical pieces of the architecture, it is still limited to
59
+ examining the effects only at the global level of the final test set perplexity.
60
+ Similarly, an often cited advantage of the LSTM architecture is that it can store and retrieve information over long
61
+ time scales using its gating mechanisms, and this ability has been carefully studied in toy settings \cite{lstm}.
62
+ However, it is not immediately clear that similar mechanisms can be effectively discovered and utilized
63
+ by these networks in real-world data, and with the common use of simple stochastic gradient descent and
64
+ truncated backpropagation through time.
65
+
66
+ To our knowledge, our work provides the first empirical exploration
67
+ of the predictions of LSTMs and their learned representations on real-world data. Concretely, we use character-level
68
+ language models as an interpretable testbed for illuminating the long-range dependencies learned by LSTMs.
69
+ Our analysis reveals the existence of cells that robustly identify interpretable, high-level patterns such as
70
+ line lengths, brackets and quotes. We further quantify the LSTM predictions with comprehensive comparison
71
+ to $n$-gram models, where we find that LSTMs perform significantly better on characters that require long-range
72
+ reasoning. Finally, we conduct an error analysis in which we \textit{``peel the onion''} of errors with a
73
+ sequence of oracles. These results allow us to quantify the extent of remaining errors in several categories and
74
+ to suggest specific areas for further study.
75
+
76
+ \vspace{-0.1in}
77
+ \section{Related Work}
78
+ \vspace{-0.1in}
79
+
80
+ \textbf{Recurrent Networks}. Recurrent Neural Networks (RNNs) have a long history of applications in various
81
+ sequence learning tasks \cite{rnn,dlbook,rumelhart1985learning}. Despite their early successes, the
82
+ difficulty of training simple recurrent networks \cite{bengiornn94,pascanu2012difficulty} has encouraged various proposals for improvements
83
+ to their basic architecture. Among the most successful variants are the Long Short Term Memory networks \cite{lstm}, which
84
+ can in principle store and retrieve information over long time periods with explicit gating mechanisms and a
85
+ built-in constant error carousel. In recent years there has been renewed interest in further improving on the basic architecture by
86
+ modifying the functional form as seen with Gated Recurrent Units \cite{gru}, incorporating content-based soft attention
87
+ mechanisms \cite{bahdanau2014neural,memorynets}, push-pop stacks \cite{armand}, or more generally external memory arrays
88
+ with both content-based and relative addressing mechanisms \cite{ntm}. In this work we focus the majority of our analysis on the
89
+ LSTM due to its widespread popularity and a proven track record.
90
+
91
+ \textbf{Understanding Recurrent Networks}. While there is an abundance of work that modifies or extends the basic LSTM
92
+ architecture, relatively little attention has been paid to understanding the properties of its representations and predictions.
93
+ \cite{odyssey} recently conducted a comprehensive study of LSTM components. Chung et al. evaluated GRU
94
+ compared to LSTMs \cite{chung2014empirical}. \cite{jozefowicz2015empirical} conduct an automated architecture search of thousands of RNN architectures.
95
+ \cite{rnndepth} examined the effects of depth. These approaches study recurrent networks based only on the variations
96
+ in the final test set cross entropy, while we break down the performance into interpretable categories and study individual error types.
97
+ Most related to our work is \cite{hermans2013training}, who also study the long-term interactions learned by recurrent networks in
98
+ the context of character-level language models, specifically in the context of parenthesis closing and time-scales analysis. Our work complements
99
+ their results and provides additional types of analysis. Lastly, we are heavily influenced by work on in-depth analysis of errors in object detection
100
+ \cite{hoiem2012diagnosing}, where the final mean average precision is similarly broken down and studied in detail.
101
+
102
+ \vspace{-0.1in}
103
+ \section{Experimental Setup}
104
+ \vspace{-0.1in}
105
+
106
+ We first describe three commonly used recurrent network architectures (RNN, LSTM and the GRU),
107
+ then describe their use in sequence learning and finally discuss the optimization.
108
+
109
+ \vspace{-0.1in}
110
+ \subsection{Recurrent Neural Network Models}
111
+ \vspace{-0.15in}
112
+
113
+ The simplest instantiation of a deep recurrent network arranges hidden state vectors $h_t^l$ in a two-dimensional grid,
114
+ where $t = 1 \ldots T$ is thought of as time and $l = 1 \ldots L$ is the depth. The bottom row of vectors $h_t^0 = x_t$
115
+ at depth zero holds the input vectors $x_t$ and each vector in the top row $\{h_t^L\}$ is used to predict an
116
+ output vector $y_t$. All intermediate vectors $h_t^l$ are computed with a recurrence formula based on $h_{t-1}^l$ and $h_t^{l-1}$.
117
+ Through these hidden vectors, each output $y_t$ at time step $t$ becomes a function of all input vectors up to $t$,
118
+ $\{x_1, \ldots, x_t \}$. The precise mathematical form of the recurrence $(h_{t-1}^l$ , $h_t^{l-1}) \rightarrow h_t^l$ varies from
119
+ model to model and we describe these details next.
120
+
121
+ \textbf{Vanilla Recurrent Neural Network} (RNN) has a recurrence of the form
122
+
123
+ \vspace{-0.15in}
124
+ \begin{align*}
125
+ h^l_t = \tanh W^l \begin{pmatrix}h^{l - 1}_t\\h^l_{t-1}\end{pmatrix}
126
+ \end{align*}
127
+ \vspace{-0.15in}
128
+
129
+ where we assume that all $h \in \mathbb{R}^n$. The parameter matrix $W^l$ on each layer has dimensions [$n \times 2n$] and $\tanh$
130
+ is applied elementwise. Note that $W^l$ varies between layers but is shared through time. We omit the bias vectors for brevity.
131
+ Interpreting the equation above, the inputs from the layer
132
+ below in depth ($h_t^{l-1}$) and before in time ($h_{t-1}^l$) are transformed and interact through additive interaction before being squashed
133
+ by $\tanh$. This is known to be a weak form of coupling~\cite{mrnn}. Both the LSTM and the GRU (discussed next) include
134
+ more powerful multiplicative interactions.
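+
+ A direct NumPy transcription of this recurrence (an illustrative sketch for a single example, with the omitted bias restored) is:
+ \begin{verbatim}
+ import numpy as np
+
+ def rnn_step(W, h_below, h_prev, b):
+     """Vanilla RNN update: h_t^l = tanh(W^l [h_t^{l-1}; h_{t-1}^l] + b)."""
+     return np.tanh(W @ np.concatenate([h_below, h_prev]) + b)
+
+ n = 4
+ W = np.random.randn(n, 2 * n) * 0.08    # [n x 2n] parameter matrix
+ h = rnn_step(W, np.random.randn(n), np.zeros(n), np.zeros(n))
+ \end{verbatim}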
135
+
136
+ \textbf{Long Short-Term Memory} (LSTM)~\cite{lstm} was designed to address the difficulties of training RNNs~\cite{bengiornn94}.
137
+ In particular, it was observed that the backpropagation dynamics caused the gradients in an RNN to either vanish or explode.
138
+ It was later found that the exploding gradient concern can be alleviated with a heuristic of clipping the gradients at some maximum value \cite{pascanu2012difficulty}.
139
+ On the other hand, LSTMs were designed to mitigate the vanishing gradient problem. In addition to a hidden state vector $h_t^l$, LSTMs also maintain
140
+ a memory vector $c_t^l$. At each time step the LSTM can choose to read from, write to, or reset the cell using explicit
141
+ gating mechanisms. The precise form of the update is as follows:
142
+
143
+ \vspace{-0.1in}
144
+ \begin{minipage}{.5\linewidth}
145
+ \begin{align*}
146
+ &\begin{pmatrix}i\\f\\o\\g\end{pmatrix} =
147
+ \begin{pmatrix}\mathrm{sigm}\\\mathrm{sigm}\\\mathrm{sigm}\\\tanh\end{pmatrix}
148
+ W^l \begin{pmatrix}h^{l - 1}_t\\h^l_{t-1}\end{pmatrix}
149
+ \end{align*}
150
+ \end{minipage}\begin{minipage}{.5\linewidth}
151
+ \begin{align*}
152
+ &c^l_t = f \odot c^l_{t-1} + i \odot g\\
153
+ &h^l_t = o \odot \tanh(c^l_t)
154
+ \end{align*}
155
+ \end{minipage}
156
+ \vspace{-0.1in}
157
+
158
+ Here, the sigmoid function $\mathrm{sigm}$ and $\tanh$ are applied element-wise, and $W^l$ is a [$4n \times 2n$] matrix.
159
+ The three vectors $i,f,o \in \mathbb{R}^n$ are thought of as binary gates that control whether each memory cell is updated,
160
+ whether it is reset to zero, and whether its local state is revealed in the hidden vector, respectively. The activations of these gates
161
+ are based on the sigmoid function and hence allowed to range smoothly between zero and one to keep the model differentiable.
162
+ The vector $g \in \mathbb{R}^n$ ranges between -1 and 1 and is used to additively modify the memory contents. This additive interaction is a critical feature
163
+ of the LSTM's design, because during backpropagation a sum operation merely distributes gradients. This allows gradients on the
164
+ memory cells $c$ to flow backwards through time uninterrupted for long time periods, or at least until the flow is disrupted with the
165
+ multiplicative interaction of an active forget gate. Lastly, note that an implementation of the LSTM requires one to maintain
166
+ two vectors ($h_t^l$ and $c_t^l$) at every point in the network.
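+
+ The update above translates line by line into NumPy (an illustrative sketch for a single example; a practical implementation would batch these operations):
+ \begin{verbatim}
+ import numpy as np
+
+ def sigm(a):
+     return 1.0 / (1.0 + np.exp(-a))
+
+ def lstm_step(W, h_below, h_prev, c_prev):
+     """One LSTM step following the equations above; W is [4n x 2n]."""
+     n = h_prev.shape[0]
+     pre = W @ np.concatenate([h_below, h_prev])  # biases omitted, as in the text
+     i = sigm(pre[0 * n:1 * n])       # input gate
+     f = sigm(pre[1 * n:2 * n])       # forget gate
+     o = sigm(pre[2 * n:3 * n])       # output gate
+     g = np.tanh(pre[3 * n:4 * n])    # candidate memory content
+     c = f * c_prev + i * g           # additive update of the memory cell
+     h = o * np.tanh(c)
+     return h, c
+ \end{verbatim}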
167
+
168
+ \textbf{Gated Recurrent Unit} (GRU) \cite{gru} was recently proposed as a simpler alternative to the LSTM and takes the form:
169
+
170
+ \vspace{-0.1in}
171
+ \begin{minipage}{.5\linewidth}
172
+ \begin{align*}
173
+ &\begin{pmatrix}r\\z\end{pmatrix} =
174
+ \begin{pmatrix}\mathrm{sigm}\\\mathrm{sigm}\end{pmatrix}
175
+ W_r^l \begin{pmatrix}h^{l - 1}_t\\h^l_{t-1}\end{pmatrix}
176
+ \end{align*}
177
+ \end{minipage}\begin{minipage}{.5\linewidth}
178
+ \begin{align*}
179
+ &\tilde{h}^l_t = \tanh( W_x^l h^{l - 1}_t+ W_g^l ( r \odot h^l_{t-1}) ) \\
180
+ & h^l_t = (1 - z) \odot h^l_{t-1} + z \odot \tilde{h}^l_t
181
+ \end{align*}
182
+ \end{minipage}
183
+ \vspace{-0.1in}
184
+
185
+ Here, $W_r^l$ are [$2n \times 2n$] and $W_g^l$ and $W_x^l$ are [$n \times n$]. The GRU has the interpretation of
186
+ computing a \textit{candidate} hidden vector $\tilde{h}^l_t$ and then smoothly interpolating towards it gated by $z$.
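+
+ The corresponding NumPy sketch (again illustrative, with biases omitted as in the equations) is:
+ \begin{verbatim}
+ import numpy as np
+
+ def sigm(a):
+     return 1.0 / (1.0 + np.exp(-a))
+
+ def gru_step(Wr, Wx, Wg, h_below, h_prev):
+     """One GRU step following the equations above.
+     Wr is [2n x 2n]; Wx and Wg are [n x n]."""
+     n = h_prev.shape[0]
+     gates = sigm(Wr @ np.concatenate([h_below, h_prev]))
+     r, z = gates[:n], gates[n:]               # reset and update gates
+     h_tilde = np.tanh(Wx @ h_below + Wg @ (r * h_prev))  # candidate vector
+     return (1 - z) * h_prev + z * h_tilde     # interpolation gated by z
+ \end{verbatim}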
187
+
188
+ \vspace{-0.1in}
189
+ \subsection{Character-level Language Modeling}
190
+ \vspace{-0.1in}
191
+
192
+ We use character-level language modeling as an interpretable testbed for sequence learning. In this setting, the input to
193
+ the network is a sequence of characters and the network is trained to predict the next character in the sequence with
194
+ a Softmax classifier at each time step. Concretely, assuming a fixed vocabulary of $K$ characters we encode all characters
195
+ with $K$-dimensional 1-of-$K$ vectors $\{x_t\}, t = 1, \ldots, T$, and feed these to the recurrent network to obtain
196
+ a sequence of $D$-dimensional hidden vectors at the last layer of the network $\{h_t^L\}, t = 1, \dots, T$. To obtain
197
+ predictions for the next character in the sequence we project this top layer of activations to a sequence of vectors $\{y_t\}$, where
198
+ $y_t = W_y h_t^L$ and $W_y$ is a [$K \times D$] parameter matrix. These vectors are interpreted as holding the (unnormalized)
199
+ log probability of the next character in the sequence and the objective is to minimize the average cross-entropy loss over all targets.
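+
+ Concretely, the per-step projection and loss can be written as follows (an illustration for a single sequence; \texttt{H} holds the top-layer hidden vectors and \texttt{targets} the indices of the true next characters).
+ \begin{verbatim}
+ import numpy as np
+
+ def char_lm_loss(H, W_y, targets):
+     """Average cross-entropy of next-character predictions.
+     H: [T x D] hidden vectors, W_y: [K x D], targets: length-T index array."""
+     logits = H @ W_y.T                            # [T x K] unnormalized log probs
+     logits -= logits.max(axis=1, keepdims=True)   # numerical stability
+     log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
+     return -log_probs[np.arange(len(targets)), targets].mean()
+ \end{verbatim}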
200
+
201
+ \vspace{-0.1in}
202
+ \subsection{Optimization}
203
+ \label{subsec:optimization}
204
+ \vspace{-0.1in}
205
+
206
+ Following previous work \cite{sutskever2014sequence} we initialize all parameters uniformly in range $[-0.08, 0.08]$. We use mini-batch
207
+ stochastic gradient descent with batch size 100 and RMSProp \cite{rmsprop} per-parameter adaptive
208
+ update with base learning rate $2 \times 10 ^{-3}$ and decay $0.95$. These settings work robustly with all of our models.
209
+ The network is unrolled for 100 time steps. We train each model for 50 epochs and decay the learning rate after 10 epochs by multiplying it with a factor of
210
+ 0.95 after each additional epoch. We use early stopping based on validation performance and cross-validate the amount of
211
+ dropout for each model individually.
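+
+ For reference, the per-parameter RMSProp update used here takes roughly the following form (a sketch; the small stabilizing constant \texttt{eps} is a standard addition whose value below is illustrative).
+ \begin{verbatim}
+ import numpy as np
+
+ def rmsprop_update(param, grad, cache, lr=2e-3, decay=0.95, eps=1e-8):
+     """cache is a running average of squared gradients, one per parameter."""
+     cache = decay * cache + (1 - decay) * grad ** 2
+     param = param - lr * grad / (np.sqrt(cache) + eps)
+     return param, cache
+ \end{verbatim}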
212
+
213
+ \vspace{-0.1in}
214
+ \section{Experiments}
215
+ \vspace{-0.1in}
216
+
217
+ \textbf{Datasets}.
218
+ Two datasets previously used in the context of character-level language models are the Penn Treebank
219
+ dataset \cite{marcus1993building} and the Hutter Prize 100MB of Wikipedia dataset \cite{hutter}. However,
220
+ both datasets contain a mix of common language and special markup. Our goal is not to compete with previous
221
+ work but rather to study recurrent networks in a controlled setting and at both ends of the
222
+ spectrum of degree of structure. Therefore, we chose to use Leo Tolstoy's \textit{War and Peace} (WP) novel,
223
+ which consists of 3,258,246 characters of almost entirely English text with minimal markup, and at the other
224
+ end of the spectrum the source code of the \textit{Linux Kernel} (LK).
225
+ We shuffled all header and source files randomly and concatenated them into a single file to form
226
+ the 6,206,996 character long dataset. We split the data into train/val/test splits as 80/10/10 for WP and
227
+ 90/5/5 for LK. Therefore, there are approximately 300,000 characters in the validation/test splits in each case. The
228
+ total number of characters in the vocabulary is 87 for WP and 101 for LK.
229
+
230
+ \vspace{-0.1in}
231
+ \subsection{Comparing Recurrent Networks}
232
+ \vspace{-0.1in}
233
+
234
+ We first train several recurrent network models to support further analysis and to compare their performance
235
+ in a controlled setting. In particular, we train models in the cross product of type (LSTM/RNN/GRU), number of layers
236
+ (1/2/3), number of parameters (4 settings), and both datasets (WP/LK). For a 1-layer LSTM we used hidden size
237
+ vectors of 64,128,256, and 512 cells, which with our character vocabulary sizes translates to approximately
238
+ 50K, 130K, 400K, and 1.3M parameters respectively. The sizes of hidden layers of the other models were carefully
239
+ chosen so that the total number of parameters in each case is as close as possible to these 4 settings.
240
+
241
+ The test set results are shown in Figure \ref{fig:performance}.
242
+ Our consistent finding is that depth of at least two is beneficial. However, between two and three layers our
243
+ results are mixed. Additionally, the results are mixed between the LSTM and the GRU, but both significantly
244
+ outperform the RNN. We also computed the fraction of times that each pair of models
245
+ agree on the most likely character and use it to render a t-SNE \cite{tsne} embedding (we found this more
246
+ stable and robust than the KL divergence). The plot (Figure \ref{fig:performance}, right) further supports
247
+ the claim that the LSTM and the GRU make similar predictions while the RNNs form their own cluster.
248
+
249
+ \renewcommand\tabcolsep{3pt}
250
+
251
+ \begin{figure*}[t]
252
+ \centering
253
+ \begin{minipage}{.7\textwidth}
254
+ \centering
255
+ \small
256
+ \begin{tabulary}{\linewidth}{LCCC|CCC|CCC}
257
+ & \multicolumn{3}{c}{\textbf{LSTM}} & \multicolumn{3}{c}{\textbf{RNN}} & \multicolumn{3}{c}{\textbf{GRU}} \\
258
+ Layers & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 \\
259
+ \hline
260
+ \hline
261
+ Size & \multicolumn{9}{c}{War and Peace Dataset} \\
262
+ \hline
263
+ 64 & 1.449 & 1.442 & 1.540 & 1.446 & 1.401 & 1.396 & 1.398 & \textbf{1.373} & 1.472 \\
264
+ 128 & 1.277 & 1.227 & 1.279 & 1.417 & 1.286 & 1.277 & 1.230 & \textbf{1.226} & 1.253 \\
265
+ 256 & 1.189 & \textbf{1.137} & 1.141 & 1.342 & 1.256 & 1.239 & 1.198 & 1.164 & 1.138 \\
266
+ 512 & 1.161 & 1.092 & 1.082 & - & - & - & 1.170 & 1.201 & \textbf{1.077} \\
267
+ \hline
268
+ \multicolumn{10}{c}{Linux Kernel Dataset} \\
269
+ \hline
270
+ 64 & 1.355 & \textbf{1.331} & 1.366 & 1.407 & 1.371 & 1.383 & 1.335 & 1.298 & 1.357 \\
271
+ 128 & 1.149 & 1.128 & 1.177 & 1.241 & \textbf{1.120} & 1.220 & 1.154 & 1.125 & 1.150 \\
272
+ 256 & 1.026 & \textbf{0.972} & 0.998 & 1.171 & 1.116 & 1.116 & 1.039 & 0.991 & 1.026 \\
273
+ 512 & 0.952 & 0.840 & 0.846 & - & - & - & 0.943 & 0.861 & \textbf{0.829} \\
274
+ \end{tabulary}
275
+ \end{minipage}\begin{minipage}{0.3\textwidth}
276
+ \centering
277
+ \vspace{0.24in}
278
+ \includegraphics[width=1\textwidth]{model-tsne-crop.pdf}
279
+ \end{minipage}
280
+ \caption{\textbf{Left:} The \textbf{test set cross-entropy loss} for all models and datasets (low is good).
281
+ Models in each row have nearly equal number of parameters. The test set has 300,000 characters.
282
+ The standard deviation, estimated with 100 bootstrap samples, is less than $4\times10^{-3}$ in all cases.
283
+ \textbf{Right:} A t-SNE embedding based on the probabilities assigned to
284
+ test set characters by each model on War and Peace. The color, size, and marker correspond to model type,
285
+ model size, and number of layers.}
286
+ \label{fig:performance}
287
+ \vspace{-0.2in}
288
+ \end{figure*}
289
+ \renewcommand\tabcolsep{6pt}
290
+
291
+ \vspace{-0.1in}
292
+ \subsection{Internal Mechanisms of an LSTM}
293
+ \vspace{-0.1in}
294
+
295
+ \textbf{Interpretable, long-range LSTM cells.}
296
+ An LSTM can in principle use its
297
+ memory cells to remember long-range information and keep track of various attributes of text it is currently processing. For instance, it
298
+ is a simple exercise to write down toy cell weights that would allow the cell to keep track of whether it is inside a
299
+ quoted string. However, to our knowledge, the existence of such cells has never been experimentally demonstrated
300
+ on real-world data. In particular, it could be argued that even if the LSTM is in principle capable of using these
301
+ operations, practical optimization challenges (i.e. SGD dynamics, or approximate gradients due to truncated backpropagation
302
+ through time) might prevent it from discovering these solutions. In this experiment we verify that multiple interpretable
303
+ cells do in fact exist in these networks (see Figure \ref{fig:sample}). For instance, one cell is clearly
304
+ acting as a line length counter, starting with a high value and then slowly decaying with each character until the next newline. Other
305
+ cells turn on inside quotes, the parenthesis after if statements, inside strings or comments, or with increasing strength as the
306
+ indentation of a block of code increases. In particular, note that truncated backpropagation with our hyperparameters
307
+ prevents the gradient signal from directly noticing dependencies longer than 100 characters, but we still observe
308
+ cells that reliably keep track of quotes or comment blocks much longer than 100 characters (e.g. $\sim 230$ characters in the
309
+ quote detection cell example in Figure \ref{fig:sample}). We hypothesize that these cells first develop on patterns shorter than
310
+ 100 characters but then also appropriately generalize to longer sequences.
311
+
312
+ \textbf{Gate activation statistics}. We can gain some insight into the internal mechanisms of the LSTM by studying the gate
313
+ activations in the networks as they process test set data. We were particularly interested in
314
+ looking at the distributions of saturation regimes in the networks, where we define a gate to be
315
+ left or right-saturated if its activation is less than 0.1 or more than 0.9, respectively, or unsaturated otherwise. We then
316
+ compute the fraction of times that each LSTM gate spends left or right saturated, and plot the results in Figure \ref{fig:saturations}.
317
+ For instance, the number of often right-saturated forget gates is particularly interesting, since this corresponds to cells that
318
+ remember their values for very long time periods. Note that there are multiple cells that are almost always right-saturated
319
+ (showing up on bottom, right of the forget gate scatter plot), and hence function as nearly perfect integrators. Conversely,
320
+ there are no cells that function in purely feed-forward fashion, since their forget gates would show up as consistently left-saturated
321
+ (in top, left of the forget gate scatter plot). The output gate statistics also reveal that there are no cells that get consistently
322
+ revealed or blocked to the hidden state. Lastly, a surprising finding is that unlike the other two layers that contain gates with
323
+ nearly binary regime of operation (frequently either left or right saturated), the activations in the first layer are much more diffuse
324
+ (near the origin in our scatter plots). We struggle to explain this finding but note that it is present across all of our models. A similar
325
+ effect is present in our GRU model, where the first layer reset gates $r$ are nearly never right-saturated and the update gates $z$ are
326
+ rarely ever left-saturated. This points towards a purely feed-forward mode of operation on this layer, where the previous
327
+ hidden state is barely used.
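+
+ These statistics reduce to a few lines of code (a sketch; \texttt{gate\_activations} is assumed to be a $T \times n$ array holding one gate's activations over the test set).
+ \begin{verbatim}
+ import numpy as np
+
+ def saturation_fractions(gate_activations, lo=0.1, hi=0.9):
+     """Fraction of time steps each gate unit is left- or right-saturated."""
+     left = (gate_activations < lo).mean(axis=0)    # activation < 0.1
+     right = (gate_activations > hi).mean(axis=0)   # activation > 0.9
+     return left, right                             # one value per gate unit
+ \end{verbatim}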
328
+
329
+ \begin{figure*}
330
+ \includegraphics[width=1\linewidth]{sample_final.png}
331
+ \caption{Several examples of cells with interpretable activations discovered in our best Linux Kernel and
332
+ War and Peace LSTMs. Text color corresponds to $\tanh(c)$, where $-1$ is red and $+1$ is blue.}
333
+ \label{fig:sample}
334
+ \vspace{-0.15in}
335
+ \end{figure*}
336
+
337
+ \begin{figure*}
338
+ \includegraphics[width=0.58\linewidth]{saturations_scatter_3layer_white.png}
339
+ \hspace{0.04\linewidth}
340
+ \includegraphics[width=0.38\linewidth]{saturations_scatter_3layer_white_gru.png}
341
+ \caption{
342
+ \textbf{Left three:} Saturation plots for an LSTM. Each circle is a gate in the LSTM and its position is
343
+ determined by the fraction of time it is left or right-saturated. These fractions must add to at most one
344
+ (indicated by the diagonal line). \textbf{Right two:} Saturation plot for a 3-layer GRU model.
345
+ }
346
+ \label{fig:saturations}
347
+ \vspace{-0.15in}
348
+ \end{figure*}
349
+
350
+ \vspace{-0.1in}
351
+ \subsection{Understanding Long-Range Interactions}
352
+ \vspace{-0.1in}
353
+
354
+ Good performance of LSTMs is frequently attributed to their ability to store long-range information.
355
+ In this section we test this hypothesis by comparing an LSTM with baseline models that
356
+ can only utilize information from a fixed number of previous steps. In particular, we consider two baselines:
357
+
358
+ \hspace{0.1in} \textit{1. $n$-NN}: A fully-connected neural network with one hidden layer and $\mathrm{tanh}$
359
+ nonlinearities. The input to the network is a sparse binary vector of dimension
360
+ $nK$ that concatenates the one-of-$K$ encodings of $n$ consecutive characters. We optimize the model
361
+ as described in Section~\ref{subsec:optimization} and cross-validate the size of the hidden layer.
362
+
363
+ \hspace{0.1in} \textit{2. $n$-gram}: An unpruned $(n+1)$-gram language model
364
+ using modified Kneser-Ney smoothing \cite{chen1999empirical}. This is a standard smoothing
365
+ method for language models \cite{huang2001spoken}. All models were trained using the
366
+ popular KenLM software package \cite{Heafield-estimate}.
367
+
368
+ \setcounter{table}{1}
369
+ \begin{table*}[t]
370
+ \small
371
+ \centering
372
+ \begin{tabulary}{\linewidth}{L|CCCCCCCCC|C}
373
+ \backslashbox{Model}{$n$} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 20\\
374
+ \hline
375
+ \multicolumn{11}{c}{War and Peace Dataset} \\
376
+ \hline
377
+ $n$-gram & 2.399 & 1.928 & 1.521 & 1.314 & 1.232 & 1.203 & \textbf{1.194} & 1.194 & 1.194 & 1.195 \\
378
+ $n$-NN & 2.399 & 1.931 & 1.553 & 1.451 & 1.339 & \textbf{1.321} & - & - & - & - \\
379
+ \hline
380
+ \multicolumn{11}{c}{Linux Kernel Dataset} \\
381
+ \hline
382
+ $n$-gram & 2.702 & 1.954 & 1.440 & 1.213 & 1.097 & 1.027 & 0.982 & 0.953 & 0.933 & \textbf{0.889} \\
383
+ $n$-NN & 2.707 & 1.974 & 1.505 & 1.395 & \textbf{1.256} & 1.376 & - & - & - & - \\
384
+ \end{tabulary}
385
+ \caption{The \textbf{test set cross-entropy loss} on both datasets for the $n$-gram and $n$-NN models (low is good).
386
+ The standard deviation estimate using 100 bootstrap samples is below $4\times10^{-3}$ in all cases.}
387
+ \label{tab:ngram-performance}
388
+ \end{table*}
389
+
390
+ \textbf{Performance comparisons.}
391
+ The performance of both baseline models is shown in Table~\ref{tab:ngram-performance}.
392
+ The $n$-gram and $n$-NN models perform nearly identically for small values of $n$,
393
+ but for larger values the $n$-NN models start to overfit and the $n$-gram model performs better.
394
+ Moreover, we see that on both datasets our best recurrent network outperforms the 20-gram
395
+ model (1.077 vs. 1.195 on WP and 0.84 vs. 0.889 on LK). It is difficult to make a direct model size comparison,
396
+ but the 20-gram model file is 3GB, while our largest checkpoints are 11MB. However, the assumptions
397
+ encoded in the Kneser-Ney smoothing model are intended for word-level modeling of natural
398
+ language and may not be optimal for character-level data. Despite this concern, these results already provide
399
+ weak evidence that the recurrent networks are effectively utilizing information beyond 20 characters.
400
+
401
+ \begin{figure}
402
+ \centering
403
+ \includegraphics[width=0.19\linewidth]{linux_error_venn.png}
404
+ \includegraphics[width=0.39\linewidth]{linux-lstm-vs-kenlm-per-char.pdf}
405
+ \includegraphics[width=0.39\linewidth]{warpeace-lstm-vs-kenlm-per-char.pdf}
406
+ \caption{
407
+ \textbf{Left:} Overlap of test-set errors between our best 3-layer LSTM and
408
+ the $n$-gram models (low area is good).
409
+ \textbf{Middle/Right:} Mean probabilities assigned to a correct character (higher is better),
410
+ broken down by the character, and then sorted by the difference between two models.
411
+ ``\textless s\textgreater'' is the space character. LSTM (red) outperforms the
412
+ 20-gram model (blue) on special characters that require long-range reasoning.
413
+ Middle: LK dataset, Right: WP dataset.
414
+ }
415
+ \label{fig:error-venn}
416
+ \vspace{-0.1in}
417
+ \end{figure}
418
+
419
+ \textbf{Error Analysis.}
420
+ It is instructive to delve deeper into the errors made by both recurrent networks and $n$-gram models.
421
+ In particular, we define a character to be an error if the probability assigned to it by a model on the previous time step
422
+ is below 0.5. Figure~\ref{fig:error-venn} (left) shows the overlap between the test-set errors for the
423
+ 3-layer LSTM, and the best $n$-NN and $n$-gram models. We see that the majority of errors are
424
+ shared by all three models, but each model also has its own unique errors.
425
+
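+ Concretely, the error sets compared in the Venn diagram can be computed as in the following sketch (hypothetical array names; each array holds the probability a model assigned to the correct character at every test position):
+
+ \begin{verbatim}
+ import numpy as np
+
+ def error_positions(correct_char_probs, threshold=0.5):
+     # Indices of test-set characters counted as errors for one model.
+     return set(np.flatnonzero(correct_char_probs < threshold))
+
+ # Toy probabilities for three models over the same 1000 test characters.
+ rng = np.random.default_rng(0)
+ lstm, nn, ngram = (error_positions(rng.random(1000)) for _ in range(3))
+
+ shared = lstm & nn & ngram      # errors made by all three models
+ lstm_only = lstm - nn - ngram   # errors unique to the LSTM
+ print(len(shared), len(lstm_only))
+ \end{verbatim}
+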
426
+ To gain deeper insight into the errors that are unique to the LSTM or the 20-gram model,
427
+ we compute the mean probability assigned to each character in the vocabulary across the test
428
+ set. In Figure~\ref{fig:error-venn} (middle,right) we display the 10 characters where each
429
+ model has the largest advantage over the other.
430
+ On the Linux Kernel dataset, the LSTM displays a large advantage on special characters that
431
+ are used to structure C programs, including whitespace and brackets. The War and Peace
432
+ dataset features an interesting long-term dependency with the carriage return, which occurs
433
+ approximately every 70 characters. Figure~\ref{fig:error-venn} (right) shows that the LSTM has a
434
+ distinct advantage on this character. To accurately predict the presence of the carriage return
435
+ the model likely needs to keep track of its distance since the last carriage return. The cell example
436
+ we've highlighted in Figure~\ref{fig:sample} (top, left) seems particularly well-tuned for
437
+ this specific task. Similarly, to predict a closing bracket or quotation mark, the model must be
438
+ aware of the corresponding open bracket, which may have appeared many time steps ago.
439
+ The fact that the LSTM performs significantly better than the 20-gram model on these characters
440
+ provides strong evidence that the model is capable of effectively keeping track of long-range interactions.
441
+
442
+ \textbf{Case study: closing brace}. Of these structural characters,
443
+ the one that requires the longest-term reasoning is the closing
444
+ brace (``\}'') on the Linux Kernel dataset. Braces are used to denote blocks of code, and may be
445
+ nested; as such, the distance between an opening brace and its corresponding closing brace can
446
+ range from tens to hundreds of characters. This feature makes the closing brace an ideal test
447
+ case for studying the ability of the LSTM to reason over various time scales. We group closing
448
+ brace characters on the test set by the distance to their corresponding open brace and compute
449
+ the mean probability assigned by the LSTM and the 20-gram model to closing braces within each
450
+ group. The results are shown in Figure~\ref{fig:brace-distance} (left). First, note that the LSTM
451
+ only slightly outperforms the 20-gram model in the first bin, where the distance between braces
452
+ is only up to 20 characters. After this point the performance of the 20-gram model stays
453
+ relatively constant, reflecting a baseline probability of predicting the closing brace without seeing
454
+ its matching opening brace. Compared to this baseline, we see that the LSTM gains significant
455
+ boosts up to 60 characters, and then its performance delta slowly decays over time as it becomes
456
+ difficult to keep track of the dependence.
457
+
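+ The grouping used for this analysis can be sketched as follows (illustrative code, not the analysis script itself); a stack recovers the distance between each closing brace and its matching opening brace:
+
+ \begin{verbatim}
+ from collections import defaultdict
+
+ def closing_brace_distances(text):
+     # Map each '}' position to its distance from the matching '{'.
+     stack, distances = [], {}
+     for i, ch in enumerate(text):
+         if ch == '{':
+             stack.append(i)
+         elif ch == '}' and stack:
+             distances[i] = i - stack.pop()
+     return distances
+
+ def bucket_mean_probs(distances, probs, bucket=20):
+     # Mean model probability of '}' grouped into distance buckets of 20.
+     buckets = defaultdict(list)
+     for pos, dist in distances.items():
+         buckets[dist // bucket].append(probs[pos])
+     return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}
+
+ text = "int f() { if (x) { return 1; } return 0; }"
+ probs = [0.5] * len(text)  # stand-in for per-character model probabilities
+ print(bucket_mean_probs(closing_brace_distances(text), probs))
+ \end{verbatim}
+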
458
+ \textbf{Training dynamics.}
459
+ It is also instructive to examine the training dynamics of the LSTM by comparing it
460
+ with the trained $n$-NN models throughout training, using the (symmetric) KL divergence between the
461
+ predictive distributions on the test set. We plot the divergence and the difference in the
462
+ mean loss in Figure~\ref{fig:brace-distance} (right). Notably, we see that in the first few
463
+ iterations the LSTM behaves like the 1-NN model but then diverges from it soon after. The LSTM
464
+ then behaves most like the 2-NN, 3-NN, and 4-NN models in turn. This experiment suggests that
465
+ the LSTM ``grows'' its competence over increasingly longer dependencies during training.
466
+ This insight might be related to why Sutskever et al. \cite{sutskever2014sequence} observe
467
+ improvements when they reverse the source sentences in their encoder-decoder architecture for machine
468
+ translation. The inversion introduces short-term dependencies that the LSTM can model first,
469
+ and then longer dependencies are learned over time.
470
+
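+ The similarity measure used here can be written down directly; a minimal NumPy sketch of the mean symmetric KL divergence between two models' predictive distributions is:
+
+ \begin{verbatim}
+ import numpy as np
+
+ def symmetric_kl(p, q, eps=1e-12):
+     # p, q: arrays of shape (T, K); each row is a predictive distribution
+     # over the K characters at one test-set position.
+     p = np.clip(p, eps, 1.0)
+     q = np.clip(q, eps, 1.0)
+     kl_pq = np.sum(p * np.log(p / q), axis=1)
+     kl_qp = np.sum(q * np.log(q / p), axis=1)
+     return np.mean(kl_pq + kl_qp)
+
+ # Toy softmax outputs of two models over a 65-character vocabulary.
+ rng = np.random.default_rng(0)
+ p = rng.dirichlet(np.ones(65), size=100)
+ q = rng.dirichlet(np.ones(65), size=100)
+ print(symmetric_kl(p, q))
+ \end{verbatim}
+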
471
+ \begin{figure*}
472
+ \centering
473
+ \includegraphics[width=0.48\textwidth]{linux-curly-dist-vs-prob-v2.pdf}
474
+ \includegraphics[width=0.48\textwidth]{linux-lstm-vs-nn-over-time.pdf}
475
+ \caption{
476
+ \textbf{Left}: Mean probabilities that the LSTM and 20-gram model assign to
477
+ the ``\}" character, bucketed by the distance to the matching ``\{".
478
+ \textbf{Right}: Comparison of the similarity between 3-layer LSTM and the $n$-NN baselines
479
+ over the first 3 epochs of training, as measured by the symmetric KL-divergence (middle) and the
480
+ test set loss (right). Low KL indicates similar predictions, and positive $\Delta$loss indicates that
481
+ the LSTM outperforms the baseline.
482
+ }
483
+ \label{fig:brace-distance}
484
+ \vspace{-0.15in}
485
+ \end{figure*}
486
+
487
+ \vspace{-0.1in}
488
+ \subsection{Error Analysis: Breaking Down the Failure Cases}
489
+ \vspace{-0.1in}
490
+
491
+ In this section we break down the LSTM's errors into categories to study the remaining limitations,
492
+ the relative severity of each error type, and to suggest areas for further study. We focus on
493
+ the War and Peace dataset where it is easier to categorize the errors. Our approach is to
494
+ \textit{``peel the onion''} by iteratively removing the errors with a series of constructed oracles.
495
+ As in the last section, we consider a character to be an error if the probability it was assigned
496
+ by the model in the previous time step is below 0.5. Note that the order in which the oracles are
497
+ applied influences the results. We tried to apply the oracles
498
+ in order of increasing difficulty of removing each error category and believe that the final results
499
+ are instructive despite this downside. The oracles we use are, in order:
500
+
501
+ \textbf{$n$-gram oracle.} First, we construct optimistic $n$-gram oracles that eliminate errors that
502
+ might be fixed with better modeling of short dependencies. In particular, we evaluate the $n$-gram
503
+ model ($n = 1, \ldots, 9$) and remove a character error if it is correctly classified (probability assigned
504
+ to that character greater than 0.5) by any of these models. This gives us an approximate idea of the
505
+ amount of signal present only in the last 9 characters, and how many errors we could optimistically
506
+ hope to eliminate without needing to reason over long time horizons.
507
+
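+ A sketch of this oracle follows (hypothetical data layout: one array per $n$-gram model holding the probability assigned to the correct character at every test position):
+
+ \begin{verbatim}
+ import numpy as np
+
+ def ngram_oracle(lstm_errors, ngram_correct_probs, threshold=0.5):
+     # Remove LSTM errors that any n-gram model (n = 1..9) gets right.
+     fixed = {pos for pos in lstm_errors
+              if any(p[pos] > threshold
+                     for p in ngram_correct_probs.values())}
+     return lstm_errors - fixed, fixed
+
+ # Toy usage over 1000 test positions.
+ rng = np.random.default_rng(0)
+ errors = set(range(0, 1000, 7))
+ probs = {n: rng.random(1000) for n in range(1, 10)}
+ remaining, removed = ngram_oracle(errors, probs)
+ print(len(removed), "errors fall into the n-gram category")
+ \end{verbatim}
+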
508
+ \vspace{-0.07in}
509
+ \textbf{Dynamic $n$-long memory oracle.} To motivate the next oracle, consider the string \textit{``Jon yelled at Mary but Mary couldn't
510
+ hear him.''} One interesting and consistent failure mode that we noticed in the predictions is that if the LSTM fails to predict the characters
511
+ of the first occurrence of \textit{``Mary''} then it will almost always also fail to predict the same characters of the second occurrence,
512
+ with a nearly identical pattern of errors. However, in principle the presence of the first mention should make the second much
513
+ more likely. The LSTM could conceivably store a summary of previously seen characters in the data and fall back on this memory when it is
514
+ uncertain. However, this does not appear to take place in practice. This limitation is related to the improvements
515
+ seen in ``dynamic evaluation'' \cite{mikolov2012statistical,jelinek1991dynamic} of recurrent language models,
516
+ where an RNN is allowed to train on the test set characters during evaluation as long as it sees them
517
+ only once. In this mode of operation when the RNN trains on the first occurrence of \textit{``Mary''}, the log probabilities on the second
518
+ occurrence are significantly better. We hypothesize that this \textit{dynamic} aspect is a common feature of sequence data,
519
+ where certain subsequences that might not frequently occur in the training data should still be more likely if they
520
+ were present in the immediate history. However, this general algorithm does not seem to be learned by the LSTM. Our
521
+ dynamic memory oracle quantifies the severity of this limitation by removing errors in all words (starting with the second
522
+ character) that can be found as a substring in the last $n$ characters (we use $n \in \{100, 500, 1000, 5000\}$).
523
+
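+ A simplified sketch of this oracle (word boundaries are taken to be whitespace for simplicity):
+
+ \begin{verbatim}
+ def dynamic_memory_oracle(text, error_positions, n=500):
+     # Remove errors inside words that also occur, as a substring, in the
+     # previous n characters; errors on a word's first character are kept.
+     removed, start = set(), 0
+     for i, ch in enumerate(text + " "):
+         if ch.isspace():
+             word = text[start:i]
+             history = text[max(0, start - n):start]
+             if word and word in history:
+                 removed |= {p for p in error_positions if start < p < i}
+             start = i + 1
+     return error_positions - removed, removed
+
+ text = "Jon yelled at Mary but Mary couldn't hear him."
+ errors = {24, 25, 26}  # hypothetical errors on the second "Mary"
+ print(dynamic_memory_oracle(text, errors, n=100))
+ \end{verbatim}
+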
524
+ \begin{figure*}
525
+ \centering
526
+ \includegraphics[width=1\linewidth]{nips15_onion.pdf}
527
+ \caption{
528
+ \textbf{Left:} LSTM errors removed one by one with oracles, starting from top of the pie chart
529
+ and going counter-clockwise. The area of each slice corresponds to the fraction of errors contributed.
530
+ ``$n$-memory'' refers to the dynamic memory oracle with a context of $n$ previous characters. ``Word $t$-train''
531
+ refers to the rare words oracle with word count threshold of $t$.
532
+ \textbf{Right:} Concrete examples of text from the test set for each error type. Blue color
533
+ highlights the relevant characters with the associated error. For the memory category we also
534
+ highlight the repeated substrings with red bounding rectangles.}
535
+ \label{fig:errors}
536
+ \vspace{-0.15in}
537
+ \end{figure*}
538
+
539
+ \vspace{-0.07in}
540
+ \textbf{Rare words oracle.} Next, we construct an oracle that eliminates errors for rare words
541
+ that occur only up to $n$ times in the training data ($n = 0, \ldots, 5$). This estimates the severity of errors
542
+ that could optimistically be eliminated by increasing the size of the training data, or with pretraining.
543
+
544
+ \vspace{-0.07in}
545
+ \textbf{Word model oracle.} We noticed that a large portion of the errors occur on the first character
546
+ of each word. Intuitively, the task of selecting the next word in the sequence is harder than completing
547
+ the last few characters of a known word. Motivated by this observation we constructed an oracle that
548
+ eliminates all errors after a space, a quote, or a newline. Interestingly, a large portion of errors can be
549
+ found after a newline, since the models have to learn that newline has semantics similar to a space.
550
+
551
+ \vspace{-0.07in}
552
+ \textbf{Punctuation oracle.} The remaining errors become difficult to blame on one particular, interpretable
553
+ aspect of the modeling. At this point we construct an oracle that removes errors on all punctuation.
554
+
555
+ \vspace{-0.07in}
556
+ \textbf{Boost oracles.} The remaining errors that do not show salient structures or patterns are removed by an
557
+ oracle that boosts the probability of the correct letter by a fixed amount.
558
+ These oracles allow us to understand the distribution of the difficulty of the remaining errors.
559
+
560
+ We now subject two LSTM models to the error analysis: First, our best LSTM model and second, the best LSTM model
561
+ in the smallest model category (50K parameters). The small and large models allow us to understand how the error
562
+ breakdown changes as we scale up the model. The error breakdown after applying each oracle for both models
563
+ can be found in Figure \ref{fig:errors}.
564
+
565
+ \textbf{The error breakdown.} In total, our best LSTM model made 140K
566
+ errors out of 330K test set characters (42\%). Of these, the $n$-gram oracle eliminates 18\%, suggesting that the model is not
567
+ taking full advantage of the last 9 characters. The dynamic memory oracle eliminates 6\% of the errors. In principle,
568
+ a dynamic evaluation scheme could be used to mitigate this error, but we believe that a more principled solution
569
+ could involve an approach similar to Memory Networks \cite{memorynets}, where the model is allowed to attend to a recent
570
+ history of the sequence while making its next prediction. Finally, the rare words oracle accounts for 9\% of the errors.
571
+ This error type might be mitigated with unsupervised pretraining \cite{dai2015semi}, or by increasing the size of the training set.
572
+ The majority of the remaining errors (37\%) follow a space, a quote, or a newline, indicating the model's difficulty
573
+ with word-level predictions. This suggests that longer time horizons in backpropagation through time, or possibly hierarchical
574
+ context models, could provide improvements. See Figure \ref{fig:errors} (right) for examples of each error type. We
575
+ believe that this type of error breakdown is a valuable tool for isolating and understanding the source of improvements
576
+ provided by new proposed models.
577
+
578
+ \textbf{Errors eliminated by scaling up}. In contrast, the smaller LSTM
579
+ model makes a total of 184K errors (56\% of the test set), approximately 44K more than the large model.
580
+ Surprisingly, 36K of these errors (81\%) are $n$-gram errors, 5K come from the boost category,
581
+ and the remaining 3K are distributed across the other categories relatively evenly.
582
+ That is, scaling the model up by a factor of 26 in the number of parameters has almost entirely provided gains
583
+ in the local, $n$-gram error rate and has left the other error categories untouched in comparison.
584
+ This analysis provides some evidence that it might be necessary to develop new architectural improvements
585
+ instead of simply scaling up the basic model.
586
+
587
+ \vspace{-0.1in}
588
+ \section{Conclusion}
589
+ \vspace{-0.1in}
590
+
591
+ We have used character-level language models as an interpretable test bed for analyzing the predictions, representations,
592
+ training dynamics, and error types present in Recurrent Neural Networks. In particular, our qualitative visualization
593
+ experiments, cell activation statistics and comparisons to finite horizon $n$-gram models demonstrate that these
594
+ networks learn powerful and often interpretable long-range interactions on real-world data. Our error analysis broke
595
+ down cross entropy loss into several interpretable categories, and allowed us to illuminate the sources of remaining
596
+ limitations and to suggest further areas for study. In particular, we found that scaling up the model almost entirely
597
+ eliminates errors in the $n$-gram category, which provides some evidence that further architectural innovations may
598
+ be needed to address the remaining errors.
599
+
600
+ \subsubsection*{Acknowledgments}
601
+
602
+ We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
603
+
604
+ \bibliography{iclr2016_conference}
605
+ \bibliographystyle{iclr2016_conference}
606
+
607
+ \end{document}
papers/1506/1506.02640.tex ADDED
@@ -0,0 +1,514 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{cvpr}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{subcaption}
10
+
11
+ \usepackage{url}
12
+ \usepackage{relsize}
13
+ \usepackage{xcolor,colortbl}
14
+ \usepackage{multirow}
15
+ \usepackage{wrapfig}
16
+ \usepackage{bbm}
17
+ \usepackage[labelfont=bf]{caption}
18
+ \usepackage{tabularx}
19
+
20
+
21
+
22
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
23
+
24
+ \cvprfinalcopy
25
+
26
+ \def\cvprPaperID{2411} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
27
+
28
+ \ifcvprfinal\pagestyle{empty}\fi
29
+ \begin{document}
30
+
31
+ \title{\vspace{-1cm}You Only Look Once: \\
32
+ Unified, Real-Time Object Detection\vspace{-.25cm}}
33
+
34
+
35
+
36
+
37
+ \author{Joseph Redmon$^*$, Santosh Divvala$^{* \dag}$, Ross Girshick$^\P$, Ali Farhadi$^{* \dag}$\\
38
+ \small{University of Washington$^*$, Allen Institute for AI$^\dag$, Facebook AI Research$^\P$}\\ \url{http://pjreddie.com/yolo/}}
39
+
40
+ \maketitle
41
+
42
+
43
+ \begin{abstract}
44
+ \vspace{-.25cm}
45
+ We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
46
+
47
+ Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
48
+ \end{abstract}
49
+
50
+ \section{Introduction}
51
+
52
+ Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms for object detection would allow computers to drive cars without specialized sensors, enable assistive devices to convey real-time scene information to human users, and unlock the potential for general purpose, responsive robotic systems.
53
+
54
+
55
+ Current detection systems repurpose classifiers to perform detection. To detect an object, these systems take a classifier for that object and evaluate it at various locations and scales in a test image. Systems like deformable parts models (DPM) use a sliding window approach where the classifier is run at evenly spaced locations over the entire image \cite{lsvm-pami}.
56
+
57
+ More recent approaches like R-CNN use region proposal methods to first generate potential bounding boxes in an image and then run a classifier on these proposed boxes. After classification, post-processing is used to refine the bounding boxes, eliminate duplicate detections, and rescore the boxes based on other objects in the scene \cite{girshick2014rich}. These complex pipelines are slow and hard to optimize because each individual component must be trained separately.
58
+
59
+ \begin{figure}[t]
60
+ \begin{center}
61
+ \includegraphics[width=\linewidth]{system}
62
+ \end{center}
63
+ \caption{\small \textbf{The YOLO Detection System.} Processing images with YOLO is simple and straightforward. Our system (1) resizes the input image to $448 \times 448$, (2) runs a single convolutional network on the image, and (3) thresholds the resulting detections by the model's confidence.}
64
+ \label{system}
65
+ \end{figure}
66
+
67
+
68
+
69
+
70
+
71
+ We reframe object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. Using our system, you only look once (YOLO) at an image to predict what objects are present and where they are.
72
+
73
+ YOLO is refreshingly simple: see Figure \ref{system}. A single convolutional network simultaneously predicts multiple bounding boxes and class probabilities for those boxes. YOLO trains on full images and directly optimizes detection performance. This unified model has several benefits over traditional methods of object detection.
74
+
75
+ First, YOLO is extremely fast. Since we frame detection as a regression problem we don't need a complex pipeline. We simply run our neural network on a new image at test time to predict detections. Our base network runs at 45 frames per second with no batch processing on a Titan X GPU and a fast version runs at more than 150 fps. This means we can process streaming video in real-time with less than 25 milliseconds of latency. Furthermore, YOLO achieves more than twice the mean average precision of other real-time systems. For a demo of our system running in real-time on a webcam please see our project webpage: \url{http://pjreddie.com/yolo/}.
76
+
77
+ Second, YOLO reasons globally about the image when making predictions. Unlike sliding window and region proposal-based techniques, YOLO sees the entire image during training and test time so it implicitly encodes contextual information about classes as well as their appearance. Fast R-CNN, a top detection method \cite{DBLP:journals/corr/Girshick15}, mistakes background patches in an image for objects because it can't see the larger context. YOLO makes less than half the number of background errors compared to Fast R-CNN.
78
+
79
+ Third, YOLO learns generalizable representations of objects. When trained on natural images and tested on artwork, YOLO outperforms top detection methods like DPM and R-CNN by a wide margin. Since YOLO is highly generalizable it is less likely to break down when applied to new domains or unexpected inputs.
80
+
81
+ YOLO still lags behind state-of-the-art detection systems in accuracy. While it can quickly identify objects in images it struggles to precisely localize some objects, especially small ones. We examine these tradeoffs further in our experiments.
82
+
83
+ All of our training and testing code is open source.
84
+ A variety of pretrained models are also available to download.
85
+
86
+ \section{Unified Detection}
87
+
88
+ We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision.
89
+
90
+ Our system divides the input image into an $S \times S$ grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object.
91
+
92
+ Each grid cell predicts $B$ bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and also how accurate it thinks the predicted box is. Formally we define confidence as $\Pr(\textrm{Object}) * \textrm{IOU}_{\textrm{pred}}^{\textrm{truth}}$. If no object exists in that cell, the confidence scores should be zero. Otherwise we want the confidence score to equal the intersection over union (IOU) between the predicted box and the ground truth.
93
+
94
+ Each bounding box consists of 5 predictions: $x$, $y$, $w$, $h$, and confidence. The $(x,y)$ coordinates represent the center of the box relative to the bounds of the grid cell. The width and height are predicted relative to the whole image. Finally the confidence prediction represents the IOU between the predicted box and any ground truth box.
95
+
96
+ Each grid cell also predicts $C$ conditional class probabilities, $\Pr(\textrm{Class}_i | \textrm{Object})$. These probabilities are conditioned on the grid cell containing an object. We only predict one set of class probabilities per grid cell, regardless of the number of boxes $B$.
97
+
98
+ At test time we multiply the conditional class probabilities and the individual box confidence predictions,
99
+ \begin{equation}
100
+ \scriptsize
101
+ \Pr(\textrm{Class}_i | \textrm{Object}) * \Pr(\textrm{Object}) * \textrm{IOU}_{\textrm{pred}}^{\textrm{truth}} = \Pr(\textrm{Class}_i)*\textrm{IOU}_{\textrm{pred}}^{\textrm{truth}}
102
+ \end{equation}
103
+ which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.
104
+
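+ For example, the test-time multiplication above can be applied to the raw output tensor as in the following sketch (illustrative NumPy code, not our released implementation; the memory layout of the last axis is an assumption):
+
+ \begin{verbatim}
+ import numpy as np
+
+ def class_specific_scores(pred, S=7, B=2, C=20):
+     # pred: array of shape (S, S, B*5 + C). Assumed layout: the first B*5
+     # numbers per cell are B boxes (x, y, w, h, confidence); the last C
+     # numbers are the conditional class probabilities Pr(Class_i | Object).
+     boxes = pred[..., :B * 5].reshape(S, S, B, 5)
+     confidences = boxes[..., 4]          # Pr(Object) * IOU
+     class_probs = pred[..., B * 5:]      # Pr(Class_i | Object)
+     # Result has shape (S, S, B, C): one score per box and class.
+     return confidences[..., None] * class_probs[..., None, :]
+
+ scores = class_specific_scores(np.random.rand(7, 7, 30))
+ print(scores.shape)  # (7, 7, 2, 20)
+ \end{verbatim}
+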
105
+
106
+ \begin{figure}[h]
107
+ \begin{center}
108
+ \includegraphics[width=\linewidth]{model}
109
+ \end{center}
110
+ \caption{\small \textbf{The Model.} Our system models detection as a regression problem. It divides the image into an $S \times S$ grid and for each grid cell predicts $B$ bounding boxes, confidence for those boxes, and $C$ class probabilities. These predictions are encoded as an $S \times S \times (B*5 + C)$ tensor.}
111
+ \label{model}
112
+ \end{figure}
113
+
114
+ For evaluating YOLO on \textsc{Pascal} VOC, we use $S=7$, $B=2$. \textsc{Pascal} VOC has 20 labelled classes so $C=20$. Our final prediction is a $7 \times 7 \times 30$ tensor.
115
+
116
+
117
+ \subsection{Network Design}
118
+
119
+ \begin{figure*}[t]
120
+ \centering
121
+ \includegraphics[width=.8\linewidth]{net}
122
+ \caption{\small \textbf{The Architecture.} Our detection network has 24 convolutional layers followed by 2 fully connected layers. Alternating $1 \times 1$ convolutional layers reduce the feature space from preceding layers. We pretrain the convolutional layers on the ImageNet classification task at half the resolution ($224 \times 224$ input image) and then double the resolution for detection.}
123
+ \label{net}
124
+ \end{figure*}
125
+
126
+ We implement this model as a convolutional neural network and evaluate it on the \textsc{Pascal} VOC detection dataset \cite{Everingham15}. The initial convolutional layers of the network extract features from the image while the fully connected layers predict the output probabilities and coordinates.
127
+
128
+ Our network architecture is inspired by the GoogLeNet model for image classification \cite{DBLP:journals/corr/SzegedyLJSRAEVR14}. Our network has 24 convolutional layers followed by 2 fully connected layers. Instead of the inception modules used by GoogLeNet, we simply use $1 \times 1$ reduction layers followed by $3 \times 3$ convolutional layers, similar to Lin et al \cite{DBLP:journals/corr/LinCY13}. The full network is shown in Figure \ref{net}.
129
+
130
+ We also train a fast version of YOLO designed to push the boundaries of fast object detection. Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers. Other than the size of the network, all training and testing parameters are the same between YOLO and Fast YOLO.
131
+
132
+ The final output of our network is the $7 \times 7 \times 30$ tensor of predictions.
133
+
134
+ \subsection{Training}
135
+
136
+
137
+ We pretrain our convolutional layers on the ImageNet 1000-class competition dataset \cite{ILSVRC15}. For pretraining we use the first 20 convolutional layers from Figure \ref{net} followed by an average-pooling layer and a fully connected layer. We train this network for approximately a week and achieve a single crop top-5 accuracy of 88\% on the ImageNet 2012 validation set, comparable to the GoogLeNet models in Caffe's Model Zoo \cite{zoo}. We use the Darknet framework for all training and inference \cite{darknet13}.
138
+
139
+ We then convert the model to perform detection. Ren et al. show that adding both convolutional and connected layers to pretrained networks can improve performance \cite{DBLP:journals/corr/RenHGZ015}. Following their example, we add four convolutional layers and two fully connected layers with randomly initialized weights. Detection often requires fine-grained visual information so we increase the input resolution of the network from $224 \times 224$ to $448 \times 448$.
140
+
141
+ Our final layer predicts both class probabilities and bounding box coordinates. We normalize the bounding box width and height by the image width and height so that they fall between 0 and 1. We parametrize the bounding box $x$ and $y$ coordinates to be offsets of a particular grid cell location so they are also bounded between 0 and 1.
142
+
143
+ We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation:
144
+
145
+ \begin{equation}
146
+ \phi(x) =
147
+ \begin{cases}
148
+ x, & \text{if } x > 0\\
149
+ 0.1x, & \text{otherwise}
150
+ \end{cases}
151
+ \end{equation}
152
+
153
+ We optimize for sum-squared error in the output of our model. We use sum-squared error because it is easy to optimize; however, it does not perfectly align with our goal of maximizing average precision. It weights localization error equally with classification error, which may not be ideal. Also, in every image many grid cells do not contain any object. This pushes the ``confidence'' scores of those cells towards zero, often overpowering the gradient from cells that do contain objects. This can lead to model instability, causing training to diverge early on.
154
+
155
+ To remedy this, we increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don't contain objects. We use two parameters, $\lambda_\textrm{coord}$ and $\lambda_\textrm{noobj}$ to accomplish this. We set $\lambda_\textrm{coord} = 5$ and $\lambda_\textrm{noobj} = .5$.
156
+
157
+ Sum-squared error also equally weights errors in large boxes and small boxes. Our error metric should reflect that small deviations in large boxes matter less than in small boxes. To partially address this we predict the square root of the bounding box width and height instead of the width and height directly.
158
+
159
+
160
+ YOLO predicts multiple bounding boxes per grid cell. At training time we only want one bounding box predictor to be responsible for each object. We assign one predictor to be ``responsible'' for predicting an object based on which prediction has the highest current IOU with the ground truth. This leads to specialization between the bounding box predictors. Each predictor gets better at predicting certain sizes, aspect ratios, or classes of object, improving overall recall.
161
+
162
+ During training we optimize the following, multi-part loss function:
163
+ \scriptsize
164
+ \begin{multline}
165
+ \lambda_\textbf{coord}
166
+ \sum_{i = 0}^{S^2}
167
+ \sum_{j = 0}^{B}
168
+ \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}}
169
+ \left[
170
+ \left(
171
+ x_i - \hat{x}_i
172
+ \right)^2 +
173
+ \left(
174
+ y_i - \hat{y}_i
175
+ \right)^2
176
+ \right]
177
+ \\
178
+ + \lambda_\textbf{coord}
179
+ \sum_{i = 0}^{S^2}
180
+ \sum_{j = 0}^{B}
181
+ \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}}
182
+ \left[
183
+ \left(
184
+ \sqrt{w_i} - \sqrt{\hat{w}_i}
185
+ \right)^2 +
186
+ \left(
187
+ \sqrt{h_i} - \sqrt{\hat{h}_i}
188
+ \right)^2
189
+ \right]
190
+ \\
191
+ + \sum_{i = 0}^{S^2}
192
+ \sum_{j = 0}^{B}
193
+ \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}}
194
+ \left(
195
+ C_i - \hat{C}_i
196
+ \right)^2
197
+ \\
198
+ + \lambda_\textrm{noobj}
199
+ \sum_{i = 0}^{S^2}
200
+ \sum_{j = 0}^{B}
201
+ \mathlarger{\mathbbm{1}}_{ij}^{\text{noobj}}
202
+ \left(
203
+ C_i - \hat{C}_i
204
+ \right)^2
205
+ \\
206
+ + \sum_{i = 0}^{S^2}
207
+ \mathlarger{\mathbbm{1}}_i^{\text{obj}}
208
+ \sum_{c \in \textrm{classes}}
209
+ \left(
210
+ p_i(c) - \hat{p}_i(c)
211
+ \right)^2
212
+ \end{multline}
213
+ \normalsize
214
+ where $\mathbbm{1}_i^{\text{obj}}$ denotes whether an object appears in cell $i$ and $\mathbbm{1}_{ij}^{\text{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is ``responsible'' for that prediction.
215
+
216
+ Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is ``responsible'' for the ground truth box (i.e. has the highest IOU of any predictor in that grid cell).
217
+
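+ To make the correspondence with the equation explicit, a compact NumPy sketch of this loss is given below (illustrative only; the responsibility masks are assumed to have been computed from the IOUs beforehand):
+
+ \begin{verbatim}
+ import numpy as np
+
+ def yolo_loss(pred, target, obj_ij, noobj_ij, obj_i,
+               lambda_coord=5.0, lambda_noobj=0.5):
+     # pred, target: dicts of arrays
+     #   'xy'   (S, S, B, 2)  box centers (offsets within each cell)
+     #   'wh'   (S, S, B, 2)  box width/height (fractions of the image)
+     #   'conf' (S, S, B)     box confidences
+     #   'prob' (S, S, C)     conditional class probabilities
+     # obj_ij   (S, S, B): 1 if box j in cell i is responsible for an object
+     # noobj_ij (S, S, B): 1 for boxes covered by the no-object term
+     # obj_i    (S, S):    1 if an object appears in cell i
+     coord = lambda_coord * np.sum(
+         obj_ij[..., None] * (pred['xy'] - target['xy']) ** 2)
+     size = lambda_coord * np.sum(
+         obj_ij[..., None] * (np.sqrt(pred['wh'])
+                              - np.sqrt(target['wh'])) ** 2)
+     conf_obj = np.sum(obj_ij * (pred['conf'] - target['conf']) ** 2)
+     conf_noobj = lambda_noobj * np.sum(
+         noobj_ij * (pred['conf'] - target['conf']) ** 2)
+     cls = np.sum(obj_i[..., None] * (pred['prob'] - target['prob']) ** 2)
+     return coord + size + conf_obj + conf_noobj + cls
+ \end{verbatim}
+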
218
+ We train the network for about 135 epochs on the training and validation data sets from \textsc{Pascal} VOC 2007 and 2012. When testing on 2012 we also include the VOC 2007 test data for training. Throughout training we use a batch size of 64, a momentum of $0.9$ and a decay of $0.0005$.
219
+
220
+ Our learning rate schedule is as follows: For the first epochs we slowly raise the learning rate from $10^{-3}$ to $10^{-2}$. If we start at a high learning rate our model often diverges due to unstable gradients. We continue training with $10^{-2}$ for 75 epochs, then $10^{-3}$ for 30 epochs, and finally $10^{-4}$ for 30 epochs.
221
+
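+ This schedule can be written as a simple piecewise function of the epoch (the five-epoch warm-up below is an assumption, since the text only says ``the first epochs''):
+
+ \begin{verbatim}
+ def learning_rate(epoch, warmup_epochs=5):
+     # Piecewise schedule described in the text; warmup_epochs is assumed.
+     if epoch < warmup_epochs:               # slowly raise 1e-3 -> 1e-2
+         return 1e-3 + (1e-2 - 1e-3) * epoch / warmup_epochs
+     if epoch < warmup_epochs + 75:          # 75 epochs at 1e-2
+         return 1e-2
+     if epoch < warmup_epochs + 105:         # then 30 epochs at 1e-3
+         return 1e-3
+     return 1e-4                             # final 30 epochs at 1e-4
+ \end{verbatim}
+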
222
+ To avoid overfitting we use dropout and extensive data augmentation. A dropout layer with rate~=~.5 after the first connected layer prevents co-adaptation between layers \cite{hinton2012improving}. For data augmentation we introduce random scaling and translations of up to 20\% of the original image size. We also randomly adjust the exposure and saturation of the image by up to a factor of $1.5$ in the HSV color space.
223
+
224
+ \subsection{Inference}
225
+
226
+ Just like in training, predicting detections for a test image only requires one network evaluation. On \textsc{Pascal} VOC the network predicts 98 bounding boxes per image and class probabilities for each box. YOLO is extremely fast at test time since it only requires a single network evaluation, unlike classifier-based methods.
227
+
228
+ The grid design enforces spatial diversity in the bounding box predictions. Often it is clear which grid cell an object falls into and the network only predicts one box for each object. However, some large objects or objects near the border of multiple cells can be well localized by multiple cells. Non-maximal suppression can be used to fix these multiple detections. While not critical to performance as it is for R-CNN or DPM, non-maximal suppression adds 2-3\% in mAP.
229
+
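+ Non-maximal suppression itself is a short greedy procedure; a minimal sketch over one class (illustrative, not our implementation) is:
+
+ \begin{verbatim}
+ def iou(a, b):
+     # Intersection over union of two boxes given as (x1, y1, x2, y2).
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+     area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
+     union = area(a) + area(b) - inter
+     return inter / union if union > 0 else 0.0
+
+ def non_max_suppression(boxes, scores, iou_threshold=0.5):
+     # Greedily keep the highest-scoring boxes, dropping heavy overlaps.
+     order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
+     keep = []
+     for i in order:
+         if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
+             keep.append(i)
+     return keep
+
+ boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
+ print(non_max_suppression(boxes, scores=[0.9, 0.8, 0.7]))  # [0, 2]
+ \end{verbatim}
+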
230
+ \subsection{Limitations of YOLO}
231
+
232
+ YOLO imposes strong spatial constraints on bounding box predictions since each grid cell only predicts two boxes and can only have one class. This spatial constraint limits the number of nearby objects that our model can predict. Our model struggles with small objects that appear in groups, such as flocks of birds.
233
+
234
+ Since our model learns to predict bounding boxes from data, it struggles to generalize to objects in new or unusual aspect ratios or configurations. Our model also uses relatively coarse features for predicting bounding boxes since our architecture has multiple downsampling layers from the input image.
235
+
236
+ Finally, while we train on a loss function that approximates detection performance, our loss function treats errors the same in small bounding boxes versus large bounding boxes. A small error in a large box is generally benign but a small error in a small box has a much greater effect on IOU. Our main source of error is incorrect localizations.
237
+
238
+ \section{Comparison to Other Detection Systems}
239
+
240
+ Object detection is a core problem in computer vision. Detection pipelines generally start by extracting a set of robust features from input images (Haar \cite{papageorgiou1998general}, SIFT \cite{lowe1999object}, HOG \cite{dalal2005histograms}, convolutional features \cite{donahue2013decaf}). Then, classifiers \cite{viola2001robust,lienhart2002extended,girshick2014rich,lsvm-pami} or localizers \cite{blaschko2008learning,DBLP:journals/corr/SermanetEZMFL13} are used to identify objects in the feature space. These classifiers or localizers are run either in sliding window fashion over the whole image or on some subset of regions in the image \cite{uijlings2013selective,gould2009region,zitnick2014edge}. We compare the YOLO detection system to several top detection frameworks, highlighting key similarities and differences.
241
+
242
+ \textbf{Deformable parts models.} Deformable parts models (DPM) use a sliding window approach to object detection \cite{lsvm-pami}. DPM uses a disjoint pipeline to extract static features, classify regions, predict bounding boxes for high scoring regions, etc. Our system replaces all of these disparate parts with a single convolutional neural network. The network performs feature extraction, bounding box prediction, non-maximal suppression, and contextual reasoning all concurrently. Instead of static features, the network trains the features in-line and optimizes them for the detection task. Our unified architecture leads to a faster, more accurate model than DPM.
243
+
244
+ \textbf{R-CNN.} R-CNN and its variants use region proposals instead of sliding windows to find objects in images. Selective Search \cite{uijlings2013selective} generates potential bounding boxes, a convolutional network extracts features, an SVM scores the boxes, a linear model adjusts the bounding boxes, and non-max suppression eliminates duplicate detections. Each stage of this complex pipeline must be precisely tuned independently and the resulting system is very slow, taking more than 40 seconds per image at test time \cite{DBLP:journals/corr/Girshick15}.
245
+
246
+ YOLO shares some similarities with R-CNN. Each grid cell proposes potential bounding boxes and scores those boxes using convolutional features. However, our system puts spatial constraints on the grid cell proposals which helps mitigate multiple detections of the same object. Our system also proposes far fewer bounding boxes, only 98 per image compared to about 2000 from Selective Search. Finally, our system combines these individual components into a single, jointly optimized model.
247
+
248
+ \textbf{Other Fast Detectors} Fast and Faster R-CNN focus on speeding up the R-CNN framework by sharing computation and using neural networks to propose regions instead of Selective Search \cite{DBLP:journals/corr/Girshick15} \cite{ren2015faster}. While they offer speed and accuracy improvements over R-CNN, both still fall short of real-time performance.
249
+
250
+ Many research efforts focus on speeding up the DPM pipeline \cite{sadeghi201430hz} \cite{yan2014fastest} \cite{dean2013fast}. They speed up HOG computation, use cascades, and push computation to GPUs. However, only 30Hz DPM \cite{sadeghi201430hz} actually runs in real-time.
251
+
252
+ Instead of trying to optimize individual components of a large detection pipeline, YOLO throws out the pipeline entirely and is fast by design.
253
+
254
+ Detectors for single classes like faces or people can be highly optimized since they have to deal with much less variation \cite{viola2004robust}. YOLO is a general purpose detector that learns to detect a variety of objects simultaneously.
255
+
256
+ \textbf{Deep MultiBox.} Unlike R-CNN, Szegedy et al. train a convolutional neural network to predict regions of interest \cite{erhan2014scalable} instead of using Selective Search. MultiBox can also perform single object detection by replacing the confidence prediction with a single class prediction. However, MultiBox cannot perform general object detection and is still just a piece in a larger detection pipeline, requiring further image patch classification. Both YOLO and MultiBox use a convolutional network to predict bounding boxes in an image but YOLO is a complete detection system.
257
+
258
+ \textbf{OverFeat.} Sermanet et al. train a convolutional neural network to perform localization and adapt that localizer to perform detection \cite{DBLP:journals/corr/SermanetEZMFL13}. OverFeat efficiently performs sliding window detection but it is still a disjoint system. OverFeat optimizes for localization, not detection performance. Like DPM, the localizer only sees local information when making a prediction. OverFeat cannot reason about global context and thus requires significant post-processing to produce coherent detections.
259
+
260
+ \textbf{MultiGrasp.} Our work is similar in design to work on grasp detection by Redmon et al. \cite{DBLP:journals/corr/RedmonA14}. Our grid approach to bounding box prediction is based on the MultiGrasp system for regression to grasps. However, grasp detection is a much simpler task than object detection. MultiGrasp only needs to predict a single graspable region for an image containing one object. It doesn't have to estimate the size, location, or boundaries of the object or predict its class, only find a region suitable for grasping. YOLO predicts both bounding boxes and class probabilities for multiple objects of multiple classes in an image.
261
+
262
+ \section{Experiments}
263
+
264
+ First we compare YOLO with other real-time detection systems on \textsc{Pascal} VOC 2007. To understand the differences between YOLO and R-CNN variants we explore the errors on VOC 2007 made by YOLO and Fast R-CNN, one of the highest performing versions of R-CNN \cite{DBLP:journals/corr/Girshick15}. Based on the different error profiles we show that YOLO can be used to rescore Fast R-CNN detections and reduce the errors from background false positives, giving a significant performance boost. We also present VOC 2012 results and compare mAP to current state-of-the-art methods. Finally, we show that YOLO generalizes to new domains better than other detectors on two artwork datasets.
265
+
266
+ \subsection{Comparison to Other Real-Time Systems}
267
+
268
+ Many research efforts in object detection focus on making standard detection pipelines fast. \cite{dean2013fast} \cite{yan2014fastest} \cite{sadeghi201430hz} \cite{DBLP:journals/corr/Girshick15} \cite{he2014spatial} \cite{ren2015faster} However, only Sadeghi et al. actually produce a detection system that runs in real-time (30 frames per second or better) \cite{sadeghi201430hz}. We compare YOLO to their GPU implementation of DPM which runs either at 30Hz or 100Hz. While the other efforts don't reach the real-time milestone we also compare their relative mAP and speed to examine the accuracy-performance tradeoffs available in object detection systems.
269
+
270
+ \begin{table}[h]
271
+ \begin{center}
272
+ \begin{tabular}{lrrr}
273
+ Real-Time Detectors & Train & mAP & FPS\\
274
+ \hline
275
+ 100Hz DPM \cite{sadeghi201430hz}& 2007 & 16.0 & 100\\
276
+ 30Hz DPM \cite{sadeghi201430hz} & 2007 & 26.1 & 30 \\
277
+ Fast YOLO & 2007+2012 & 52.7 & \textbf{155} \\
278
+ YOLO & 2007+2012 & \textbf{63.4} & 45 \\
279
+ \hline
280
+ \hline
281
+ Less Than Real-Time & & \\
282
+ \hline
283
+ Fastest DPM \cite{yan2014fastest} & 2007 & 30.4 & 15 \\
284
+ R-CNN Minus R \cite{lenc2015r} & 2007 & 53.5 & 6 \\
285
+ Fast R-CNN \cite{DBLP:journals/corr/Girshick15}& 2007+2012 & 70.0 & 0.5 \\
286
+ Faster R-CNN VGG-16\cite{ren2015faster}& 2007+2012 & 73.2 & 7 \\
287
+ Faster R-CNN ZF \cite{ren2015faster}& 2007+2012 & 62.1 & 18 \\
288
+ YOLO VGG-16 & 2007+2012 & 66.4 & 21 \\
289
+ \end{tabular}
290
+ \end{center}
291
+ \caption{\small \textbf{Real-Time Systems on \textsc{Pascal} VOC 2007.} Comparing the performance and speed of fast detectors. Fast YOLO is the fastest detector on record for \textsc{Pascal} VOC detection and is still twice as accurate as any other real-time detector. YOLO is 10 mAP more accurate than the fast version while still well above real-time in speed.}
292
+ \label{timing}
293
+ \end{table}
294
+
295
+ Fast YOLO is the fastest object detection method on \textsc{Pascal}; as far as we know, it is the fastest extant object detector. With $52.7\%$ mAP, it is more than twice as accurate as prior work on real-time detection. YOLO pushes mAP to $63.4\%$ while still maintaining real-time performance.
296
+
297
+ We also train YOLO using VGG-16. This model is more accurate but also significantly slower than YOLO. It is useful for comparison to other detection systems that rely on VGG-16 but since it is slower than real-time the rest of the paper focuses on our faster models.
298
+
299
+ Fastest DPM effectively speeds up DPM without sacrificing much mAP but it still misses real-time performance by a factor of 2 \cite{yan2014fastest}. It also is limited by DPM's relatively low accuracy on detection compared to neural network approaches.
300
+
301
+ R-CNN minus R replaces Selective Search with static bounding box proposals \cite{lenc2015r}. While it is much faster than R-CNN, it still falls short of real-time and takes a significant accuracy hit from not having good proposals.
302
+
303
+ Fast R-CNN speeds up the classification stage of R-CNN but it still relies on selective search which can take around 2 seconds per image to generate bounding box proposals. Thus it has high mAP but at $0.5$ fps it is still far from real-time.
304
+
305
+ The recent Faster R-CNN replaces selective search with a neural network to propose bounding boxes, similar to Szegedy et al. \cite{erhan2014scalable} In our tests, their most accurate model achieves 7 fps while a smaller, less accurate one runs at 18 fps. The VGG-16 version of Faster R-CNN is 10 mAP higher but is also 6 times slower than YOLO. The Zeiler-Fergus Faster R-CNN is only 2.5 times slower than YOLO but is also less accurate.
306
+
307
+ \subsection{VOC 2007 Error Analysis}
308
+ \label{error}
309
+
310
+ To further examine the differences between YOLO and state-of-the-art detectors, we look at a detailed breakdown of results on VOC 2007. We compare YOLO to Fast R-CNN since Fast R-CNN is one of the highest performing detectors on \textsc{Pascal} and its detections are publicly available.
311
+
312
+ We use the methodology and tools of Hoiem et al.~\cite{hoiem2012diagnosing}. For each category at test time we look at the top N predictions for that category. Each prediction is either correct or it is classified based on the type of error (a small sketch of this categorization follows the list):
313
+
314
+ \begin{itemize}
315
+ \itemsep0em
316
+ \item Correct: correct class and $\textrm{IOU} > .5$
317
+ \item Localization: correct class, $.1 < \textrm{IOU} < .5$
318
+ \item Similar: class is similar, $\textrm{IOU} > .1$
319
+ \item Other: class is wrong, $\textrm{IOU} > .1$
320
+ \item Background: $\textrm{IOU} < .1$ for any object
321
+ \end{itemize}
322
+
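+ A minimal sketch of the per-detection categorization (the set of ``similar'' classes is dataset-specific and is assumed to be given):
+
+ \begin{verbatim}
+ def categorize(pred_class, true_class, iou, similar_classes):
+     # Hoiem-style error categories for a single detection.
+     if pred_class == true_class and iou > 0.5:
+         return 'Correct'
+     if pred_class == true_class and 0.1 < iou < 0.5:
+         return 'Localization'
+     if pred_class in similar_classes and iou > 0.1:
+         return 'Similar'
+     if iou > 0.1:
+         return 'Other'
+     return 'Background'
+
+ print(categorize('dog', 'cat', 0.3, similar_classes={'dog'}))  # Similar
+ \end{verbatim}
+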
323
+ Figure \ref{errors} shows the breakdown of each error type averaged across all 20 classes.
324
+
325
+
326
+
327
+ \begin{figure}[t]
328
+ \centering
329
+ \includegraphics[width=\linewidth]{pie_compare}
330
+ \caption{\small \textbf{Error Analysis: Fast R-CNN vs. YOLO} These charts show the percentage of localization and background errors in the top N detections for various categories (N = \# objects in that category).}
331
+ \label{errors}
332
+ \end{figure}
333
+
334
+
335
+
336
+
337
+ YOLO struggles to localize objects correctly. Localization errors account for more of YOLO's errors than all other sources combined. Fast R-CNN makes far fewer localization errors but far more background errors. 13.6\% of its top detections are false positives that don't contain any objects. Fast R-CNN is almost 3x more likely to predict background detections than YOLO.
338
+
339
+
340
+
341
+
342
+
343
+ \subsection{Combining Fast R-CNN and YOLO}
344
+
345
+ \begin{table}[b]
346
+ \begin{center}
347
+ \begin{tabular}{lrrr}
348
+ & mAP & Combined & Gain \\
349
+ \hline
350
+ Fast R-CNN & 71.8 & - & - \\
351
+ \hline
352
+ Fast R-CNN (2007 data) & \textbf{66.9} & 72.4 & .6 \\
353
+ Fast R-CNN (VGG-M) & 59.2 & 72.4 & .6 \\
354
+ Fast R-CNN (CaffeNet) & 57.1 & 72.1 & .3\\
355
+ YOLO & 63.4 & \textbf{75.0} & \textbf{3.2}\\
356
+ \end{tabular}
357
+ \end{center}
358
+ \caption{\small \textbf{Model combination experiments on VOC 2007.} We examine the effect of combining various models with the best version of Fast R-CNN. Other versions of Fast R-CNN provide only a small benefit while YOLO provides a significant performance boost.}
359
+ \label{combine}
360
+ \end{table}
361
+
362
+ \begin{table*}[t]
363
+ \scriptsize
364
+ \definecolor{Gray}{gray}{0.85}
365
+ \newcolumntype{Y}{>{\centering\arraybackslash}X}
366
+ \begin{center}
367
+ \tabcolsep=0.11cm
368
+ \begin{tabularx}{\linewidth}{@{}l|Y|Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y}
369
+ \textbf{VOC 2012 test} & mAP & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv \\
370
+ \hline
371
+ MR\_CNN\_MORE\_DATA \cite{DBLP:journals/corr/GidarisK15}& \textbf{73.9}& \textbf{85.5}& \textbf{82.9}& \textbf{76.6}& \textbf{57.8}& \textbf{62.7}& \textbf{79.4}& 77.2& 86.6& \textbf{55.0}& \textbf{79.1}& \textbf{62.2}& 87.0& \textbf{83.4}& \textbf{84.7}& 78.9& 45.3& 73.4& 65.8& 80.3& 74.0\\
372
+ HyperNet\_VGG & 71.4& 84.2& 78.5& 73.6& 55.6& 53.7& 78.7& \textbf{79.8}& 87.7& 49.6& 74.9& 52.1& 86.0& 81.7& 83.3& \textbf{81.8}& \textbf{48.6}& \textbf{73.5}& 59.4& 79.9& 65.7\\
373
+ HyperNet\_SP & 71.3& 84.1& 78.3& 73.3& 55.5& 53.6& 78.6& 79.6& 87.5& 49.5& 74.9& 52.1& 85.6& 81.6& 83.2& 81.6& 48.4& 73.2& 59.3& 79.7& 65.6\\
374
+ \rowcolor{Gray}
375
+ \textbf{Fast R-CNN + YOLO} & 70.7 & 83.4 & 78.5 & 73.5 & 55.8 & 43.4 & 79.1 & 73.1 & \textbf{89.4} & 49.4 & 75.5 & 57.0 & \textbf{87.5} & 80.9 & 81.0 & 74.7 & 41.8 & 71.5 & 68.5 & \textbf{82.1} & 67.2 \\
376
+ MR\_CNN\_S\_CNN \cite{DBLP:journals/corr/GidarisK15}& {70.7}& {85.0}& {79.6}& 71.5& 55.3& {57.7}& 76.0& {73.9}& 84.6& {50.5}& {74.3}& {61.7}& 85.5& 79.9& {81.7}& {76.4}& 41.0& 69.0& 61.2& 77.7& {72.1} \\
377
+ Faster R-CNN \cite{ren2015faster}& 70.4& 84.9& 79.8& 74.3& 53.9& 49.8& 77.5& 75.9& 88.5& 45.6& 77.1& 55.3& 86.9& 81.7& 80.9& 79.6& 40.1& 72.6& 60.9& 81.2& 61.5\\
378
+ DEEP\_ENS\_COCO & 70.1& 84.0& 79.4& 71.6& 51.9& 51.1& 74.1& 72.1& 88.6& 48.3& 73.4& 57.8& 86.1& 80.0& 80.7& 70.4& {46.6}& 69.6& \textbf{68.8}& 75.9& 71.4 \\
379
+ NoC \cite{DBLP:journals/corr/RenHGZ015} &68.8& 82.8& 79.0& 71.6& 52.3& 53.7& 74.1& 69.0& 84.9& 46.9& {74.3}& 53.1& 85.0& {81.3}& 79.5& 72.2& 38.9& {72.4}& 59.5& 76.7& 68.1\\
380
+ Fast R-CNN \cite{DBLP:journals/corr/Girshick15}& 68.4 & 82.3 & 78.4 & 70.8 & 52.3 & 38.7 & 77.8 & 71.6 & {89.3} & 44.2 & 73.0 & 55.0 & \textbf{87.5} & 80.5 & 80.8 & 72.0 & 35.1 & 68.3 & 65.7 & 80.4 & 64.2 \\
381
+ UMICH\_FGS\_STRUCT& 66.4& 82.9& 76.1& 64.1& 44.6& 49.4& 70.3& 71.2& 84.6& 42.7& 68.6& 55.8& 82.7& 77.1& 79.9& 68.7& 41.4& 69.0& 60.0& 72.0& 66.2\\
382
+ NUS\_NIN\_C2000 \cite{dong2014towards}& 63.8 & 80.2 & 73.8 & 61.9 & 43.7 & 43.0 & 70.3 & 67.6 & 80.7 & 41.9 & 69.7 & 51.7 & 78.2 & 75.2 & 76.9 & 65.1 & 38.6 & 68.3 & 58.0 & 68.7 & 63.3 \\
383
+ BabyLearning \cite{dong2014towards}& 63.2 & 78.0 & 74.2 & 61.3 & 45.7 & 42.7 & 68.2 & 66.8 & 80.2 & 40.6 & 70.0 & 49.8 & 79.0 & 74.5 & 77.9 & 64.0 & 35.3 & 67.9 & 55.7 & 68.7 & 62.6 \\
384
+ NUS\_NIN & 62.4 & 77.9 & 73.1 & 62.6 & 39.5 & 43.3 & 69.1 & 66.4 & 78.9 & 39.1 & 68.1 & 50.0 & 77.2 & 71.3 & 76.1 & 64.7 & 38.4 & 66.9 & 56.2 & 66.9 & 62.7 \\
385
+ R-CNN VGG BB \cite{girshick2014rich}& 62.4 & 79.6 & 72.7 & 61.9 & 41.2 & 41.9 & 65.9 & 66.4 & 84.6 & 38.5 & 67.2 & 46.7 & 82.0 & 74.8 & 76.0 & 65.2 & 35.6 & 65.4 & 54.2 & 67.4 & 60.3 \\
386
+ R-CNN VGG \cite{girshick2014rich}& 59.2 & 76.8 & 70.9 & 56.6 & 37.5 & 36.9 & 62.9 & 63.6 & 81.1 & 35.7 & 64.3 & 43.9 & 80.4 & 71.6 & 74.0 & 60.0 & 30.8 & 63.4 & 52.0 & 63.5 & 58.7 \\
387
+ \rowcolor{Gray}
388
+ \textbf{YOLO} &57.9& 77.0& 67.2& 57.7& 38.3& 22.7& 68.3& 55.9& 81.4& 36.2& 60.8& 48.5& 77.2& 72.3& 71.3& 63.5& 28.9& 52.2& 54.8& 73.9& 50.8\\
389
+ Feature Edit \cite{shen2014more}& 56.3 & 74.6 & 69.1 & 54.4 & 39.1 & 33.1 & 65.2 & 62.7 & 69.7 & 30.8 & 56.0 & 44.6 & 70.0 & 64.4 & 71.1 & 60.2 & 33.3 & 61.3 & 46.4 & 61.7 & 57.8 \\
390
+ R-CNN BB \cite{girshick2014rich}& 53.3 & 71.8 & 65.8 & 52.0 & 34.1 & 32.6 & 59.6 & 60.0 & 69.8 & 27.6 & 52.0 & 41.7 & 69.6 & 61.3 & 68.3 & 57.8 & 29.6 & 57.8 & 40.9 & 59.3 & 54.1 \\
391
+ SDS \cite{hariharan2014simultaneous}& 50.7 & 69.7 & 58.4 & 48.5 & 28.3 & 28.8 & 61.3 & 57.5 & 70.8 & 24.1 & 50.7 & 35.9 & 64.9 & 59.1 & 65.8 & 57.1 & 26.0 & 58.8 & 38.6 & 58.9 & 50.7 \\
392
+ R-CNN \cite{girshick2014rich}& 49.6 & 68.1 & 63.8 & 46.1 & 29.4 & 27.9 & 56.6 & 57.0 & 65.9 & 26.5 & 48.7 & 39.5 & 66.2 & 57.3 & 65.4 & 53.2 & 26.2 & 54.5 & 38.1 & 50.6 & 51.6 \\
393
+ \end{tabularx}
394
+ \end{center}
395
+ \caption{\small \textbf{\textsc{Pascal} VOC 2012 Leaderboard.} YOLO compared with the full \texttt{comp4} (outside data allowed) public leaderboard as of November 6th, 2015. Mean average precision and per-class average precision are shown for a variety of detection methods. YOLO is the only real-time detector. Fast R-CNN + YOLO is the fourth highest scoring method, with a 2.3\% boost over Fast R-CNN.} \vspace{-.3cm}
396
+ \label{results}
397
+ \end{table*}
398
+
399
+ YOLO makes far fewer background mistakes than Fast R-CNN. By using YOLO to eliminate background detections from Fast R-CNN we get a significant boost in performance. For every bounding box that R-CNN predicts we check to see if YOLO predicts a similar box. If it does, we give that prediction a boost based on the probability predicted by YOLO and the overlap between the two boxes.
400
+
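+ A sketch of this rescoring step follows; the exact boost formula is not spelled out above, so the one used here is only a hypothetical stand-in:
+
+ \begin{verbatim}
+ def iou(a, b):
+     # Intersection over union of two boxes given as (x1, y1, x2, y2).
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+     area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
+     union = area(a) + area(b) - inter
+     return inter / union if union > 0 else 0.0
+
+ def rescore(frcnn_dets, yolo_dets, min_iou=0.5):
+     # Boost Fast R-CNN detections that YOLO also predicts.
+     # Each detection is a (box, score, class) tuple.
+     out = []
+     for box, score, cls in frcnn_dets:
+         overlaps = [(p, iou(box, b)) for b, p, c in yolo_dets
+                     if c == cls and iou(box, b) > min_iou]
+         if overlaps:
+             p, ov = max(overlaps, key=lambda m: m[1])
+             score = score + p * ov   # hypothetical boost formula
+         out.append((box, score, cls))
+     return out
+ \end{verbatim}
+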
401
+ The best Fast R-CNN model achieves a mAP of 71.8\% on the VOC 2007 test set. When combined with YOLO, its mAP increases by 3.2\% to 75.0\%. We also tried combining the top Fast R-CNN model with several other versions of Fast R-CNN. Those ensembles produced small increases in mAP between .3 and .6\%, see Table \ref{combine} for details.
402
+
403
+ The boost from YOLO is not simply a byproduct of model ensembling since there is little benefit from combining different versions of Fast R-CNN. Rather, it is precisely because YOLO makes different kinds of mistakes at test time that it is so effective at boosting Fast R-CNN's performance.
404
+
405
+ Unfortunately, this combination doesn't benefit from the speed of YOLO since we run each model separately and then combine the results. However, since YOLO is so fast, it doesn't add any significant computational time compared to Fast R-CNN.
406
+
407
+ \subsection{VOC 2012 Results}
408
+
409
+ On the VOC 2012 test set, YOLO scores 57.9\% mAP. This is lower than the current state of the art and closer to the original R-CNN using VGG-16; see Table \ref{results}. Our system struggles with small objects compared to its closest competitors. On categories like \texttt{bottle}, \texttt{sheep}, and \texttt{tv/monitor} YOLO scores 8-10\% lower than R-CNN or Feature Edit. However, on other categories like \texttt{cat} and \texttt{train} YOLO achieves higher performance.
410
+
411
+ Our combined Fast R-CNN + YOLO model is one of the highest performing detection methods. Fast R-CNN gets a 2.3\% improvement from the combination with YOLO, boosting it 5 spots up on the public leaderboard.
412
+
413
+ \ifx 1 0
414
+ \subsection{Speed}
415
+
416
+ At test time YOLO processes images at 45 frames per second on an Nvidia Titan X GPU. It is considerably faster than classifier-based methods with similar mAP. Normal R-CNN using AlexNet or the small VGG network takes 400-500x longer to process images. The recently proposed Fast R-CNN shares convolutional features between the bounding boxes but still relies on Selective Search for bounding box proposals, which accounts for the bulk of its processing time. YOLO is still around 100x faster than Fast R-CNN. Table \ref{timing} shows a full comparison between multiple R-CNN and Fast R-CNN variants and YOLO.
417
+
418
+ \begin{table}[h]
419
+ \begin{center}
420
+ \begin{tabular}{lrrrr}
421
+ & mAP & Test Time & FPS\\
422
+ \hline
423
+ R-CNN (VGG-16) & 66.0 & 48.2 hr & 0.02 fps \\
424
+ FR-CNN (VGG-16) & 66.9 & 3.1 hr & 0.45 fps \\
425
+ R-CNN (Small VGG) & 60.2 & 14.4 hr & 0.09 fps \\
426
+ FR-CNN (Small VGG) & 59.2 & 2.9 hr & 0.48 fps \\
427
+ R-CNN (Caffe) & 58.5 & 12.2 hr & 0.11 fps \\
428
+ FR-CNN (Caffe) & 57.1 & 2.8 hr & 0.48 fps \\
429
+ YOLO & 63.5 & 110 sec & 45 fps \\
430
+ \end{tabular}
431
+ \end{center}
432
+ \caption{\small \textbf{Prediction Timing.} mAP and timing information for R-CNN, Fast R-CNN, and YOLO on the VOC 2007 test set. Timing information is given both as frames per second and the time each method takes to process the full 4952 image set. The final column shows the relative speed of YOLO compared to that method.}
433
+ \label{timing}
434
+ \end{table}
435
+ \fi
436
+
437
+ \subsection{Generalizability: Person Detection in Artwork}
438
+
439
+ \begin{figure*}
440
+ \centering
441
+ \begin{subfigure}[b]{.45\textwidth}
442
+ \centering
443
+ \includegraphics[width=\textwidth]{cubist}
444
+ \caption{\small Picasso Dataset precision-recall curves.}
445
+ \end{subfigure}\begin{subfigure}[b]{.55\textwidth}
446
+ \centering
447
+ \begin{tabular}{l|r|rr|r}
448
+ & VOC 2007 & \multicolumn{2}{c|}{Picasso} & People-Art\\
449
+ & AP & AP & Best $F_1$ & AP\\
450
+ \hline
451
+ \textbf{YOLO} & \textbf{59.2} & \textbf{53.3} & \textbf{0.590} & \textbf{45}\\
452
+ R-CNN & 54.2 & 10.4 & 0.226 & 26\\
453
+ DPM & 43.2 & 37.8 & 0.458 & 32\\
454
+ Poselets \cite{BourdevMalikICCV09} & 36.5 & 17.8 & 0.271 \\
455
+ D\&T \cite{dalal2005histograms} & - & 1.9 & 0.051 \\
456
+ \end{tabular}
457
+ \caption{\small Quantitative results on the VOC 2007, Picasso, and People-Art Datasets. The Picasso Dataset evaluates on both AP and best $F_1$ score.}
458
+ \end{subfigure}
459
+ \caption{\small \textbf{Generalization results on Picasso and People-Art datasets.}}
460
+ \label{art}
461
+ \end{figure*}
462
+
463
+
464
+ \begin{figure*}[t]
465
+ \begin{center}
466
+ \includegraphics[width=\linewidth]{art.jpg}
467
+ \end{center}
468
+ \caption{\small \textbf{Qualitative Results.} YOLO running on sample artwork and natural images from the internet. It is mostly accurate although it does think one person is an airplane.}
469
+ \label{images}
470
+ \end{figure*}
471
+
472
+ Academic datasets for object detection draw the training and testing data from the same distribution. In real-world applications it is hard to predict all possible use cases and the test data can diverge from what the system has seen before \cite{cai2015cross}.
473
+ We compare YOLO to other detection systems on the Picasso Dataset \cite{ginosar2014detecting} and the People-Art Dataset \cite{cai2015cross}, two datasets for testing person detection on artwork.
474
+
475
+
476
+ Figure \ref{art} shows comparative performance between YOLO and other detection methods. For reference, we give VOC 2007 detection AP on \texttt{person} where all models are trained only on VOC 2007 data. On Picasso models are trained on VOC 2012 while on People-Art they are trained on VOC 2010.
477
+
478
+ R-CNN has high AP on VOC 2007. However, R-CNN drops off considerably when applied to artwork. R-CNN uses Selective Search for bounding box proposals, which is tuned for natural images. The classifier step in R-CNN only sees small regions and needs good proposals.
479
+
480
+ DPM maintains its AP well when applied to artwork. Prior work theorizes that DPM performs well because it has strong spatial models of the shape and layout of objects. Though DPM doesn't degrade as much as R-CNN, it starts from a lower AP.
481
+
482
+ YOLO has good performance on VOC 2007 and its AP degrades less than other methods when applied to artwork. Like DPM, YOLO models the size and shape
483
+ of objects, as well as relationships between objects and where objects commonly appear. Artwork and natural images are very different on a pixel level but they are similar in terms of the size and shape of objects, thus YOLO can still predict good bounding boxes and detections.
484
+
485
+ \section{Real-Time Detection In The Wild}
486
+
487
+ YOLO is a fast, accurate object detector, making it ideal for computer vision applications. We connect YOLO to a webcam and verify that it maintains real-time performance, including the time to fetch images from the camera and display the detections.
488
+
489
+ The resulting system is interactive and engaging. While YOLO processes images individually, when attached to a webcam it functions like a tracking system, detecting objects as they move around and change in appearance. A demo of the system and the source code can be found on our project website: \url{http://pjreddie.com/yolo/}.
490
+
491
+
492
+
493
+ \section{Conclusion}
494
+
495
+ We introduce YOLO, a unified model for object detection. Our model is simple to construct and can be trained directly on full images. Unlike classifier-based approaches, YOLO is trained on a loss function that directly corresponds to detection performance and the entire model is trained jointly.
496
+
497
+ Fast YOLO is the fastest general-purpose object detector in the literature and YOLO pushes the state-of-the-art in real-time object detection. YOLO also generalizes well to new domains making it ideal for applications that rely on fast, robust object detection.
498
+
499
+ \noindent\textbf{Acknowledgements:} This work is partially supported by ONR N00014-13-1-0720, NSF IIS-1338054, and The Allen Distinguished Investigator Award.
500
+
501
+
502
+
503
+
504
+
505
+
506
+
507
+ \pagebreak
508
+ {\small
509
+ \bibliographystyle{ieee}
510
+ \bibliography{egbib}
511
+ }
512
+
513
+
514
+ \end{document}
papers/1506/1506.02753.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1509/1509.01469.tex ADDED
@@ -0,0 +1,867 @@
1
+ \documentclass{article} \usepackage{enumitem}
2
+ \usepackage{times}
3
+ \usepackage{subcaption}
4
+ \usepackage{graphicx}
5
+ \usepackage{amsmath}
6
+ \usepackage{amssymb}
7
+ \usepackage{comment}
8
+ \usepackage{amsthm}
9
+
10
+ \usepackage{tikz, pgfplots}
11
+ \usepgfplotslibrary{groupplots}
12
+ \usetikzlibrary{spy}
13
+ \usepackage{pgfplotstable}
14
+
15
+
16
+ \newtheorem{theorem}{Theorem}[section]
17
+ \newtheorem{definition}{Definition}[section]
18
+ \newtheorem{lemma}{Lemma}[section]
19
+ \DeclareMathOperator*{\E}{\mathbb{E}}
20
+ \DeclareMathOperator*{\argmin}{\operatornamewithlimits{argmin}}
21
+ \DeclareMathOperator*{\argmax}{\operatornamewithlimits{argmax}}
22
+ \usepackage[margin=1in,bottom=1in,top=.875in]{geometry}
23
+
24
+ \title{Quantization based Fast Inner Product Search}
25
+ \date{}
26
+ \author{
27
+ Ruiqi Guo,~Sanjiv Kumar,~Krzysztof Choromanski,~David Simcha\\
28
+ \\
29
+ Google Research, New York, NY 10011, USA\\
30
+ \texttt{\{guorq, sanjivk, kchoro, dsimcha\}@google.com} \\
31
+ }
32
+
33
+ \newcommand{\fix}{\marginpar{FIX}}
34
+ \newcommand{\new}{\marginpar{NEW}}
35
+
36
+
37
+
38
+ \begin{document}
39
+
40
+ \maketitle
41
+
42
+ \begin{abstract}
43
+ We propose a quantization based approach for fast approximate Maximum Inner Product Search (MIPS). Each database vector is quantized in multiple subspaces via a set of codebooks, learned directly by minimizing the inner product quantization error. Then, the inner product of a query to a database vector is approximated as the sum of inner products with the subspace quantizers. Different from recently proposed LSH approaches to MIPS, the database vectors and queries do not need to be augmented in a higher dimensional feature space. We also provide a theoretical analysis of the proposed approach, consisting of the concentration results under mild assumptions. Furthermore, if a small sample of example queries is given at the training time, we propose a modified codebook learning procedure which further improves the accuracy. Experimental results on a variety of datasets including those arising from deep neural networks show that the proposed approach significantly outperforms the existing state-of-the-art.
44
+ \end{abstract}
45
+
46
+ \section{Introduction}
47
+ Many information processing tasks such as retrieval and classification involve computing the inner product of a query vector with a set of database vectors, with the goal of returning the database instances having the largest inner products. This is often called Maximum Inner Product Search (MIPS) problem. Formally, given a database $X=\{x_i\}_{i=1\cdots n}$, and a query vector $q$ drawn from the query distribution $\mathbf{Q}$, where $x_i, q \in \mathbb{R}^d$, we want to find $x_q^* \in X$ such that
48
+ $ x_q^*=\argmax_{x \in X} (q^T x)$. This definition can be trivially extended to return top-$N$ largest inner products.
49
+
50
+ The MIPS problem is particularly appealing for large scale applications. For example, a recommendation system needs to retrieve the most relevant items to a user from an inventory of millions of items, whose relevance is commonly represented as inner products~\cite{cremonesi2010performance}. Similarly, a large scale classification system needs to classify an item into one of the categories, where the number of categories may be very large~\cite{dean2013cvpr}. A brute-force computation of inner products via a linear scan requires $O(n d)$ time and space, which becomes computationally prohibitive when the number of database vectors and the data dimensionality are large. Therefore, it is valuable to consider algorithms that can compress the database $X$ and compute approximate $x_q^*$ much faster than the brute-force search.
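+ For concreteness, this exact baseline is a single matrix-vector product followed by a top-$N$ selection (illustrative NumPy sketch; the function and variable names are ours):
+ \begin{verbatim}
+ import numpy as np
+
+ def brute_force_mips(X, q, topn=1):
+     # X: (n, d) database matrix, q: (d,) query; O(nd) time per query.
+     scores = X @ q
+     top = np.argpartition(-scores, topn - 1)[:topn]
+     return top[np.argsort(-scores[top])]  # indices of the largest inner products
+ \end{verbatim}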
51
+
52
+ The problem of MIPS is related to that of Nearest Neighbor Search with respect to $L_2$ distance ($L_2$NNS) or angular distance ($\theta$NNS) between a query and a database vector:
53
+ \[q^T x = 1/2(||x||^2 + ||q||^2 - ||q-x||^2) = ||q||||x||\cos{\theta}, \] or
54
+ \[ \argmax_{x \in X} (q^T x) = \argmax_{x \in X} (||x||^2 - ||q-x||^2) = \argmax_{x \in X}(||x|| \cos\theta), \]
55
+
56
+ where $||.||$ is the $L_2$ norm. Indeed, if the database vectors are scaled such that $||x||=$ constant $~~\forall x \in X$, the MIPS problem becomes equivalent to the L$_2$NNS or $\theta$NNS problems, which have been studied extensively in the literature. However, when the norms of the database vectors vary, as is often true in practice, the MIPS problem becomes quite challenging. The inner product (distance) does not satisfy the basic axioms of a metric such as triangle inequality and coincidence. For instance, it is possible to have $x^T x \leq x^T y$ for some $y \neq x$. In this paper, we focus on the MIPS problem where both database and the query vectors can have arbitrary norms.
57
+
58
+ As the main contribution of this paper, we develop a Quantization-based Inner Product (QUIP) search method to address the MIPS problem. We formulate the problem of quantization as that of codebook learning, which directly minimizes the quantization error in inner products (Sec.~\ref{sec:approx}). Furthermore, if a small sample of example queries is provided at the training time, we propose a constrained optimization framework which further improves the accuracy (Sec.~\ref{sec:opt}). We also provide a concentration-based theoretical analysis of the proposed method (Sec.~\ref{sec:theory}). Extensive experiments on four real-world datasets, involving recommendation (\emph{Movielens}, \emph{Netflix}) and deep-learning based classification (\emph{ImageNet} and \emph{VideoRec}) tasks show that the proposed approach consistently outperforms the state-of-the-art techniques under both fixed space and fixed time scenarios (Sec.~\ref{sec:experiment}).
59
+
60
+
61
+
62
+ \section{Related works}
63
+ \label{sec:relatedworks}
64
+
65
+
66
+ The MIPS problem has been studied for more than a decade. For instance, Cohen et al.~\cite{cohen1999approximating} studied it in the context of document clustering and presented a method based on randomized sampling without computing the full matrix-vector multiplication. In \cite{koenigstein2012cikm, ram2012kdd}, the authors described a procedure to modify tree-based search to adapt to MIPS criterion. Recently, Bachrach et al.~\cite{bachrach2014speeding} proposed an approach that transforms the input vectors such that the MIPS problem becomes equivalent to the $L_2$NNS problem in the transformed space, which they solved using a PCA-Tree.
67
+
68
+ The MIPS problem has received a renewed attention with the recent seminal work from Shrivastava and Li~\cite{shrivastava2014asymmetric}, which introduced an Asymmetric Locality Sensitive Hashing (ALSH) technique with provable search guarantees. They also transform MIPS into $L_2$NNS, and use the popular LSH technique~\cite{andoni2006lsh}.
69
+ Specifically, ALSH applies different vector transformations to a database vector $x$ and the query $q$, respectively:
70
+ \[
71
+ \hat{x} = [\tilde{x}; ||\tilde{x}||^2; ||\tilde{x}||^4; \cdots ||\tilde{x}||^{2^m}].~~~~\hat{q} = [q; 1/2; 1/2; \cdots ;1/2].
72
+ \]
73
+ where $\tilde{x}=U_0 \frac{x}{\max_{x \in X} ||x||}$, $U_0$ is some constant that satisfies $0<U_0<1$, and $m$ is a nonnegative integer. Hence, $x$ and $q$ are mapped to a new $(d+m)$ dimensional space asymmetrically. Shrivastava and Li~\cite{shrivastava2014asymmetric} showed that when $m\rightarrow \infty$, MIPS in the original space is equivalent to $L_2$NNS in the new space. The proposed hash function followed $L_2$LSH form~\cite{andoni2006lsh}:
74
+ $
75
+ h^{L2}_i(\hat{x})=\lfloor \frac{P_i^T \hat{x}+b_i}{r} \rfloor,
76
+ $
77
+ where $P_i$ is a $(d+m)$-dimensional vector whose entries are sampled i.i.d from the standard Gaussian, $\mathcal{N}(0,1)$, and $b_i$ is sampled uniformly from $[0, r]$. The same authors later proposed an improved version of ALSH
78
+ based on Signed Random Projection (SRP)~\cite{shrivastava014e}. It transforms each vector using a slightly different procedure and represents it as a binary code. Then, Hamming distance is used for MIPS.
79
+ \[
80
+ \hat{x} = [\tilde{x}; \frac{1}{2}-||\tilde{x}||^2; \frac{1}{2}-||\tilde{x}||^4; \cdots \frac{1}{2}-||\tilde{x}||^{2^m}],~~~~\hat{q} = [q; 0; 0; \cdots ;0], ~~\textrm{and}
81
+ \]
82
+ \[
83
+ h^{SRP}_i(\hat{x})=\mathrm{sign}(P_i^T \hat{x});~~Dist^{SRP}(x,q)=\sum_{i=1}^{b} \mathbb{I}\big[h_i^{SRP} (\hat{x}) \neq h_i^{SRP} (\hat{q})\big].
84
+ \]
85
+ Recently, Neyshabur and Srebro~\cite{neyshabur2014simpler} argued that a symmetric transformation was sufficient to develop a provable LSH approach for the MIPS problem if query was restricted to unit norm. They used a transformation similar to the one used by Bachrach et al.~\cite{bachrach2014speeding} to augment the original vectors:
86
+ \[
87
+ \hat{x} = [\tilde{x}; \sqrt{1-||\tilde{x}||^2}].~~~~\hat{q} = [\tilde{q}; 0].
88
+ \]
89
+ where $\tilde{x}=\frac{x}{\max_{x\in X} ||x||}$, $\tilde{q}=\frac{q}{||q||}$. They showed that this transformation led to significantly improved results over the SRP based LSH from~\cite{shrivastava014e}. In this paper, we take a quantization based view of the MIPS problem and show that it leads to even better accuracy under both fixed-space and fixed-time budgets on a variety of real-world tasks.
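+ The following sketch illustrates this symmetric transformation followed by signed random projections, under our reading of~\cite{neyshabur2014simpler}; the variable names and hashing details are our own choices.
+ \begin{verbatim}
+ import numpy as np
+
+ def simple_lsh_codes(X, Q, b, seed=0):
+     # Sketch: augment each database vector by one coordinate so it has unit
+     # norm, zero-pad the (normalized) query, then hash both with b signed
+     # random projections and compare codes via Hamming distance.
+     rng = np.random.default_rng(seed)
+     Xn = X / np.linalg.norm(X, axis=1).max()           # scale into the unit ball
+     Xh = np.hstack([Xn, np.sqrt(np.maximum(0.0, 1.0 - (Xn ** 2).sum(1, keepdims=True)))])
+     Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)  # unit-norm queries
+     Qh = np.hstack([Qn, np.zeros((Q.shape[0], 1))])
+     P = rng.standard_normal((Xh.shape[1], b))
+     return np.sign(Xh @ P), np.sign(Qh @ P)
+ \end{verbatim}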
90
+
91
+
92
+
93
+
94
+ \section{Quantization-based inner product (QUIP) search}
95
+ \label{sec:approx}
96
+ Instead of augmenting the input vectors to a higher dimensional space as in~\cite{neyshabur2014simpler,shrivastava2014asymmetric}, we approximate the inner products by mapping each vector to a set of subspaces, followed by independent quantization of database vectors in each subspace. In this work, we use a simple procedure for generating the subspaces. Each vector's elements are first permuted using a random (but fixed) permutation\footnote{Another possible choice is random rotation of the vectors which is slightly more expensive than permutation but leads to improved theoretical guarantees as discussed in the appendix.}. Then each permuted vector is mapped to $K$ subspaces using simple chunking, as done in product codes~\cite{sabin1984icassp, jegou2011pami}. For ease of notation, in the rest of the paper we will assume that both query and database vectors have been permuted. Chunking leads to block-decomposition of the query $q \sim \mathbf{Q}$ and each database vector $x \in X$:
97
+ \[
98
+ x=[x^{(1)}; x^{(2)}; \cdots; x^{(K)}]~~~~q=[q^{(1)}; q^{(2)}; \cdots; q^{(K)}],
99
+ \]
100
+ where each $x^{(k)}, q^{(k)} \in \mathbb{R}^l, l = \lceil d/K \rceil.$\footnote{One can do zero-padding wherever necessary, or use different dimensions in each block.}
101
+ The $k^{th}$ subspace containing the $k^{th}$ blocks of all the database vectors, $\{x^{(k)}\}_{i=1...n}$, is then quantized by a codebook $U^{(k)} \in \mathbb{R}^{l \times C_k}$ where $C_k$ is the number of quantizers in subspace $k$. Without loss of generality, we assume $C_k = C ~~\forall~k$. Then, each database vector $x$ is quantized in the $k^{th}$ subspace as $x^{(k)} \approx U^{(k)} \alpha_x^{(k)}$, where $\alpha_x^{(k)}$ is a $C$-dimensional one-hot assignment vector with exactly one $1$ and rest $0$. Thus, a database vector $x$ is quantized by a single dictionary element $u^{(k)}_x$ in the $k^{th}$ subspace.
102
+ Given the quantized database vectors, the exact inner product is approximated as:
103
+ \begin{equation}
104
+ q^T x = \sum_k q^{(k)T} x^{(k)} \approx \sum_k q^{(k)T} U^{(k)} \alpha^{(k)}_x = \sum_k q^{(k)T} u^{(k)}_x
105
+ \label{eqn:approx}
106
+ \end{equation}
107
+ Note that this approximation is `asymmetric' in the sense that only database vectors $x$ are quantized, not the query vector $q$. One can quantize $q$ as well but it will lead to increased approximation error. In fact, the above asymmetric computation for all the database vectors can still be carried out very efficiently via lookup tables similar to~\cite{jegou2011pami}, except that each entry in the $k^{th}$ table is a dot product between $q^{(k)}$ and columns of $U^{(k)}$.
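+ A compact sketch of this table-based scan is given below, assuming the codebooks and the per-vector assignments have already been learned; the array shapes and names are our own conventions.
+ \begin{verbatim}
+ import numpy as np
+
+ def quip_scores(q, codebooks, codes):
+     # q: (d,) permuted query; codebooks: list of K arrays, each (l, C);
+     # codes: (n, K) integer assignment of every database vector per subspace.
+     K = len(codebooks)
+     q_blocks = np.split(q, K)             # assumes d divisible by K (else zero-pad)
+     # one lookup table per subspace: inner products of q^(k) with all C codewords
+     tables = [q_blocks[k] @ codebooks[k] for k in range(K)]
+     scores = np.zeros(codes.shape[0])
+     for k in range(K):
+         scores += tables[k][codes[:, k]]  # gather the stored code's entry, accumulate
+     return scores                         # approximate q^T x for every database vector
+ \end{verbatim}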
108
+
109
+ Before describing the learning procedure for the codebooks $U^{(k)}$ and the assignment vectors $\alpha_x^{(k)}$ $\forall~x, k$, we first show an interesting property of the approximation in (\ref{eqn:approx}). Let $S^{(k)}_c$ be the $c^{th}$ partition of the database vectors in subspace $k$ such that $S^{(k)}_c = \{x^{(k)}\!:\alpha_x^{(k)}[c] = 1\}$, where $\alpha^{(k)}_x[c]$ is the $c^{th}$ element of $\alpha^{(k)}_x$ and $U_c^{(k)}$ is the $c^{th}$ column of $U^{(k)}$.
110
+
111
+ \begin{lemma}
112
+ \label{thm:unbiased}
113
+ If $\displaystyle U^{(k)}_c=\frac{1}{|S^{(k)}_c|}\sum_{x^{(k)} \in S^{(k)}_c} x^{(k)}$, then~(\ref{eqn:approx}) is an unbiased estimator of $q^T x$.
114
+ \end{lemma}
115
+ \begin{proof}
116
+ \begin{align*}
117
+ \E_{q\sim \mathbf{Q},x\in X}[q^Tx - \sum_k q^{(k)T} u_x^{(k)}]&=\sum_k \E_{q\sim\mathbf{Q}} q^{(k)T} \E_{x\in X}[x^{(k)}-u_x^{(k)}]\\
118
+ &=\sum_k \E_{q \sim \mathbf{Q}} q^{(k)T} \E_{x\in X}[\sum_c \mathbb{I}[x^{(k)} \in S^{(k)}_c] (x^{(k)}-U_c^{(k)})]\\
119
+ &=0.
120
+ \end{align*}
121
+ Here $\mathbb{I}$ is the indicator function, and the last equality holds because, for each $k$ and $c$, $\E_{x^{(k)} \in S^{(k)}_c}[x^{(k)}-U^{(k)}_c]=0$ by definition.
122
+ \end{proof}
123
+
124
+ We will provide the concentration inequalities for the estimator in (\ref{eqn:approx}) in Sec.~\ref{sec:theory}. Next we describe the learning of quantization codebooks in different subspaces. We focus on two different training scenarios: when only the database vectors are given (Sec.~\ref{sec:kmeans}), and when a sample of example queries is also provided (Sec.~\ref{sec:opt}). The latter can result in a significant performance gain when queries do not follow the same distribution as the database vectors. Note that the actual queries used at test time are different from the example queries, and hence unknown at training time.
125
+
126
+ \subsection{Learning quantization codebooks from database}
127
+ \label{sec:kmeans}
128
+ Our goal is to learn data quantizers that minimize the quantization error due to the inner product approximation given in (\ref{eqn:approx}). Assuming each subspace to be independent, the expected squared error can be expressed as:
129
+ \begin{align}
130
+ \begin{split}
131
+ \E_{q \sim \mathbf{Q}} \E_{x \in X} [q^T x - \sum_k q^{(k)T} U^{(k)}\alpha_x^{(k)}]^2 &= \sum_k \E_{q \sim \mathbf{Q}} \E_{x \in X} [q^{(k)T} (x^{(k)} - u_x^{(k)})]^2\\
132
+ &=\sum_k \E_{x \in X} (x^{(k)} - u_x^{(k)})^T \Sigma_{\mathbf{Q}}^{(k)} (x^{(k)} - u_x^{(k)}),
133
+ \end{split}
134
+ \label{eqn:mse}
135
+ \end{align}
136
+ where $\Sigma^{(k)}_{\mathbf{Q}}=\E_{q \sim \mathbf{Q}} q^{(k)} q^{(k)T}$ is the non-centered query covariance matrix in subspace $k$. Minimizing the error in (\ref{eqn:mse}) is equivalent to solving a modified \emph{k-Means} problem in each subspace independently. Instead of using the Euclidean distance, Mahalanobis distance specified by $\Sigma_{\mathbf{Q}}^{(k)}$ is used for assignment. One can use the standard Lloyd's algorithm to find the solution for each subspace $k$ iteratively by alternating between two steps:
137
+ \begin{eqnarray}
138
+ \label{eqn:kmeans}
139
+ c^{(k)}_x &=& \argmin_{c} (x^{(k)} - U^{(k)}_c)^T \Sigma_{\mathbf{Q}}^{(k)} (x^{(k)} - U^{(k)}_c), ~~ \alpha_x^{(k)}[c_x^{(k)}] = 1, ~~\forall~c,x \nonumber \\
140
+ U^{(k)}_c&=&\frac{\sum_{x^{(k)} \in S^{(k)}_c} x^{(k)}} {|S^{(k)}_c|} ~~~\forall~c.
141
+ \end{eqnarray}
142
+ Lloyd's algorithm is known to converge to a local minimum (except in pathological cases where it may oscillate between equivalent solutions)~\cite{bottou94kmeans}. Also, note that the resulting quantizers are always the Euclidean means of their corresponding partitions, and hence, Lemma~\ref{thm:unbiased} is applicable to (\ref{eqn:mse}) as well, leading to an unbiased estimator.
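+ A minimal single-subspace sketch of this Mahalanobis \emph{k-Means} is shown below (illustrative NumPy; the dense distance computation is written for clarity rather than efficiency, and the initialization is our own choice).
+ \begin{verbatim}
+ import numpy as np
+
+ def quip_kmeans_subspace(Xk, Sigma_q, C=256, iters=20, seed=0):
+     # Xk: (n, l) data block for one subspace; Sigma_q: (l, l) non-centered
+     # query covariance for that subspace.  Returns codebook and assignments.
+     rng = np.random.default_rng(seed)
+     U = Xk[rng.choice(len(Xk), C, replace=False)].astype(float)   # (C, l) init
+     for _ in range(iters):
+         diff = Xk[:, None, :] - U[None, :, :]                     # (n, C, l)
+         # assignment step: Mahalanobis distance (x - u)^T Sigma_q (x - u)
+         dist = np.einsum('ncl,lm,ncm->nc', diff, Sigma_q, diff)
+         assign = dist.argmin(axis=1)
+         for c in range(C):
+             members = Xk[assign == c]
+             if len(members):
+                 U[c] = members.mean(axis=0)   # update step: Euclidean mean
+     return U, assign
+ \end{verbatim}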
143
+
144
+ The above procedure requires the non-centered query covariance matrix $\Sigma_{\mathbf{Q}}$, which will not be known if query samples are not available at the training time. In that case, one possibility is to assume
145
+ that the queries come from the same distribution as the database vectors, i.e., $\Sigma_{\mathbf{Q}}=\Sigma_{X}$. In the experiments we will show that this version performs reasonably well. However, if a small set of example queries is available at the training time, besides estimating the query covariance matrix, we propose to impose novel constraints that lead to improved quantization, as described next.
146
+
147
+ \subsection{Learning quantization codebook from database and example query samples}
148
+ \label{sec:opt}
149
+ In most applications, it is possible to have access to a small set of example queries, $Q$. Of course, the actual queries used at the test-time are different from this set. Given these exemplar queries, we propose to modify the learning criterion by imposing additional constraints while minimizing the expected quantization error. Given a query $q$, since we are interested in finding the database vector $x^*_q$ with highest dot-product, ideally we want the dot product of query to the quantizer of $x^*_q$ to be larger than the dot product with any other quantizer. Let us denote the matrix containing the $k^{th}$ subspace assignment vectors $\alpha_x^{(k)}$ for all the database vectors by $A^{(k)}$. Thus, the modified optimization is given as,
150
+ \begin{align}
151
+ \begin{split}
152
+ \argmin_{U^{(k)}, A^{(k)}} ~~~~~~& \E_{q \in Q} \sum_{x \in X} [\sum_k q^{(k)T} x^{(k)} - \sum_k q^{(k)T} U^{(k)} \alpha^{(k)}_x ]^2 \\
153
+ s.t. ~~~~~~& \forall q,x,~~ \sum_k q^{(k)T} U^{(k)}\alpha_x^{(k)} \leq \sum_k q^{(k)T}U^{(k)}\alpha_{x_q^*}^{(k)}~\text{where}~~x_q^*=\argmax_{x} q^T x
154
+ \end{split}
155
+ \label{eqn:opt}
156
+ \end{align}
157
+ We relax the above hard constraints using slack variables to allow for some violations, which leads to the following equivalent objective:
158
+ \begin{align}
159
+ \argmin_{U^{(k)}, A^{(k)}} \E_{q \in Q} \sum_{x \in X} \sum_k \big(q^{(k)T} (x^{(k)} - U^{(k)}\alpha_x^{(k)})\big)^2
160
+ + \lambda \sum_{q \in Q} \sum_{x \in X} [\sum_k q^{(k)T} (U^{(k)}\alpha_x^{(k)}-U^{(k)}\alpha_{x^*_q}^{(k)})]_{+}
161
+ \label{eqn:opt2}
162
+ \end{align}
163
+ where $[z]_{+}=\max(z,0)$ is the standard hinge loss, and $\lambda$ is a nonnegative coefficient. We use an iterative procedure to solve the above optimization, which alternates between solving $U^{(k)}$ and $A^{(k)}$ for each $k$. In the beginning, each codebook $U^{(k)}$ is initialized with a set of random database vectors mapped to the $k^{th}$ subspace. Then, we iterate through the following three steps:
164
+ \begin{enumerate}[leftmargin=15pt,itemsep=-.3ex]
165
+ \item Find a set of violated constraints $W$ with each element as a triplet, i.e., $W_j = \{q_j, x^*_{q_j}, x_j^-\}_{j=1\cdots J}$, where $q_j \in Q$ is an exemplar query, $x^*_{q_j}$ is the database vector having the maximum dot product with $q_j$, and $x_j^-$ is a vector such that $q_j^T x_{q_j}^* \geq q_j^T x_j^-$ but
166
+ \[
167
+ \sum_k q_j^{(k)T} U^{(k)} \alpha^{(k)}_{x^*_{q_j}} < \sum_k q_j^{(k)T} U^{(k)}\alpha^{(k)}_{x_j^-}
168
+ \]
169
+ \item Fixing $U^{(k)}$ and all columns of $A^{(k)}$ except $\alpha^{(k)}_{x}$, one can update $\alpha^{(k)}_{x}$ $\forall ~ x, k$ as:
170
+ \begin{eqnarray*}
171
+ c^{(k)}_{x}\!=\!\argmin_c\!\big( (x^{(k)} \!-\! U^{(k)}_c)^T \Sigma^{(k)}_{Q} (x^{(k)}\! - \!U^{(k)}_c) \!+\!
172
+ \lambda \sum_j\! q_j^{(k)T} U^{(k)}_c (\mathbb{I}[x=x_j^-]\! -\! \mathbb{I}[x=x_{q_j}^*]) \big), \\
173
+ \alpha^{(k)}_{x}[c^{(k)}_{x}] = 1
174
+ \end{eqnarray*}
175
+
176
+ Since $C$ is typically small (256 in our experiments), we can find $c^{(k)}_{x}$ by enumerating all possible values of $c$.
177
+
178
+ \item Fixing $A$, and all the columns of $U^{(k)}$ except $U_c^{(k)}$, one can update $U_c^{(k)}$ by gradient descent where gradient can be computed as:
179
+ \begin{align*}
180
+ \nabla U_c^{(k)}= 2 \Sigma^{(k)}_{Q} \sum_{x\in X} \alpha^{(k)}_x[c] (U_c^{(k)} - x^{(k)}) + \lambda \sum_j \big( q_j^{(k)}(\alpha^{(k)}_{x_j^-}[c]-\alpha^{(k)}_{x_{q_j}^*}[c]) \big)
181
+ \end{align*}
182
+ \end{enumerate}
183
+ Note that if no violated constraint is found, step 2 is equivalent to finding the nearest neighbor of $x^{(k)}$ in $U^{(k)}$ in the Mahalanobis space specified by $\Sigma^{(k)}_{Q}$. Also, in that case, by setting $\nabla U_c^{(k)}=0$, the update rule in step 3 becomes $ U^{(k)}_c=\frac{1}{|S^{(k)}_c|}\sum_{x^{(k)} \in S^{(k)}_c} x^{(k)}$, which is the stationary point for the first term. Thus, if no constraints are violated, the above procedure becomes identical to the \emph{k-Means}-like procedure described in Sec.~\ref{sec:kmeans}. Steps 2 and 3 are guaranteed not to increase the value of the objective in (\ref{eqn:opt}).
184
+ In practice, we have found that the iterative procedure can be significantly sped up by modifying step 3 to perturb the stationary point of the first term with a single gradient step of the second term. The time complexity of step 1 is at most $O(nKC|Q|)$, but in practice it is much cheaper because we limit the number of constraints in each iteration to be at most $J$. Step 2 takes $O(nKC)$ time and step 3 takes $O((n+J)KC)$ time. In all the experiments, we use at most $J=1000$ constraints in each iteration. Also, we fix $\lambda=.01$, the step size $\eta_t=1/(1+t)$ at each iteration $t$, and the maximum number of iterations $T=30$.
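+ As an illustration, step 1 above can be sketched as follows; the data layout matches the lookup-table sketch given earlier, and the function simply returns the first $J$ violations it encounters.
+ \begin{verbatim}
+ import numpy as np
+
+ def violated_triplets(Q_sample, X, codebooks, codes, max_j=1000):
+     # Collect triplets (q, x*_q, x^-) whose quantized scores invert the exact
+     # ranking.  codebooks[k]: (l, C); codes: (n, K) assignments.
+     K, triplets = len(codebooks), []
+     for q in Q_sample:
+         exact = X @ q
+         star = int(exact.argmax())               # true maximum inner product
+         qb = np.split(q, K)
+         approx = sum(qb[k] @ codebooks[k][:, codes[:, k]] for k in range(K))
+         worse = approx > approx[star]            # quantization inverted the order
+         worse[star] = False
+         for neg in np.flatnonzero(worse):
+             triplets.append((q, star, int(neg)))
+             if len(triplets) >= max_j:
+                 return triplets
+     return triplets
+ \end{verbatim}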
185
+
186
+ \section{Theoretical analysis}
187
+ \label{sec:theory}
188
+
189
+
190
+ In this section we present concentration results about the quality of the quantization-based inner product search method. Due to the space constraints, proofs of the theorems are provided in the appendix. We start by defining a few quantities.
191
+ \begin{definition}
192
+ \label{def:event}
193
+ Given fixed $a, \epsilon > 0$, let $\mathcal{F}(a, \epsilon)$ be an event such that
194
+ the exact dot product $q^{T}x$ is at least $a$, but the quantized version is either smaller than $q^{T}x(1-\epsilon)$ or larger than $q^{T}x(1+\epsilon)$.
195
+ \end{definition}
196
+ Intuitively, the probability of event $\mathcal{F}(a,\epsilon)$ measures the chance that difference between the exact and the quantized dot product is large, when the exact dot product is large. We would like this probability to be small. Next, we introduce the concept of balancedness for subspaces.
197
+
198
+ \begin{definition}
199
+ \label{def:balance}
200
+ Let $v$ be a vector which is chunked into $K$ subspaces: $v^{(1)},...,v^{(K)}$. We say that chunking is $\eta$-balanced if the following holds for every $k \in \{1,...,K\}$:
201
+ $$\|v^{(k)}\|^{2} \leq (\frac{1}{K} + (1-\eta))\|v\|^{2}$$
202
+ \end{definition}
203
+
204
+
205
+ Since the input data may not satisfy the balancedness condition, we next show that random permutation tends to create more balanced subspaces. Obviously, a (fixed) random permutation applied to vector entries does not change the dot product.
206
+
207
+ \begin{theorem}
208
+ \label{perm_theorem_main}
209
+ Let $v$ be a vector of dimensionality $d$ and let $perm(v)$ be its version after applying a random permutation of its dimensions. Then $perm(v)$ is $1$-balanced in expectation.
210
+ \end{theorem}
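+ This effect is easy to probe empirically; the short sketch below reports the fraction of $\|v\|^{2}$ received by each block after one random permutation (individual draws fluctuate around $1/K$, matching the statement in expectation).
+ \begin{verbatim}
+ import numpy as np
+
+ def block_balance(v, K, seed=0):
+     # Fraction of ||v||^2 landing in each of the K blocks after a fixed
+     # random permutation of the coordinates (assumes len(v) divisible by K).
+     rng = np.random.default_rng(seed)
+     blocks = np.split(v[rng.permutation(len(v))], K)
+     return np.array([np.sum(b ** 2) for b in blocks]) / np.sum(v ** 2)
+ \end{verbatim}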
211
+
212
+ Another choice for creating balancedness is via a (fixed) random rotation, which also does not change the dot-product. This leads to an even better balancedness property, as discussed in the appendix (see Theorem 2.1).
213
+ Next we show that the probability of $\mathcal{F}(a,\epsilon)$ can be upper bounded by an exponentially small quantity in $K$, indicating that the quantized dot products accurately approximate large exact dot products when the quantizers are the means obtained from Mahalanobis \emph{k-Means} as described in Sec.~\ref{sec:kmeans}. Note that in this case quantized dot-product is an unbiased estimator of the exact dot-product as shown in Lemma~\ref{thm:unbiased}.
214
+ \begin{theorem}
215
+ \label{lipsch_theory_main}
216
+ Assume that the dataset $X$ of dimensionality $d$ resides entirely in the ball $\mathcal{B}(p,r)$ of radius $r$, centered at $p$. Further, let $\{x-p : x \in X\}$ be $\eta$-balanced for some $0 < \eta < 1$, where the subtraction is applied to every $x \in X$, and let $\E[\sum_k (x^{(k)}-u_x^{(k)})]_{k=1\cdots K}$ be a martingale. Denote $q_{max} = \max_{k=1,...,K} \max_{q \in Q}\|q^{(k)}\|$. Then, there exist $K$ sets of codebooks, each with $C$ quantizers, such that the following is true:
217
+ $$\mathbb{P}(\mathcal{F}(a,\epsilon)) \leq
218
+ 2e^{-(\frac{a \epsilon}{r})^{2}\frac{C^{\frac{2K}{d}}}{8q_{max}^{2}(1+(1-\eta)K)}}.$$
219
+ \end{theorem}
220
+
221
+ The above theorem shows that the probability of $\mathcal{F}(a, \epsilon)$ decreases exponentially as the number of subspaces (i.e., blocks) $K$ increases. This is consistent with the experimental observation that increasing $K$ leads to more accurate retrieval.
222
+
223
+
224
+ Furthermore, if we assume that each subspace is independent, which is a slightly more restrictive assumption than the martingale assumption made in Theorem~\ref{lipsch_theory_main}, we can use Berry-Esseen~\cite{NT07} inequality to obtain an even stronger upper bound as given below.
225
+
226
+ \begin{theorem}
227
+ \label{ind_theory1_main}
228
+ Suppose, $\Delta = \max_{k=1,...,K} \Delta^{(k)}$, where $\Delta^{(k)}=\max_x ||u^{(k)}_x-x^{(k)}||$ is the maximum distance between a datapoint and its quantizer in subspace $k$. Assume $\Delta \leq \frac{a^{\frac{1}{3}}}{q_{max}}$. Then,
229
+ $$\mathbb{P}(\mathcal{F}(a,\epsilon)) \leq \frac{2\sum_{k=1}^{K}L^{(k)}}{\sqrt{2\pi}|X|a\epsilon}e^{-\frac{a^{2}\epsilon^{2}|X|^{2}}{2(\sum_{k=1}^{K}L^{(k)})^{2}}}+
230
+ \frac{\beta K(\sum_{k=1}^{K} L^{(k)})^{\frac{3}{2}}}{a^{2}\epsilon^{3}|X|^{\frac{3}{2}}},$$
231
+ where
232
+ $L^{(k)} = E_{q \in Q} [\sum_{S^{(k)}_{c}} \sum_{x\in S^{(k)}_{c}}(q^{(k)T} x^{k} - q^{(k)T} u^{(k)}_{x})^2]$ and
233
+ $\beta>0$ is some universal constant.
234
+ \end{theorem}
235
+
236
+
237
+
238
+
239
+
240
+
241
+ \section{Experimental results}
242
+ \label{sec:experiment}
243
+
244
+ \begin{figure}
245
+ \centering
246
+ \begin{subfigure}[c]{1 \textwidth}
247
+ \vskip -9pt
248
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top1/movielens_pr_8.pdf} \hskip -3pt
249
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/movielens_pr_16.pdf} \hskip -3pt
250
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/movielens_pr_32.pdf} \hskip -3pt
251
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/movielens_pr_64.pdf}
252
+ \vskip -9pt
253
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top5/movielens_pr_8.pdf} \hskip -3pt
254
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/movielens_pr_16.pdf} \hskip -3pt
255
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/movielens_pr_32.pdf} \hskip -3pt
256
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/movielens_pr_64.pdf}
257
+ \vskip -9pt
258
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top10/movielens_pr_8.pdf} \hskip -3pt
259
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/movielens_pr_16.pdf} \hskip -3pt
260
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/movielens_pr_32.pdf} \hskip -3pt
261
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/movielens_pr_64.pdf}
262
+ \vskip -3pt
263
+ \subcaption{Movielens dataset}
264
+ \end{subfigure}
265
+ \begin{subfigure}[c]{1 \textwidth}
266
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top1/netflix_pr_8.pdf} \hskip -3pt
267
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/netflix_pr_16.pdf} \hskip -3pt
268
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/netflix_pr_32.pdf} \hskip -3pt
269
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/netflix_pr_64.pdf}
270
+ \vskip -9pt
271
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top5/netflix_pr_8.pdf} \hskip -3pt
272
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/netflix_pr_16.pdf} \hskip -3pt
273
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/netflix_pr_32.pdf} \hskip -3pt
274
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/netflix_pr_64.pdf}
275
+ \vskip -9pt
276
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top10/netflix_pr_8.pdf} \hskip -3pt
277
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/netflix_pr_16.pdf} \hskip -3pt
278
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/netflix_pr_32.pdf} \hskip -3pt
279
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/netflix_pr_64.pdf}
280
+ \vskip -3pt
281
+ \subcaption{Netflix dataset}
282
+ \end{subfigure}
283
+ \begin{subfigure}[c]{1 \textwidth}
284
+ \end{subfigure}
285
+ \caption{\label{fig:fixed}
286
+ Precision Recall curves (higher is better) for different methods on Movielens and Netflix datasets, retrieving Top-1, 5 and 10 items. \textbf{Baselines:} \emph{Signed ALSH}~\cite{shrivastava014e}, \emph{L2 ALSH}~\cite{shrivastava2014asymmetric} and \emph{Simple LSH}~\cite{neyshabur2014simpler}. \textbf{Proposed Methods:} \emph{QUIP-cov(x)}, \emph{QUIP-cov(q)}, \emph{QUIP-opt}. Curves for fixed bit experiments are plotted in solid line for both the baselines and proposed methods, where the number of bits used are $\mathbf{b = 64, 128, 256, 512}$ respectively, from left to right. Curves for fixed time experiment are plotted in dashed lines. The fixed time plots are the same as the fixed bit plots for the proposed methods. For the baseline methods, the number of bits used in fixed time experiments are $\mathbf{b = 192, 384, 768, 1536}$ respectively, so that their running time is comparable with that of the proposed methods.}
287
+ \end{figure}
288
+
289
+ We conducted experiments with 4 datasets which are summarized below:
290
+ \begin{description}[leftmargin=10pt,itemsep=-0.3ex, topsep=0pt]
291
+ \item[Movielens] This dataset consists of user ratings collected by the MovieLens site from web users. We use the same SVD setup as described in the ALSH paper~\cite{shrivastava2014asymmetric} and extract 150 latent dimensions from SVD results. This dataset contains 10,681 database vectors and 71,567 query vectors.
292
+
293
+ \item[Netflix] The Netflix dataset comes from the Netflix Prize challenge~\cite{Bennett07thenetflix}. It contains 100,480,507 ratings that users gave to Netflix movies. We process it in the same way as suggested by~\cite{shrivastava2014asymmetric}. That leads to 300 dimensional data. There are 17,770 database vectors and 480,189 query vectors.
294
+
295
+ \item[ImageNet] This dataset comes from the state-of-the-art GoogLeNet~\cite{szegedy2014cvpr} image classifier trained on ImageNet\footnote{The original paper ensembled 7 models and used 144 different crops. In our experiment, we focus on one global crop using one model.}. The goal is to speed up the maximum dot-product search in the last i.e., classification layer. Thus, the weight vectors for different categories form the database while the query vectors are the last hidden layer embeddings from the ImageNet validation set. The data has 1025 dimensions (1024 weights and 1 bias term). There are 1,000 database and 49,999 query vectors.
296
+
297
+ \item[VideoRec] This dataset consists of embeddings of user interests~\cite{davidson2010recsys}, trained via a deep neural network to predict a set of relevant videos for a user. The number of videos in the repository is 500,000. The network is trained with a multi-label logistic loss. As for the \emph{ImageNet} dataset, the last hidden layer embedding of the network is used as query vector, and the classification layer weights are used as database vectors. The goal is to speed up the maximum dot product search between a query and 500,000 database vectors. Each database vector has 501 dimensions (500 weights and 1 bias term). The query set contains 1,000 vectors.
298
+ \end{description}
299
+ Following~\cite{shrivastava2014asymmetric}, we focus on retrieving the Top-1, 5 and 10 highest inner product neighbors for the Movielens and Netflix experiments. For the ImageNet dataset, we retrieve the top-5 categories, as is common in the literature. For the VideoRec dataset, we retrieve the Top-50 videos for recommendation to a user. We experiment with three variants of our technique: (1) \emph{\textbf{QUIP-cov(x)}}: uses only database vectors at training, and replaces $\Sigma_{\mathbf{Q}}$ by $\Sigma_{X}$ in the \emph{k-Means} like codebook learning in Sec.~\ref{sec:kmeans}, (2) \emph{\textbf{QUIP-cov(q)}}: uses $\Sigma_{\mathbf{Q}}$ estimated from a held-out exemplar query set for \emph{k-Means} like codebook learning, and (3) \emph{\textbf{QUIP-opt}}: uses the full optimization based quantization (Sec.~\ref{sec:opt}). We compare the performance (precision-recall curves) with 3 state-of-the-art methods: (1) \emph{\textbf{Signed ALSH}}~\cite{shrivastava014e}, (2) \emph{\textbf{L2 ALSH}}~\cite{shrivastava2014asymmetric}\footnote{The recommended parameters $m=3, U_0=0.85, r=2.5$ were used in the implementation.}; and (3) \emph{\textbf{Simple LSH}}~\cite{neyshabur2014simpler}. We also compare against the PCA-tree version adapted to inner product search as proposed in~\cite{bachrach2014speeding}, which has shown better results than the IP-tree~\cite{ram2012kdd}. The proposed quantization based methods perform much better than PCA-tree, as shown in the appendix.
300
+
301
+
302
+ We conduct two sets of experiments: (i) \emph{fixed bit} - the number of bits used by all the techniques is kept the same, (ii) \emph{fixed time} - the time taken by all the techniques is fixed to be the same. In the fixed bit experiments, we fix the number of bits to be $b = 64, 128, 256, 512$.
303
+ For all the \emph{QUIP} variants, the codebook size for each subspace, $C$, was fixed to $256$, leading to an 8-bit representation of a database vector in each subspace. The number of subspaces (i.e., blocks) was varied as $K=8,16,32,64$, leading to $64, 128, 256, 512$ bit representations, respectively. For the fixed time experiments, we first note that the proposed \emph{QUIP} variants use table lookup based distance computation while the LSH based techniques use POPCNT-based Hamming distance computation. Depending on the number of bits used, we found POPCNT to be 2 to 3 times faster than table lookup. Thus, in the fixed-time experiments, we increase the number of bits for the LSH-based techniques by 3 times to ensure that the time taken by all the methods is the same.
304
+
305
+ \begin{figure}
306
+ \vskip -12pt
307
+ \centering
308
+ \begin{subfigure}[c]{1 \textwidth}
309
+ \centering
310
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted/inception_pr_8.pdf} \hskip -3pt
311
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/inception_pr_16.pdf} \hskip -3pt
312
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/inception_pr_32.pdf} \hskip -3pt
313
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/inception_pr_64.pdf}
314
+ \vskip -3pt
315
+ \subcaption{ImageNet dataset, retrieval of Top 5 items.}
316
+ \end{subfigure}
317
+ \begin{subfigure}[c]{1 \textwidth}
318
+ \centering
319
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted/brain_pr_8.pdf} \hskip -3pt
320
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/brain_pr_16.pdf} \hskip -3pt
321
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/brain_pr_32.pdf} \hskip -3pt
322
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted/brain_pr_64.pdf}
323
+ \vskip -3pt
324
+ \subcaption{VideoRec dataset, retrieval of Top 50 items.}
325
+ \end{subfigure}
326
+ \vskip -6pt
327
+ \begin{subfigure}[c]{1 \textwidth}
328
+ \end{subfigure}
329
+ \caption{\label{fig:fixed_add}
330
+ Precision Recall curves for \emph{ImageNet} and \emph{VideoRec}. See appendix for more results.}
331
+ \vskip -12pt
332
+ \end{figure}
333
+
334
+
335
+
336
+ Figure~\ref{fig:fixed} shows the precision recall curves for \emph{Movielens} and \emph{Netflix}, and Figure~\ref{fig:fixed_add} shows the same for the \emph{ImageNet} and \emph{VideoRec} datasets. All the quantization based approaches outperform the LSH based methods significantly when all the techniques use the same number of bits. Even in the fixed time experiments, the quantization based approaches remain superior to the LSH-based approaches (shown with dashed curves), even though the former use 3 times fewer bits than the latter, leading to a significant reduction in memory footprint.
337
+ Among the quantization methods, \emph{QUIP-cov(q)} typically performs better than \emph{QUIP-cov(x)}, but the gap in performance is not that large. In theory, the non-centered covariance matrix of the queries ($\Sigma_{Q}$) can be quite different than that of the database ($\Sigma_{X}$), leading to drastically different results. However, the comparable performance implies that it is often safe to use $\Sigma_{X}$ when learning a codebook. On the other hand, when a small set of example queries is available, \emph{QUIP-opt} outperforms both \emph{QUIP-cov(x)} and \emph{QUIP-cov(q)} on all four datasets. This is because it learns the codebook with constraints that steer learning towards retrieving the maximum dot product neighbors in addition to minimizing the quantization error. The overall training for \emph{QUIP-opt} was quite fast, requiring 3 to 30 minutes using a single-thread implementation, depending on the dataset size.
338
+
339
+
340
+
341
+
342
+ \vspace{-2mm}
343
+
344
+ \section{Tree-Quantization Hybrids for Large Scale Search}
345
+ The quantization based inner product search techniques described above provide a significant speedup over the brute force search while retaining high accuracy. However, the search complexity is still linear in the number of database points similar to that for the binary embedding methods that do exhaustive scan using Hamming distance ~\cite{shrivastava014e}. When the database size is very large, such a linear scan even with fast computation may not be able to provide the required search efficiency. In this section, we describe a simple procedure to further enhance the speed of QUIPS based on data partitioning. The basic idea of tree-quantization hybrids is to combine tree-based recursive data partitioning with QUIPS applied to each partition. At the training time, one first learns a locality-preserving tree such as hierarchical k-means tree, followed by applying QUIPS to each partition. In practice only a shallow tree is learned such that each leaf contains a few thousand points. Of course, a special case of tree-based partitioners is a flat partitioner such as k-means. At the query time, a query is assigned to more than one partition to deal with the errors caused by hard partitioning of the data. This soft assignment of query to multiple partitions is crucial for achieving good accuracy for high-dimensional data.
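+ A query-time sketch of such a hybrid with a flat partitioner is given below; the data layout and parameter names are our own, and the partitioner used in practice may instead be a shallow hierarchical tree.
+ \begin{verbatim}
+ import numpy as np
+
+ def hybrid_search(q, centers, part_ids, part_codes, codebooks, soft=100, topn=50):
+     # centers: (P, d) partition centers; part_ids[p]: indices of the database
+     # vectors in partition p; part_codes[p]: their (m_p, K) quantization codes.
+     K = len(codebooks)
+     qb = np.split(q, K)
+     tables = [qb[k] @ codebooks[k] for k in range(K)]      # per-subspace lookup tables
+     probe = np.argsort(-(centers @ q))[:soft]              # soft-assign query to partitions
+     cand_ids, cand_scores = [], []
+     for p in probe:
+         codes = part_codes[p]
+         s = sum(tables[k][codes[:, k]] for k in range(K))  # QUIP scores within partition p
+         cand_ids.append(part_ids[p])
+         cand_scores.append(s)
+     ids = np.concatenate(cand_ids)
+     scores = np.concatenate(cand_scores)
+     return ids[np.argsort(-scores)[:topn]]                 # approximate top-N neighbors
+ \end{verbatim}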
346
+
347
+
348
+
349
+ In the \emph{VideoRec} dataset, where $n=500,000$, the quantization approaches (including \emph{QUIP-cov(x), QUIP-cov(q), QUIP-opt}) reduce the search time by a factor of $7.17$ compared to that of brute force search. The tree-quantization hybrid approaches (\emph{Tree-QUIP-cov(x), Tree-QUIP-cov(q), Tree-QUIP-opt}) use 2000 partitions, and each query is assigned to the nearest 100 partitions based on its dot-product with the partition centers. These Tree-QUIP hybrids provide a further speedup of $5.97$x over QUIPS, for an overall end-to-end speedup of $42.81$x over brute force search. To illustrate the effectiveness of the hybrid approach, we plot the precision recall curves for the \emph{Fixed-bit} and \emph{Fixed-time} experiments on \emph{VideoRec} in Figure~\ref{fig:fixed_tree}. In the Fixed-bit experiments, the Tree-Quantization methods have almost the same accuracy as their non-hybrid counterparts (note that the curves almost overlap in Fig. \ref{fig:fixed_tree}(a) for these two versions), while being about 6x faster. In the fixed-time experiments, it is clear that with the same time budget the hybrid approaches return much better results because they do not scan all the datapoints when searching.
350
+
351
+ \begin{figure}
352
+ \vskip -12pt
353
+ \centering
354
+ \begin{subfigure}[c]{0.48 \textwidth}
355
+ \includegraphics[trim=1.3in 3.3in 1.3in 3.3in,clip,width=1 \textwidth]{figure/formatted/tree-fixedbit.pdf}
356
+ \subcaption{Fixed-bit experiment.}
357
+ \end{subfigure}
358
+ \begin{subfigure}[c]{0.48 \textwidth}
359
+ \includegraphics[trim=1.3in 3.3in 1.3in 3.3in,clip,width=1 \textwidth]{figure/formatted/tree-fixedtime.pdf}
360
+ \subcaption{Fixed-time experiment.}
361
+ \end{subfigure}
362
+ \hskip -3pt
363
+ \vskip -3pt
364
+ \caption{\label{fig:fixed_tree}
365
+ Precision recall curves on the \emph{VideoRec} dataset, retrieving Top-50 items, comparing quantization based methods and tree-quantization hybrid methods. In (a), we conduct a fixed-bit comparison where both the non-hybrid and the hybrid methods use the same 512 bits. The non-hybrid methods are considerably slower in this case (5.97x). In (b), we conduct a fixed-time experiment, where the retrieval time is fixed to be the same as that taken by the hybrid methods (2.256ms). The non-hybrid approaches give much lower accuracy in this case. }
366
+ \vskip -12pt
367
+ \end{figure}
368
+
369
+ \section{Conclusion}
370
+ \label{sec:con}
371
+ \vspace{-3mm}
372
+ We have described a quantization based approach for fast approximate inner product search, which relies on robust learning of codebooks in multiple subspaces. One of the proposed variants leads to a very simple kmeans-like learning procedure and yet outperforms the existing state-of-the-art by a significant margin. We have also introduced novel constraints in the quantization error minimization framework that lead to even better codebooks, tuned to the problem of highest dot-product search. Extensive experiments on retrieval and classification tasks show the advantage of the proposed method over the existing techniques. In the future, we would like to analyze the theoretical guarantees associated with the constrained optimization procedure. In addition, in the tree-quantization hybrid approach, the tree partitioning and the quantization codebooks are trained separately. As a future work, we will consider training them jointly.
373
+
374
+ \section{Appendix}
375
+ \subsection{Additional Experimental Results}
376
+ The results on the \emph{ImageNet} and \emph{VideoRec} datasets for different numbers of top neighbors and different numbers of bits are shown in Figure~\ref{fig:top}. In addition, we compare the performance of our approach against \emph{PCA-Tree}. The recall curves with respect to different numbers of returned neighbors are shown in Figure~\ref{fig:recall}.
377
+
378
+ \begin{figure}
379
+ \vskip -6pt
380
+ \centering
381
+ \begin{subfigure}[c]{1 \textwidth}
382
+ \centering
383
+ \vskip -9pt
384
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top1/inception_pr_8.pdf} \hskip -3pt
385
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/inception_pr_16.pdf} \hskip -3pt
386
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/inception_pr_32.pdf} \hskip -3pt
387
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/inception_pr_64.pdf}
388
+ \vskip -9pt
389
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top5/inception_pr_8.pdf} \hskip -3pt
390
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/inception_pr_16.pdf} \hskip -3pt
391
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/inception_pr_32.pdf} \hskip -3pt
392
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/inception_pr_64.pdf}
393
+ \vskip -9pt
394
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top10/inception_pr_8.pdf} \hskip -3pt
395
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/inception_pr_16.pdf} \hskip -3pt
396
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/inception_pr_32.pdf} \hskip -3pt
397
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/inception_pr_64.pdf}
398
+ \vskip -3pt
399
+ \subcaption{ImageNet dataset, retrieval of Top-1, 5 and 10 items.}
400
+ \end{subfigure}
401
+ \begin{subfigure}[c]{1 \textwidth}
402
+ \centering
403
+ \vskip -3pt
404
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top1/brain_pr_8.pdf} \hskip -3pt
405
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/brain_pr_16.pdf} \hskip -3pt
406
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/brain_pr_32.pdf} \hskip -3pt
407
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top1/brain_pr_64.pdf}
408
+ \vskip -9pt
409
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top5/brain_pr_8.pdf} \hskip -3pt
410
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/brain_pr_16.pdf} \hskip -3pt
411
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/brain_pr_32.pdf} \hskip -3pt
412
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top5/brain_pr_64.pdf}
413
+ \vskip -9pt
414
+ \includegraphics[trim=0.6in 2.5in 0.9in 2.5in,clip,width=.26 \textwidth]{figure/formatted_top10/brain_pr_8.pdf} \hskip -3pt
415
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/brain_pr_16.pdf} \hskip -3pt
416
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/brain_pr_32.pdf} \hskip -3pt
417
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.248 \textwidth]{figure/formatted_top10/brain_pr_64.pdf}
418
+ \vskip -3pt
419
+ \subcaption{VideoRec dataset, retrieval of Top-10, 50 and 100 items.}
420
+ \vskip -3pt
421
+ \end{subfigure}
422
+ \begin{subfigure}[c]{1 \textwidth}
423
+ \end{subfigure}
424
+ \caption{\label{fig:top}
425
+ Precision-recall curves using different methods on \emph{ImageNet} and \emph{VideoRec}.}
426
+ \vskip -9pt
427
+ \end{figure}
428
+
429
+ \begin{figure}
430
+ \begin{subfigure}[c]{.49 \textwidth}
431
+ \includegraphics[trim=0.6in 2.5in 0.75in 2.5in,clip,width=1 \textwidth]{figure/formatted/movielens_recall.pdf}
432
+ \subcaption{Movielens, top-10}
433
+ \end{subfigure}
434
+ \begin{subfigure}[c]{.49 \textwidth}
435
+ \includegraphics[trim=0.6in 2.5in 0.75in 2.5in,clip,width=1 \textwidth]{figure/formatted/netflix_recall.pdf}
436
+ \subcaption{Netflix, top-10}
437
+ \end{subfigure}
438
+ \begin{subfigure}[c]{.49 \textwidth}
439
+ \includegraphics[trim=0.6in 2.5in 0.75in 2.5in,clip,width=1 \textwidth]{figure/formatted/brain_recall.pdf}
440
+ \subcaption{VideoRec, top-50}
441
+ \end{subfigure}
442
+ \begin{subfigure}[c]{.49 \textwidth}
443
+ \includegraphics[trim=0.6in 2.5in 0.75in 2.5in,clip,width=1 \textwidth]{figure/formatted/inception_recall.pdf}
444
+ \subcaption{ImageNet, top-5}
445
+ \end{subfigure}
446
+ \caption{\label{fig:recall}
447
+ Recall curves for different techniques under different numbers of returned neighbors (shown as the percentage of the total number of points in the database). We plot the recall curve instead of the precision-recall curve because \emph{PCA-Tree} uses the original vectors to compute distances; therefore, the precision is the same as the recall in Top-K search. The number of bits used for all the plots is $512$, except for \emph{Signed ALSH-FixedTime}, \emph{L2 ALSH-FixedTime} and \emph{Simple LSH-FixedTime}, which use $1536$ bits. \emph{PCA-Tree} does not perform well on these datasets, mostly due to the fact that the dimensionality of our datasets is relatively high ($150$ to $1025$ dimensions), and trees are known to be more susceptible to high dimensionality. Note that the original paper from Bachrach et al. [2] used datasets with dimensionality $50$.}
448
+ \end{figure}
449
+
450
+ \subsection{Theoretical analysis - proofs}
451
+
452
+
453
+ In this section we present proofs of all the theorems stated in the main body of the paper. We also show some additional theoretical results on our quantization-based method.
454
+
455
+
456
+ \subsubsection{Vectors' balancedness - proof of Theorem \ref{perm_theorem_main}}
457
+
458
+ In this section we prove Theorem \ref{perm_theorem_main}
459
+ and show that one can also obtain the balancedness property with the use of a random rotation.
460
+
461
+
462
+
463
+
464
+ \begin{proof}
465
+ Let us denote $v=(v_{1},...,v_{d})$ and $perm(v)=[B_{1},...,B_{K}]$,
466
+ where $B_{i}$ is the $i$th block ($i=1,...,K$).
467
+ Let us fix some block $B_{j}$.
468
+ For a given $i$ denote by $X^{j}_{i}$ a random variable such that $X^{j}_{i}=v_{i}^{2}$ if $v_{i}$ falls in the block $B_{j}$ after applying the random permutation, and $X^{j}_{i} = 0$ otherwise.
469
+ Notice that a random variable $N^{j} = \sum_{i=1}^{d} X^{j}_{i}$
470
+ captures the part of the squared norm of the vector $v$ that resides in block $B_{j}$. We have:
471
+ \begin{equation}
472
+ E[N^{j}] = \sum_{i=1}^{d} E[X^{j}_{i}] = \sum_{i=1}^{d} \frac{1}{K}v_{i}^{2} = \frac{1}{K}\|v\|^{2}_{2}.
473
+ \end{equation}
474
+
475
+ Since the analysis presented above can be conducted for every block $B_{j}$, this completes the proof.
476
+
477
+ \end{proof}
478
+
479
+ Another possibility is to use a random rotation, which can be performed for instance by applying a
480
+ random normalized Hadamard matrix $\mathcal{H}_{n}$. A Hadamard matrix is a matrix with entries taken from the set $\{-1,1\}$,
481
+ where the rows form an orthogonal system. A random normalized Hadamard matrix can be obtained from it by first multiplying
482
+ by a random diagonal matrix $\mathcal{D}$ (where the entries on the diagonal are taken uniformly and independently from the
483
+ set $\{-1,1\}$) and then rescaling by the factor $\frac{1}{\sqrt{d}}$, where $d$ is the dimensionality of the data. Since the dot
484
+ product is invariant under permutations and rotations, we end up with an equivalent problem.
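+ For illustration, below is a minimal Python/NumPy sketch (not part of our implementation, and assuming $d$ is a power of two) of this randomized rotation: it applies $\frac{1}{\sqrt{d}}\mathcal{H}\mathcal{D}$ to a maximally unbalanced vector and checks empirically how the squared norm spreads across the $K$ blocks.
+ \begin{verbatim}
+ # Illustrative sketch only (assumes d is a power of two); not the paper's code.
+ import numpy as np
+ from scipy.linalg import hadamard
+ 
+ def random_hadamard_rotate(v, rng):
+     d = v.shape[0]
+     H = hadamard(d)                        # entries in {-1, +1}, orthogonal rows
+     D = rng.choice([-1.0, 1.0], size=d)    # random diagonal matrix D
+     return (H @ (D * v)) / np.sqrt(d)      # normalized rotation (1/sqrt(d)) H D v
+ 
+ rng = np.random.default_rng(0)
+ d, K = 1024, 8
+ v = np.zeros(d); v[0] = 1.0                # all mass in a single coordinate
+ w = random_hadamard_rotate(v, rng)
+ block_mass = (w.reshape(K, d // K) ** 2).sum(axis=1)
+ print(block_mass * K)                      # each entry close to 1: w is well balanced
+ \end{verbatim}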
485
+
486
+ If we take the random rotation approach then we have the following:
487
+
488
+ \begin{theorem}
489
+ \label{hadamard_theorem}
490
+ Let $v$ be a vector of dimensionality $d$ and let $0 < \eta <1$.
491
+ Then after applying to $v$ the linear transformation $\mathcal{H}_{n}$, the transformed vector is
492
+ $\eta$-balanced with probability at least $1-2de^{-\frac{(1-\eta)^{2}K^{2}}{2}}$, where $K$ is the number of blocks.
493
+ \end{theorem}
494
+
495
+ \begin{proof}
496
+
497
+ \begin{figure}
498
+ \centering
499
+ \begin{subfigure}[c]{1 \textwidth}
500
+ \includegraphics[trim=0.8in 2.5in 0.9in 2.5in,clip,width=.49 \textwidth]{figure/graph1.pdf}
501
+ \includegraphics[trim=0.8in 2.5in 0.9in 2.5in,clip,width=.49 \textwidth]{figure/graph2.pdf}
502
+ \end{subfigure}
503
+ \caption{Upper bound on the probability
504
+ of an event $A(\eta)$ that a vector $v$ obtained by the random rotation is not $\eta$-balanced as a function of the number of
505
+ subspaces $K$. The left figure corresponds to $\eta=0.75$ and the right one to $\eta=0.5$.
506
+ Different curves correspond to different data dimensionality ($d=128,256,512,1024$).}
507
+ \label{pfail_upper_bound}
508
+ \end{figure}
509
+
510
+ We start with the following Azuma-type concentration inequality, which we will also use later:
511
+
512
+ \begin{lemma}
513
+ \label{conc_lemma}
514
+ Let $X_{1},X_{2},...$ be random variables such that $E[X_{1}]=0$, $E[X_{i}|X_{1},...,X_{i-1}] = 0$
515
+ and $-\alpha_{i} \leq X_{i} \leq \beta_{i}$ for $i=1,2,...$
516
+ and some $\alpha_{1},\alpha_{2},...,\beta_{1},\beta_{2},...>0$. Then the partial sums of $\{X_{1},X_{2},...\}$ form a martingale and the following holds for any $a>0$:
517
+ $$
518
+ \mathbb{P}(\sum_{i=1}^{n}X_{i} \geq a) \leq \exp\left(-\frac{2a^{2}}{\sum_{i=1}^{n}(\alpha_{i}+\beta_{i})^{2}}\right).
519
+ $$
520
+ \end{lemma}
521
+
522
+ Let us denote: $v=(v_{1},...,v_{d})$.
523
+ The $j$th entry of the transformed $v$ is of the form:
524
+ $h_{j,1}v_{1} + ... + h_{j,d}v_{d}$, where $(h_{j,1},...,h_{j,d})$ is the $j$th row of $\mathcal{H}_{n}$ and thus each $h_{j,i}$ (for the fixed $j$) takes uniformly at random and independently a value from the set $\{-\frac{1}{\sqrt{d}},\frac{1}{\sqrt{d}}\}$.
525
+
526
+ Let us consider random variable $Y_{1} = \sum_{j=1}^{\frac{d}{K}} (h_{j,1}v_{1}+...+h_{j,d}v_{d})^{2}$ that captures the squared $L_{2}$-norm of the first block of the transformed vector $v$. We have:
527
+ \begin{equation}
528
+ E[Y_{1}] = \sum_{j=1}^{\frac{d}{K}} (\frac{1}{d}v_{1}^{2}+...+\frac{1}{d}v_{d}^{2})+
529
+ 2\sum_{j=1}^{\frac{d}{K}} \sum_{1\leq i_{1} < i_{2} \leq d}v_{i_{1}}v_{i_{2}}E[h_{j,i_{1}}h_{j,i_{2}}]=\frac{\|v\|_{2}^{2}}{K},
530
+ \end{equation}
531
+
532
+ where the last equality comes from the fact that $E[h_{j,i_{1}}h_{j,i_{2}}]=0$
533
+ for $i_{1} \neq i_{2}$.
534
+ Of course, the same argument is valid for the other blocks; thus we can conclude that, in expectation, the transformed vector is $1$-balanced. Let us now prove some concentration inequalities regarding this result.
535
+ Let us fix some $j \in \{1,...,d\}$.
536
+ Denote $\xi_{i_{1},i_{2}} = v_{i_{1}}v_{i_{2}}h_{j,i_{1}}h_{j,i_{2}}$.
537
+ Let us find an upper bound on the probability $\mathbb{P}(|\sum_{1 \leq i_{1} < i_{2} \leq d} \xi_{i_{1},i_{2}}|>a)$ for some fixed $a>0$.
538
+ We have already noted that $E[\sum_{1 \leq i_{1} < i_{2} \leq d} \xi_{i_{1},i_{2}}]=0$.
539
+
540
+ Thus, by applying Lemma \ref{conc_lemma}, we get the following:
541
+ \begin{equation}
542
+ \mathbb{P}(|\sum_{1 \leq i_{1} < i_{2} \leq d} \xi_{i_{1},i_{2}}|>a) \leq
543
+ 2e^{-\frac{a^{2}d^{2}}{2(\sum_{i=1}^{d} v_{i}^{2})^{2}}}.
544
+ \end{equation}
545
+ Therefore, by the union bound, $\mathbb{P}(|Y_{1}-\frac{\|v\|^{2}_{2}}{K}| > \frac{da}{K}) \leq \frac{2d}{K}e^{-\frac{a^{2}d^{2}}{2(\sum_{i=1}^{d} v_{i}^{2})^{2}}}$.
546
+ Let us fix $\eta > 0$.
547
+ Thus by taking $a=\frac{(1-\eta) K \|v\|^{2}_{2}}{d}$, and again applying the union bound (over all the blocks), we conclude that the transformed vector $v$ is not $\eta$-balanced with probability at most $2de^{-\frac{(1-\eta)^{2}K^{2}}{2}}$.
548
+ That completes the proof.
549
+ \end{proof}
550
+
551
+ The calculated upper bound on the probability of failure from Theorem \ref{hadamard_theorem} as a function of
552
+ the number of blocks $K$ is presented in Fig. \ref{pfail_upper_bound}. We clearly see that the failure probability
553
+ decreases exponentially with the number of blocks $K$.
554
+
555
+ \subsubsection{Proof of Theorem \ref{lipsch_theory_main}}
556
+
557
+ If some boundedness and balancedness conditions regarding datapoints can be assumed, we can
558
+ obtain exponentially-strong concentration results regarding the unbiased estimator considered in the paper.
559
+ Next we show some results that can be obtained even if the boundedness and balancedness conditions do not hold.
560
+ Below we present the proof of Theorem \ref{lipsch_theory_main}.
561
+
562
+ \begin{proof}
563
+
564
+
565
+ Let us define: $\mathcal{Z} = \sum_{k=1}^{K} \mathcal{Z}^{(k)}$, where: $\mathcal{Z}^{(k)} = q^{(k)T}x^{(k)} - q^{(k)T}u_{x}^{(k)}$.
566
+ We have:
567
+ \begin{align}
568
+ \begin{split}
569
+ \label{main_ineq}
570
+ \mathbb{P}(\mathcal{F}(a,\epsilon)) = \mathbb{P}((q^{T}x > a) \land ((q^{T}u_{x}>q^{T}x(1+\epsilon)) \lor (q^{T}u_{x}<q^{T}x(1-\epsilon))))\\
571
+ \leq \mathbb{P}(|q^{T}x-q^{T}u_{x}| > a \epsilon)\\
572
+ =\mathbb{P}(|\sum_{k=1}^{K}(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})| > a \epsilon)\\
573
+ = \mathbb{P}(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| > a\epsilon).
574
+ \end{split}
575
+ \end{align}
576
+
577
+
578
+ Note that from Eq. (\ref{main_ineq}), we get:
579
+ \begin{equation}
580
+ \label{imp_eq}
581
+ \mathbb{P}(\mathcal{F}(a,\epsilon)) \leq \mathbb{P}(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| > a \epsilon).
582
+ \end{equation}
583
+
584
+ Let us fix now the $k$th block ($k=1,...,K$).
585
+ From the $\eta$-balancedness we get that every datapoint truncated to its $k$th block is within distance
586
+ $\gamma = \sqrt{(\frac{1}{K}+(1-\eta))}r$ to $p^{(k)}$ (i.e. $z$ truncated to its $k$th block). Now consider in the linear space related to the $k$th block the ball $\mathcal{B}^{'}(p^{(k)},\gamma)$. Note that
587
+ since the dimensionality of each datapoint truncated to the $k$th block is $\frac{d}{K}$, we can conclude that all datapoints truncated to their $k$th blocks that reside in $\mathcal{B}^{'}(p^{(k)},\gamma)$ can be covered by $C$ balls of radius $r^{'}$ each, where $(\frac{\gamma}{r^{'}})^{\frac{d}{K}} = C$. We take as the set of quantizers $u^{(k)}_{1},...,u^{(k)}_{C}$ for the $k$th block the centers of mass of sets consisting of points from these balls.
588
+ We will show now that sets: $\{u^{(k)}_{1},...,u^{(k)}_{C}\}$ ($k=1,...,K$) defined in such a way are the codebooks we are looking for.
589
+
590
+
591
+
592
+ From the triangle inequality and Cauchy-Schwarz inequality, we get:
593
+ \begin{equation}
594
+ \label{z_bound}
595
+ |\mathcal{Z}^{(k)}| \leq (\max_{q \in Q} \|q^{(k)}\|_{2})(\max_{x \in X} \|x^{(k)}-u_{x}^{(k)}\|_{2}) \leq 2q_{max}r^{'} = 2q_{max}\gamma C^{-\frac{K}{d}}.
596
+ \end{equation}
597
+
598
+ This comes straightforwardly from the way we defined sets:
599
+ $\{u^{(k)}_{1},...,u^{(k)}_{C}\}$ for $k=1,...,K$.
600
+
601
+ \begin{comment}
602
+ We will need now the following definition:
603
+
604
+ \begin{definition}
605
+ \label{lip_defi}
606
+ Let $\Omega = \Omega_{1} \times ... \times \Omega_{K}$ and let $f:\Omega \rightarrow \mathbb{R}$ be a (measurable) function, i.e. a real random variable on $\Omega$.
607
+ We say that the $k$th coordinate has effect at most $c_{k}$ on $f$ if $|f(\omega)-f(\omega^{'})| \leq c_{k}$ for all $\omega$, $\omega^{'} \in \Omega$ that differ only in the $k$th coordinate.
608
+ \end{definition}
609
+ \end{comment}
610
+
611
+ \begin{comment}
612
+ \begin{lemma}
613
+ \label{conc_lemma}
614
+ Let $(\Omega, \Sigma, P)$
615
+ be the product of probability spaces $(\Omega_{i}, \Sigma_{i}, P_{i})$ for $k=1,...,K$ and let $f:\Omega \rightarrow \mathbb{R}$ be a function such that the $k$th
616
+ coordinate has effect at most $c_{k}$. Then:
617
+ $$\mathbb{P}(|f-E[f]| \geq t) \leq 2e^{-\frac{t^{2}}{2\sigma^{2}}},$$
618
+ where $\sigma^{2}=\sum_{k=1}^{K} c_{k}^{2}$.
619
+ \end{lemma}
620
+ \end{comment}
621
+
622
+ Let us take: $X_{i}=\mathcal{Z}^{(i)}$.
623
+ Thus, from (\ref{z_bound}), we see that $\{X_{1},...,X_{K}\}$ defined in such a way satisfies the assumptions of Lemma \ref{conc_lemma} for $c_{k}=2q_{max}\gamma C^{-\frac{K}
624
+ {d}}$.
625
+
626
+
627
+ Therefore, from Lemma \ref{conc_lemma}, we get:
628
+
629
+ \begin{equation}
630
+ \mathbb{P}(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| > a\epsilon) \leq
631
+ 2e^{-(\frac{a \epsilon}{r})^{2}\frac{C^{\frac{2K}{d}}}{8q^{2}_{max}(1+(1-\eta)K)}},
632
+ \end{equation}
633
+
634
+ and that, by (\ref{imp_eq}), completes the proof.
635
+
636
+ The dependence of the probability of failure $\mathcal{F}(a,\epsilon)$
637
+ from Theorem \ref{lipsch_theory_main} on the number of subspaces $K$ is presented in Fig. \ref{f_bound}.
638
+
639
+ \end{proof}
640
+
641
+ \begin{figure}
642
+ \centering
643
+ \begin{subfigure}[c]{1 \textwidth}
644
+ \includegraphics[trim=0.7in 2.5in 0.9in 2.5in,clip,width=.49 \textwidth]{figure/bound1.pdf}
645
+ \includegraphics[trim=0.7in 2.5in 0.9in 2.5in,clip,width=.49 \textwidth]{figure/bound2.pdf}
646
+ \end{subfigure}
647
+ \caption{Upper bound on the probability
648
+ of an event $\mathcal{F}(a,\epsilon)$ as a function of the number of
649
+ subspaces $K$ for $\epsilon=0.2$. The left figure corresponds to $\eta=0.75$ and the right one to $\eta=0.5$.
650
+ Different curves correspond to different data dimensionality ($d=128,256,512,1024$). We assume that the entire dataset lies in the unit ball and
651
+ the norm of $q$ is uniformly split across all $K$ chunks.}
652
+ \label{f_bound}
653
+ \end{figure}
654
+
655
+ \begin{comment}
656
+ Let us see what the result above means in practice. Assume that $d=10$, $b=1$, $R=100$ and we are interested only
657
+ in dot products of order $100^{\frac{7}{4}}$ or more (notice that the maximum dot product if the queries are also taken from $\mathcal{B}(r)$ is of order $O(100^{2})$. Then the probability that the quantization method
658
+ returns datapoint for which dot product differs from the exact one by at least $10\%$ is no more than $3.33\%$.
659
+ \end{comment}
660
+
661
+ \begin{comment}
662
+ \begin{figure}
663
+ \label{pfail_upper_bound}
664
+ \centering
665
+ \begin{subfigure}[c]{1 \textwidth}
666
+ \includegraphics[trim=0.9in 2.5in 0.9in 2.5in,clip,width=.98 \textwidth]{figure/second_plot.pdf}
667
+ \end{subfigure}
668
+ \caption{...}
669
+ \end{figure}
670
+ \end{comment}
671
+
672
+ The following result is of independent interest since it does not assume anything about balancedness or boundedness.
673
+ It shows that minimizing the objective function $L = \sum_{k=1}^{K}L^{(k)}$, where:
674
+ $L^{(k)} = E_{q \sim \mathbf{Q}} [\sum_{S^{(k)}_{c}} \sum_{x\in S^{(k)}_{c}}(q^{(k)T} x^{(k)} - q^{(k)T} u^{(k)}_{x})^2]$,
675
+ leads to concentration results regarding the error made by the algorithm.
676
+
677
+ \begin{theorem}
678
+ \label{variance_theorem}
679
+ The following is true:
680
+ $$\mathbb{P}(\mathcal{F}(a,\epsilon)) \leq \frac{K^{3}\max_{k=1,...,K}L^{(k)}}{|X|a^{2}\epsilon^{2}}.$$
681
+ \end{theorem}
682
+
683
+ \begin{proof}
684
+ Fix some $k \in \{1,...,K\}$.
685
+ Let us consider first the expression
686
+ $L^{(k)} = E_{q \sim \mathbf{Q}} [\sum_{S^{(k)}_{c}} \sum_{x\in S^{(k)}_{c}}(q^{(k)T} x^{(k)} - q^{(k)T} u_{x}^{(k)})^2]$
687
+ that our algorithm aims to minimize. We will show that it is a rescaled version of the variance of the random variable
688
+ $\mathcal{Z}^{(k)}$.
689
+
690
+ We have:
691
+
692
+ \begin{align}
693
+ \begin{split}
694
+ Var(\mathcal{Z}^{(k)}) = E_{q \sim \mathbf{Q}, x \sim \mathbf{X}}[(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})^{2}]
695
+ -(E_{q \sim \mathbf{Q}, x \sim \mathbf{X}}[q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)}])^{2} \\
696
+ =E_{q \sim \mathbf{Q}, x \sim \mathbf{X}}[(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})^{2}],
697
+ \label{var_eq1}
698
+ \end{split}
699
+ \end{align}
700
+
701
+ where the last equality comes from the unbiasedness of the estimator (Lemma \ref{thm:unbiased}).
702
+
703
+ Thus we obtain:
704
+
705
+ \begin{align}
706
+ \begin{split}
707
+ Var(\mathcal{Z}^{(k)}) = E_{q \sim \mathbf{Q}} [\sum_{x \in X} \frac{1}{|X|} (q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})^{2}] =\frac{1}{|X|}L^{(k)}.
708
+ \end{split}
709
+ \end{align}
710
+
711
+
712
+ Therefore, by minimizing $L^{(k)}$ we minimize the variance of the random variable that measures the discrepancy between the exact answer and the quantized answer to the dot-product query for the space truncated to the fixed $k$th block.
713
+ Denote $u_{x}=(u_{x}^{(1)},...,u_{x}^{(K)})$.
714
+ We are ready to give an upper bound on $\mathbb{P}(\mathcal{F}(a,\epsilon))$.
715
+
716
+ We have:
717
+
718
+ \begin{align}
719
+ \begin{split}
720
+ \mathbb{P}(\mathcal{F}(a,\epsilon))
721
+ \leq \mathbb{P}(|q^{T}x-q^{T}u_{x}| > a \epsilon)
722
+ =\mathbb{P}(|\sum_{k=1}^{K}(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})| > a \epsilon)\\
723
+ \leq \mathbb{P}(\sum_{k=1}^{K}|(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})| > a \epsilon) \\
724
+ \leq \mathbb{P}(\exists_{k \in \{1,...,K\}}|q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)})| > \frac{a \epsilon}{K})\\
725
+ \leq \frac{K^{3}\max_{k \in \{1,...,K\}}(Var(q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)}))}{a^{2}\epsilon^{2}}\\
726
+ = \frac{K^{3}\max_{k=1,...,K} Var(\mathcal{Z}^{(k)})}{a^{2}\epsilon^{2}}.
727
+ \end{split}
728
+ \end{align}
729
+
730
+ The last inequality comes from Markov's inequality applied to the random variable $(\mathcal{Z}^{(k)})^{2}$ and the union bound.
731
+ Thus, by applying the obtained bound on $Var(\mathcal{Z}^{(k)})$, we complete the proof.
732
+
733
+ \end{proof}
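+ The identity $Var(\mathcal{Z}^{(k)}) = \frac{1}{|X|}L^{(k)}$ used above can also be checked numerically. The following Python sketch is illustrative only; the nearest-centroid codebook below is an arbitrary choice for the sake of the example, not our training procedure.
+ \begin{verbatim}
+ # Toy check of Var(Z^(k)) = L^(k) / |X| for a single block k.
+ import numpy as np
+ 
+ rng = np.random.default_rng(1)
+ n, d_k, C = 2000, 16, 32                     # |X|, block dimension d/K, codebook size
+ X = rng.normal(size=(n, d_k))                # datapoints truncated to block k
+ U = X[rng.choice(n, size=C, replace=False)]  # crude codebook u_1,...,u_C (assumption)
+ assign = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1).argmin(axis=1)
+ Ux = U[assign]                               # quantized datapoints u_x^(k)
+ 
+ Q = rng.normal(size=(5000, d_k))             # queries q^(k) drawn from Q
+ Z = Q @ (X - Ux).T                           # Z^(k) for every (q, x) pair
+ L_k = np.mean((Z ** 2).sum(axis=1))          # L^(k) = E_q[ sum_x (q^T x - q^T u_x)^2 ]
+ print(Z.var(), L_k / n)                      # the two values should roughly agree
+ \end{verbatim}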
734
+
735
+
736
+ \subsubsection{Independent blocks - the proof of Theorem \ref{ind_theory1_main}}
737
+
738
+ Let us assume that different blocks correspond to independent sets of dimensions.
739
+ Such an assumption is often reasonable in practice.
740
+ If this is the case, we can strengthen our methods for obtaining tight concentration inequalities.
741
+ The proof of Theorem \ref{ind_theory1_main} that covers this scenario is given below.
742
+
743
+ \begin{proof}
744
+ Let us assume first the most general case, when no balancedness is assumed. We begin the proof in the same way as we did
745
+ in the previous section, i.e. fix some $k \in \{1,...,K\}$ and consider random variable $\mathcal{Z}^{(k)}$.
746
+ The goal is again to first find an upper bound on $Var(\mathcal{Z}^{(k)})$.
747
+ From the proof of Theorem \ref{variance_theorem} we get:
748
+ $Var(\mathcal{Z}^{(k)}) = \frac{1}{|X|}L^{(k)}$. Then again, following the proof of
749
+ Theorem \ref{variance_theorem}, we have:
750
+
751
+
752
+ \begin{equation}
753
+ \label{berry_intro}
754
+ \mathbb{P}(\mathcal{F}(a,\epsilon)) \leq \mathbb{P}(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| > a \epsilon)
755
+ \end{equation}
756
+
757
+ We will again bound the expression $\mathbb{P}(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| > a \epsilon)$.
758
+ We will now use the following version of the Berry-Esseen inequality ([11]):
759
+
760
+ \begin{theorem}
761
+ \label{berry}
762
+ Let $\{S_{1},...,S_{n}\}$ be a sequence of independent random variables with mean $0$, not necessarily identically distributed, with finite third moment each.
763
+ Assume that $\sum_{i=1}^{n}E[S_{i}^{2}] = 1$. Define: $W_{n} = \sum_{i=1}^{n} S_{i}$.
764
+ Then the following holds:
765
+ $$
766
+ |\mathbb{P}(W_{n} \leq x) - \phi(x)| \leq \frac{C}{1+|x|^{3}}\sum_{i=1}^{n}E[|S_{i}|^{3}],
767
+ $$
768
+ for every $x$ and some universal constant $C>0$, where
769
+ $\phi(x) = \mathbb{P}(g \leq x)$ and $g \sim \mathcal{N}(0,1)$.
770
+ \end{theorem}
771
+
772
+ Note that if dimensions corresponding to different blocks are independent, then
773
+ $\{\mathcal{Z}^{(1)},...,\mathcal{Z}^{(K)}\}$ is a family of independent random variables. This is the case since every $\mathcal{Z}^{(k)}$ is defined as:
774
+ $\mathcal{Z}^{(k)} = q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)}$.
775
+ Note that we have already noticed that the following holds:
776
+ $E[\mathcal{Z}^{(k)}]=0$.
777
+ Let us take: $S^{(k)} = \frac{\mathcal{Z}^{(k)}}{\sqrt{\sum_{k=1}^{K} Var(\mathcal{Z}^{(k)})}}$. Clearly, we have: $\sum_{k=1}^{K}E[(S^{(k)})^{2}]=1$.
778
+ Besides, random variables $S^{(k)}$ defined in this way are independent and
779
+ $E[S^{(k)}]=0$ for $k=1,...,K$.
780
+ Denote:
781
+
782
+ \begin{equation}
783
+ F = \sum_{k=1}^{K} E[|S^{(k)}|^{3}] = \frac{1}{(\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)}))^{\frac{3}{2}}}\sum_{k=1}^{K} E[|\mathcal{Z}^{(k)}|^{3}]
784
+ \end{equation}
785
+
786
+ Thus, from Theorem \ref{berry} we get:
787
+
788
+ \begin{equation}
789
+ |\mathbb{P}\left(\frac{\sum_{k=1}^{K}\mathcal{Z}^{(k)}}
790
+ {\sqrt{\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)})}} \leq x\right)-\phi(x)| \leq
791
+ \frac{C}{1+|x|^{3}}F.
792
+ \end{equation}
793
+
794
+ Therefore, for every $c>0$ we have:
795
+
796
+ \begin{align}
797
+ \begin{split}
798
+ \mathbb{P}\left(\frac{|\sum_{k=1}^{K}\mathcal{Z}^{(k)}|}
799
+ {\sqrt{\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)})}} > c\right)
800
+ = 1 - \mathbb{P}\left(\frac{\sum_{k=1}^{K}\mathcal{Z}^{(k)}}
801
+ {\sqrt{\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)})}} \leq c\right) \\
802
+ + \mathbb{P}\left(\frac{\sum_{k=1}^{K}\mathcal{Z}^{(k)}}
803
+ {\sqrt{\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)})}} < -c\right) \\
804
+ \leq 1 - \phi(c) + \phi(-c) + \frac{2C}{1+c^{3}}F
805
+ \end{split}
806
+ \end{align}
807
+
808
+ Denote $\hat{\phi}(x) = 1 - \phi(x)$.
809
+ Thus, we have:
810
+
811
+ \begin{align}
812
+ \label{esseen_final}
813
+ \begin{split}
814
+ \mathbb{P}\left(|\sum_{k=1}^{K} \mathcal{Z}^{(k)}| >
815
+ c\sqrt{\sum_{k=1}^{K} Var(\mathcal{Z}^{(k)})}\right) \leq 1-\phi(c)+\phi(-c)+\frac{2C}{1+c^{3}}F \\
816
+ = 2\hat{\phi}(c) + \frac{2C}{1+c^{3}}F \\
817
+ \leq \frac{2}{\sqrt{2 \pi}c}e^{-\frac{c^{2}}{2}}+\frac{2C}{1+c^{3}}F,
818
+ \end{split}
819
+ \end{align}
820
+
821
+ where in the last inequality we used the well-known fact that:
822
+ $\hat{\phi}(x) \leq \frac{1}{\sqrt{2\pi}x}e^{-\frac{x^{2}}{2}}$.
823
+
824
+ If we now take: $c = \frac{a \epsilon}{\sqrt{\sum_{k=1}^{K} Var(\mathcal{Z}^{(k)})}}$,
825
+ then by applying (\ref{esseen_final}) to (\ref{berry_intro}), we get:
826
+ \begin{align}
827
+ \begin{split}
828
+ \mathbb{P}(|\sum_{k=1}^{K}\mathcal{Z}^{(k)}| > a\epsilon) \leq
829
+ \frac{2\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)})}{\sqrt{2 \pi} a\epsilon}
830
+ e^{-\frac{a^{2}\epsilon^{2}}{2(\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)}))^{2}}}\\
831
+ + \frac{2C}{1+(\sum_{k=1}^{K}Var(\mathcal{Z}^{(k)}))^{\frac{3}{2}}}\sum_{k=1}^{K}E[|\mathcal{Z}^{(k)}|^{3}],
832
+ \end{split}
833
+ \end{align}
834
+
835
+ Substituting the exact expression for $Var(\mathcal{Z}^{(k)})$, we get:
836
+
837
+ \begin{equation}
838
+ \label{final_equation}
839
+ \mathbb{P}(|\sum_{k=1}^{K}\mathcal{Z}^{(k)}| > a\epsilon) \leq
840
+ \frac{2\sum_{k=1}^{K}L^{(k)}}{\sqrt{2\pi}|X|a\epsilon}e^{-\frac{a^{2}\epsilon^{2}|X|^{2}}{2(\sum_{k=1}^{K}L^{(k)})^{2}}}+
841
+ \frac{2C(\sum_{k=1}^{K} L^{(k)})^{\frac{3}{2}}}{a^{3}\epsilon^{3}|X|^{\frac{3}{2}}}
842
+ \sum_{k=1}^{K}E[|\mathcal{Z}^{(k)}|^{3}].
843
+ \end{equation}
844
+
845
+ Note that $|\mathcal{Z}^{(k)}| = |q^{(k)T}x^{(k)}-q^{(k)T}u_{x}^{(k)}|=
846
+ |q^{(k)T}(x^{(k)} - u_{x}^{(k)})| \leq
847
+ \|q^{(k)}\|_{2}\|x^{(k)}-u_{x}^{(k)}\|_{2}$.
848
+ The latter expression is at most $q_{max}\Delta$, by the definition of $\Delta$
849
+ and $q_{max}$. Thus we get: $|\mathcal{Z}^{(k)}|^{3} \leq q_{max}^{3}\Delta^{3} \leq a$,
850
+ where the last inequality follows from the assumptions on $\Delta$ from the statement of the theorem. Therefore, from (\ref{final_equation}) we get:
851
+
852
+ \begin{equation}
853
+ \mathbb{P}(|\sum_{k=1}^{K}\mathcal{Z}^{(k)}| > a\epsilon) \leq
854
+ \frac{2\sum_{k=1}^{K}L^{(k)}}{\sqrt{2\pi}|X|a\epsilon}e^{-\frac{a^{2}\epsilon^{2}|X|^{2}}{2(\sum_{k=1}^{K}L^{(k)})^{2}}}+
855
+ \frac{2CK(\sum_{k=1}^{K} L^{(k)})^{\frac{3}{2}}}{a^{2}\epsilon^{3}|X|^{\frac{3}{2}}}.
856
+ \end{equation}
857
+
858
+ Thus, taking into account (\ref{berry_intro}) and putting $\beta = 2C$, we complete the proof.
859
+ \end{proof}
860
+
861
+ {
862
+ \bibliographystyle{ieee}
863
+ \bibliography{thesisrefs}
864
+ }
865
+
866
+
867
+ \end{document}
papers/1509/1509.06825.tex ADDED
@@ -0,0 +1,333 @@
1
+
2
+
3
+ \documentclass[letterpaper, 10 pt, conference]{ieeeconf}
4
+
5
+
6
+
7
+ \IEEEoverridecommandlockouts
8
+
9
+ \overrideIEEEmargins \usepackage{hyperref}
10
+ \usepackage{url}
11
+ \usepackage{cite}
12
+ \usepackage{graphicx}
13
+ \usepackage{gensymb}
14
+ \usepackage{amsmath}
15
+ \usepackage{tabularx}
16
+ \usepackage{color,soul}
17
+
18
+
19
+
20
+
21
+ \title{\LARGE \bf
22
+ Supersizing Self-supervision: Learning to Grasp \\ from 50K Tries and 700 Robot Hours
23
+ }
24
+ \author{Lerrel Pinto and Abhinav Gupta
25
+ \\ The Robotics Institute, Carnegie Mellon University
26
+ \\ \texttt{(lerrelp, abhinavg)@cs.cmu.edu}
27
+ \vspace*{-0.1in}
28
+ }
29
+
30
+ \begin{document}
31
+ \maketitle
32
+
33
+ \thispagestyle{empty}
34
+ \pagestyle{empty}
35
+
36
+
37
+
38
+
39
+ \begin{abstract}
40
+ Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.
41
+ \end{abstract}
42
+ \section{INTRODUCTION}
43
+ Consider the object shown in Fig.~\ref{fig:intro_fig}(a). How do we predict grasp locations for this object? One approach is to fit 3D models to these objects, or to use a 3D depth sensor, and perform analytical 3D reasoning to predict the grasp locations~\cite{brooks1983planning,shimoga1996robot,lozano1989task,nguyen1988constructing}. However, such an approach has two drawbacks: (a) fitting 3D models is an extremely difficult problem by itself; but more importantly, (b) a geometry-based approach ignores the densities and mass distribution of the object, which may be vital in predicting the grasp locations. Therefore, a more practical approach is to use visual recognition to predict grasp locations and configurations, since it does not require explicit modelling of objects. For example, one can create a grasp location training dataset for hundreds and thousands of objects and use standard machine learning algorithms such as CNNs~\cite{le1990handwritten,krizhevsky2012imagenet} or autoencoders~\cite{olshausen1997sparse} to predict grasp locations in the test data. However, creating a grasp dataset using human labeling can itself be quite challenging for two reasons. First, most objects can be grasped in multiple ways, which makes exhaustive labeling impossible (and hence negative data is hard to get; see Fig.~\ref{fig:intro_fig}(b)). Second, human notions of grasping are biased by semantics. For example, humans tend to label handles as the grasp location for objects like cups even though they might be graspable from several other locations and configurations. Hence, a randomly sampled patch cannot be assumed to be a negative data point, even if it was not marked as a positive grasp location by a human. Due to these challenges, even the biggest vision-based grasping dataset~\cite{jiang2011efficient} has only about 1K images of objects in isolation (only {\bf one} object visible without any clutter).
44
+
45
+ \begin{figure}[t!]
46
+ \begin{center}
47
+ \includegraphics[width=3.5in]{intro_fig.png}
48
+ \end{center}
49
+ \vspace{-0.1in}
50
+ \caption{ We present an approach to train robot grasping using 50K trial and error grasps. Some of the sample objects and our setup are shown in (a). Note that each object in the dataset can be grasped in multiple ways (b) and therefore exhaustive human labeling of this task is extremely difficult.}
51
+ \vspace{-0.2in}
52
+ \label{fig:intro_fig}
53
+ \end{figure}
54
+
55
+ In this paper, we break the mold of using manually labeled grasp datasets for training grasp models. We believe such an approach is not scalable. Instead, inspired by reinforcement learning (and human experiential learning), we present a self-supervising algorithm that learns to predict grasp locations via trial and error. But how much training data do we need to train high-capacity models such as Convolutional Neural Networks (CNNs)~\cite{krizhevsky2012imagenet} to predict meaningful grasp locations for new unseen objects? Recent approaches have tried to use reinforcement learning with a few hundred datapoints and learn a CNN with hundreds of thousands of parameters~\cite{levine2015end}. We believe that such an approach, where the amount of training data is substantially smaller than the number of model parameters, is bound to overfit and would fail to generalize to new unseen objects. Therefore, what we need is a way to collect hundreds and thousands of data points (possibly by having a robot interact with objects 24/7) to learn a meaningful representation for this task. But is it really possible to scale trial and error experiments to learn visual representations for the task of grasp prediction?
56
+
57
+ \begin{figure*}[t!]
58
+ \begin{center}
59
+ \includegraphics[width=7in]{data_collection_summary.png}
60
+ \end{center}
61
+ \vspace{-0.1in}
62
+ \caption{Overview of how random grasp actions are sampled and executed.
63
+ }
64
+ \label{fig:data_collection_method}
65
+ \vspace{-0.1in}
66
+ \end{figure*}
67
+
68
+ Given the success of high-capacity learning algorithms such as CNNs, we believe it is time to develop large-scale robot datasets for foundational tasks such as grasping. Therefore, we present a large-scale experimental study that not only substantially increases the amount of data for learning to grasp, but provides complete labeling in terms of whether an object can be grasped at a particular location and angle. This dataset, collected with robot executed interactions, will be released for research use to the community. We use this dataset to fine-tune an AlexNet~\cite{krizhevsky2012imagenet} CNN model pre-trained on ImageNet, with 18M new parameters to learn in the fully connected layers, for the task of prediction of grasp location. Instead of using regression loss, we formulate the problem of grasping as an 18-way binary classification over 18 angle bins. Inspired by the reinforcement learning paradigm~\cite{ross2010reduction,mnih2015human}, we also present a staged-curriculum based learning algorithm where we learn how to grasp, and use the most recently learned model to collect more data.
69
+
70
+
71
+
72
+ The contributions of the paper are three-fold: (a) we introduce one of the largest robot datasets for the task of grasping. Our dataset has more than 50K datapoints and has been collected using 700 hours of trial and error experiments using the Baxter robot. (b) We present a novel formulation of CNN for the task of grasping. We predict grasping locations by sampling image patches and predicting the grasping angle. Note that since an object may be graspable at multiple angles, we model the output layer as an 18-way binary classifier. (c) We present a multi-stage learning approach to collect hard-negatives and learn a better grasping model. Our experiments clearly indicate that a larger amount of data is helpful in learning a better grasping model. We also show the importance of multi-stage learning using ablation studies and compare our approach to several baselines. Real robot testing is performed to validate our method and show generalization to grasping unseen objects. \section{Related Work}
73
+
74
+ Object manipulation is one of the oldest problems in the field of robotics. A comprehensive literature review of this area can be found in~\cite{bicchi2000robotic,bohg2014data}. Early attempts in the field focused on using analytical methods and 3D reasoning for predicting grasp locations and configurations~\cite{brooks1983planning,shimoga1996robot,lozano1989task,nguyen1988constructing}. These approaches assumed the availability of complete knowledge of the objects to be grasped, such as the complete 3D model of the given object, along with the object's surface friction properties and mass distribution. However, perception and inference of 3D models and other attributes such as friction/mass from RGB or RGBD cameras is an extremely difficult problem. To solve these problems people have constructed grasp databases \cite{goldfeder2009columbia,kootstra2012visgrab}. Grasps are sampled and ranked based on similarities to grasp instances in a pre-existing database. These methods however do not generalize well to objects outside the database.
75
+
76
+ Other approaches to predict grasping include using simulators such as Graspit!\cite{miller2004graspit,miller2003automatic}. In these approaches, one samples grasp candidates and ranks them based on an analytical formulation. However, questions often arise as to how well a simulated environment mirrors the real world. \cite{bohg2014data,diankov2010automated,weisz2012pose} offer reasons as to why a simulated environment and an analytic metric would not parallel the real world, which is highly unstructured.
77
+
78
+ Recently, there has been more focus on using visual learning to predict grasp locations directly from RGB or RGB-D images~\cite{saxena2008robotic,montesano2012active}. For example, \cite{saxena2008robotic} uses vision-based features (edge and texture filter responses) and learns a logistic regressor over synthetic data. On the other hand, \cite{lenz2013deep,ramisa2012using} use human-annotated grasp data to train grasp synthesis models over RGB-D data. However, as discussed above, large-scale collection of training data for the task of grasp prediction is not trivial and has several issues. Therefore, none of the above approaches are scalable to use big data.
79
+
80
+ Another common way to collect data for robotic tasks is using the robot's own trial and error experiences ~\cite{morales2004using,detry2009learning,Paolini_2014_7585}. However, even recent approaches such as~\cite{levine2015end,levine2015learning} only use a few hundred trial and error runs to train high capacity deep networks. We believe this causes the network to overfit and often no results are shown on generalizability to new unseen objects. Other approaches in this domain such as \cite{BoulariasBS15} use reinforcement learning to learn grasp attributes over depth images of a cluttered scene. However the grasp attributes are based on supervoxel segmentation and facet detection. This creates a prior on grasp synthesis and may not be desirable for complex objects.
81
+
82
+ Deep neural networks have seen immense success in image classification \cite{krizhevsky2012imagenet} and object detection \cite{girshick2014rich}. Deep networks have also been exploited in robotics systems for grasp regression \cite{redmon2014real} or for learning policies for a variety of tasks~\cite{levine2015learning}. Furthermore, DAgger~\cite{ross2010reduction} shows a simple and practical method of sampling the interesting regions of a state space by dataset aggregation. In this paper, we propose an approach to scale up the learning from a few hundred examples to thousands of examples. We present an end-to-end self-supervising staged curriculum learning system that uses thousands of trial-and-error runs to learn deep networks. The learned deep network is then used to collect greater amounts of positive and hard negative (patches the model thinks are graspable but in general are not) data, which helps the network to learn faster. \section{Approach}
83
+
84
+
85
+ We first explain our robotic grasping system and how we use it to collect more than 50K data points.
86
+ Given these training data points, we train a CNN-based classifier which, given an input image patch, predicts the grasp likelihood for different grasp directions. Finally, we explain our staged-curriculum learning framework, which helps our system find hard negatives: data points on which the model performs poorly and which hence cause a high loss with a stronger backpropagation signal.
87
+
88
+ \noindent {\bf Robot Grasping System:} Our experiments are carried out on a Baxter robot from Rethink Robotics and we use ROS \cite{quigley2009ros} as our development system. For gripping, we use the stock two-fingered parallel gripper with a maximum width (open state) of $75$mm and a minimum width (close state) of $37$mm.
89
+
90
+ A Kinect V2 is attached to the head of the robot and provides a $1920\times1280$ resolution image of the workspace (a dull white colored table-top). Furthermore, a $1280\times720$ resolution camera is attached onto each of Baxter's end effectors, which provides rich images of the objects Baxter interacts with. For the purposes of trajectory planning, a stock Expansive Space Tree (EST) planner \cite{sucan2012the-open-motion-planning-library} is used. It should be noted that we use both of the robot's arms to collect the data more quickly.
91
+
92
+ During experiments, human involvement is limited to switching on the robot and placing the objects on the table in an arbitrary manner. Apart from initialization, we have {\bf no human involvement} in the process of data collection. Also, in order to gather data as close to real-world test conditions as possible, we perform trial and error grasping experiments in a cluttered environment.
93
+ Grasped objects, on being dropped, at times bounce/roll off the robot workspace; however, using cluttered environments also ensures that the robot always has an object to grasp. This experimental setup negates the need for constant human supervision. The Baxter robot is also robust against breakdown, with experiments running for 8-10 hours a day.
94
+
95
+ \noindent {\bf Gripper Configuration Space and Parametrization:} In this paper, we focus on planar grasps only. A planar grasp is one where the grasp configuration is along and perpendicular to the workspace. Hence the grasp configuration lies in 3 dimensions: $(x,y)$, the position of the grasp point on the surface of the table, and $\theta$, the angle of grasp.
96
+
97
+
98
+
99
+ \subsection{Trial and Error Experiments}
100
+
101
+
102
+ The data collection methodology is succinctly described in Fig.~\ref{fig:data_collection_method}. The workspace is first set up with multiple objects of varying graspability placed haphazardly on a table with a dull white background. Multiple random trials are then executed in succession.
103
+
104
+ A single instance of a random trial goes as follows:
105
+
106
+ \textbf{Region of Interest Sampling:} An image of the table, queried from the head-mounted Kinect, is passed through an off-the-shelf Mixture of Gaussians (MOG) background subtraction algorithm that identifies regions of interest in the image. This is done solely to reduce the number of random trials in empty spaces without objects in the vicinity. A random region in this image is then selected to be the region of interest for the specific trial instance.
107
+
108
+ \textbf{Grasp Configuration Sampling:} Given a specific region of interest, the robot arm moves to $25$cm above the object. Now a random point is uniformly sampled from the space in the region of interest. This will be the robot's grasp point. To complete the grasp configuration, an angle is now chosen randomly in the range $(0,\pi)$ since the two-fingered gripper is symmetric.
109
+
110
+ \textbf{Grasp Execution and Annotation:} Now given the grasp configuration, the robot arm executes a pick grasp on the object. The object is then raised by $20$cm and annotated as a success or a failure depending on the gripper's force sensor readings.
111
+
112
+ Images from all the cameras, robot arm trajectories and gripping history are recorded to disk during the execution of these random trials.
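+ A simplified Python sketch of one such random trial is given below (illustrative only: the OpenCV background subtractor would in practice be initialized with images of the empty table, and the actual system runs through ROS):
+ \begin{verbatim}
+ # Sketch of sampling one random grasp configuration (x, y, theta).
+ import numpy as np
+ import cv2
+ 
+ mog = cv2.createBackgroundSubtractorMOG2()   # off-the-shelf MOG background subtraction
+ 
+ def sample_random_grasp(table_image, rng):
+     mask = mog.apply(table_image)            # foreground mask: regions of interest
+     ys, xs = np.nonzero(mask)
+     if len(xs) == 0:
+         return None                          # nothing on the table, skip the trial
+     i = rng.integers(len(xs))                # random point inside a region of interest
+     x, y = int(xs[i]), int(ys[i])
+     theta = rng.uniform(0.0, np.pi)          # symmetric gripper: angles in [0, pi)
+     return x, y, theta
+ # The arm then moves 25 cm above (x, y), executes the grasp at angle theta,
+ # raises the object by 20 cm, and labels success from the force sensor reading.
+ \end{verbatim}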
113
+
114
+ \begin{figure}[t!]
115
+ \begin{center}
116
+ \includegraphics[width=3.4in]{grasp_config.png}
117
+ \end{center}
118
+ \vspace{-0.1in}
119
+ \caption{(a) We use 1.5 times the gripper size image patch to predict the grasp-ability of a location and the angle at which it can be grasped. Visualization for showing the grasp location and the angle of gripper for grasping is derived from \cite{jiang2011efficient}. (b) At test time we sample patches at different positions and choose the top graspable location and corresponding gripper angle.}
120
+ \vspace{-0.1in}
121
+ \label{fig:grasp_config}
122
+ \end{figure}
123
+ \subsection{Problem Formulation}
124
+ \begin{figure*}[t!]
125
+ \begin{center}
126
+ \includegraphics[width=7in]{training_data.png}
127
+ \end{center}
128
+ \vspace{-0.15in}
129
+ \caption{Sample patches used for training the Convolutional Neural Network.}
130
+ \label{fig:training_data}
131
+ \end{figure*}
132
+
133
+
134
+
135
+ The grasp synthesis problem is formulated as finding a successful grasp configuration $(x_S,y_S,\theta_S)$ given an image of an object $I$.
136
+ A grasp on the object can be visualised using the rectangle representation \cite{jiang2011efficient} in Fig.~\ref{fig:grasp_config}. In this paper, we use CNNs to predict grasp locations and angle. We now explain the input and output to the CNN.
137
+
138
+
139
+ \noindent {\bf Input:} The input to our CNN is an image patch extracted around the grasp point. For our experiments, we use patches 1.5 times as large as the projection of gripper fingertips on the image, to include context as well. The patch size used in experiments is 380x380. This patch is resized to 227x227 which is the input image size of the ImageNet-trained AlexNet~\cite{krizhevsky2012imagenet}.
140
+
141
+ \noindent {\bf Output:} One can treat the grasping problem as a regression problem: that is, given an input image predict $(x,y,\theta)$. However, this formulation is problematic since: (a) there are multiple grasp locations for each object; (b) CNNs are significantly better at classification than at regressing to a structured output space. Another possibility is to formulate this as a two-step classification: that is, first learn a binary classifier model that classifies the patch as graspable or not and then select the grasp angle for positive patches. However, the graspability of an image patch is a function of the angle of the gripper, and therefore an image patch can be labeled as both graspable and non-graspable.
142
+
143
+ Instead, in our case, given an image patch we estimate an 18-dimensional likelihood vector where each dimension represents the likelihood of whether the center of the patch is graspable at $0^{\circ}$, $10^{\circ}$, \dots $170^{\circ}$. Therefore, our problem can be thought of as an 18-way binary classification problem.
144
+
145
+ \noindent {\bf Testing:} Given an image patch, our CNN outputs whether an object is graspable at the center of the patch for the 18 grasping angles. At test time on the robot, given an image, we sample grasp locations and extract patches which are fed into the CNN. For each patch, the output is 18 values which depict the graspability scores for each of the 18 angles. We select the maximum score across all angles and all patches, and execute the grasp at the corresponding grasp location and angle.
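+ The selection rule can be summarized by the following Python sketch (illustrative; \texttt{predict\_angle\_scores} stands in for a forward pass of the trained CNN on a resized $227\times227$ patch):
+ \begin{verbatim}
+ # Test-time grasp selection: highest score over all sampled patches and angle bins.
+ import numpy as np
+ 
+ def select_grasp(image, patch_centers, predict_angle_scores, patch_size=380):
+     best, h = None, patch_size // 2
+     for (x, y) in patch_centers:                 # sampled grasp locations
+         patch = image[y - h:y + h, x - h:x + h]  # 380x380 crop around the candidate
+         scores = predict_angle_scores(patch)     # 18 scores, one per 10-degree bin
+         j = int(np.argmax(scores))
+         if best is None or scores[j] > best[0]:
+             best = (float(scores[j]), x, y, j * 10.0)
+     return best                                  # (score, x, y, angle in degrees)
+ \end{verbatim}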
146
+
147
+ \subsection{Training Approach}
148
+ \noindent{\bf Data preparation:} Given a trial experiment datapoint $(x_i, y_i, \theta_i)$, we sample a 380x380 patch with $(x_i,y_i)$ as the center. To increase the amount of data seen by the network, we use rotation transformations: rotate the dataset patches by $\theta_{rand}$ and label the corresponding grasp orientation as $\{\theta_{i} + \theta_{rand}\}$. Some of these patches can be seen in Fig.~\ref{fig:training_data}.
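+ A minimal Python sketch of this augmentation (illustrative; the exact interpolation and cropping details of our pipeline are omitted) is:
+ \begin{verbatim}
+ # Rotate a training patch by a random angle and shift its grasp-angle label.
+ import numpy as np
+ import cv2
+ 
+ def augment(patch, theta_deg, rng, n_bins=18):
+     theta_rand = rng.uniform(0.0, 180.0)
+     h, w = patch.shape[:2]
+     M = cv2.getRotationMatrix2D((w / 2, h / 2), theta_rand, 1.0)
+     rotated = cv2.warpAffine(patch, M, (w, h))          # rotated 380x380 patch
+     new_theta = (theta_deg + theta_rand) % 180.0        # symmetric gripper: mod 180
+     return rotated, int(new_theta // (180.0 / n_bins))  # patch and its angle-bin label
+ \end{verbatim}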
149
+
150
+
151
+ \noindent {\bf Network Design:} Our CNN, seen in Fig.~\ref{fig:network}, is a standard network architecture: our first five convolutional layers are taken from the AlexNet~\cite{krizhevsky2012imagenet,jia2014caffe} pretrained on ImageNet. We also use two fully connected layers with 4096 and 1024 neurons, respectively. The two fully connected layers, fc6 and fc7, are trained with Gaussian initialisation.
152
+
153
+ \begin{figure*}[t!]
154
+ \begin{center}
155
+ \includegraphics[trim=0.2in 4.5in 0.4in 2in, clip=true, width=7in]{network_arch.pdf}
156
+ \end{center}
157
+ \vspace{-0.3in}
158
+ \caption{Our CNN architecture is similar to AlexNet~\cite{krizhevsky2012imagenet}. We initialize our convolutional layers from ImageNet-trained Alexnet.}
159
+ \vspace{-0.1in}
160
+ \label{fig:network}
161
+ \end{figure*}
162
+
163
+ \noindent{\bf Loss Function:} The loss of the network is formalized as follows. Given a batch size $B$ and a patch instance $P_i$, let the label corresponding to angle $\theta_i$ be defined by $l_i\in \{0,1\}$ and the forward-pass binary activations on the angle bin $j$ be $A_{ji}$ (a vector of length 2); we define our batch loss $L_B$ as:
164
+
165
+ \begin{equation}
166
+ L_B = \sum\limits_{i=1}^B\sum\limits_{j=1}^{N=18}\delta(j,\theta_i)\cdotp \textrm{softmax}(A_{ji},l_i)
167
+ \end{equation}
168
+
169
+ where $\delta(j,\theta_i) = 1$ when $\theta_i$ corresponds to the $j^{th}$ bin. Note that the last layer of the network involves 18 binary layers instead of one multiclass layer to predict the final graspability scores. Therefore, for a single patch, only the loss corresponding to the trial angle bin is backpropagated.
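+ For concreteness, an approximate PyTorch sketch of this network and loss is given below (our model was implemented in Caffe, so layer and training details differ; the torchvision weights argument shown is an assumption about the reader's library version):
+ \begin{verbatim}
+ # Approximate sketch: pretrained AlexNet conv layers, fc6/fc7 with 4096/1024
+ # units, 18 independent 2-way outputs, loss applied only to the executed bin.
+ import torch
+ import torch.nn as nn
+ import torchvision
+ 
+ class GraspNet(nn.Module):
+     def __init__(self, n_bins=18):
+         super().__init__()
+         # (use pretrained=True instead on older torchvision versions)
+         alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1")
+         self.features = alexnet.features                 # pretrained conv1-conv5
+         self.head = nn.Sequential(
+             nn.Flatten(),
+             nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),      # fc6
+             nn.Linear(4096, 1024), nn.ReLU(),             # fc7
+             nn.Linear(1024, n_bins * 2))                  # 18 binary (2-way) outputs
+         self.n_bins = n_bins
+ 
+     def forward(self, x):                                 # x: (B, 3, 227, 227)
+         return self.head(self.features(x)).view(-1, self.n_bins, 2)
+ 
+ def grasp_loss(logits, angle_bin, success):
+     # logits: (B, 18, 2); angle_bin, success: (B,) integer tensors.
+     picked = logits[torch.arange(logits.size(0)), angle_bin]  # executed bin only
+     return nn.functional.cross_entropy(picked, success)       # softmax loss on that bin
+ \end{verbatim}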
170
+
171
+ \subsection{Staged Learning}
172
+ Given the network trained on the random trial experience dataset, the robot now uses this model as a prior on grasping. At this stage of data collection, we use both previously seen objects and novel objects. This ensures that in the next iteration, the robot corrects for incorrect grasp modalities while reinforcing the correct ones. Fig.~\ref{fig:patch_proposal} shows how top ranked patches from a learned model focus more on important regions of the image compared to random patches. Using novel objects further enriches the model and avoids over-fitting.
173
+ \begin{figure}[t!]
174
+ \begin{center}
175
+ \includegraphics[width=3.4in]{patch_proposal.png}
176
+ \end{center}
177
+ \vspace{-0.15in}
178
+ \caption{Highly ranked patches from learnt algorithm (a) focus more on the objects in comparison to random patches (b).}
179
+ \vspace{-0.2in}
180
+ \label{fig:patch_proposal}
181
+ \end{figure}
182
+
183
+ Note that for every object grasp trial at this stage, 800 patches are randomly sampled and evaluated by the deep network learnt in the previous iteration. This produces an $800\times18$ grasp-ability prior matrix where entry ($i,j$) corresponds to the network activation on the $j^{th}$ angle bin for the $i^{th}$ patch. Grasp execution is now decided by importance sampling over the grasp-ability prior matrix.
184
+
185
+ Inspired by data aggregation techniques\cite{ross2010reduction}, during training of iteration $k$, the dataset $D_k$ is given by $\{D_k\} = \{D_{k-1},\Gamma d_{k}\}$, where $d_{k}$ is the data collected using the model from iteration $k-1$. Note that $D_0$ is the random grasp dataset and iteration $0$ is simply trained on $D_0$. The importance factor $\Gamma$ is kept at 3 as a design choice.
186
+
187
+ The deep network to be used for the $k^{th}$ stage is trained by finetuning the previously trained network with dataset $D_k$. The learning rate for iteration $0$ is chosen as 0.01 and the network is trained over 20 epochs. The remaining iterations are trained with a learning rate of 0.001 over 5 epochs.
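+ The staged collection and aggregation step can be summarized by the Python sketch below (illustrative; the exact normalization used for importance sampling over the prior matrix is an assumption, as only the general scheme is specified above):
+ \begin{verbatim}
+ # One staged-learning iteration: importance-sample a grasp from the 800x18
+ # prior matrix of the previous model, then aggregate with Gamma = 3.
+ import numpy as np
+ 
+ def importance_sample_grasp(prior, rng):
+     # prior: (800, 18) activations for the sampled patches and angle bins.
+     p = prior - prior.min()
+     p = p.flatten() / p.sum()                  # normalize to a probability distribution
+     idx = rng.choice(p.size, p=p)
+     patch_idx, angle_bin = divmod(idx, prior.shape[1])
+     return patch_idx, angle_bin
+ 
+ def aggregate(D_prev, d_new, gamma=3):
+     # D_k = {D_{k-1}, Gamma * d_k}: replicate the fresh trials gamma times.
+     return D_prev + gamma * d_new              # lists of (patch, angle_bin, label)
+ \end{verbatim}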
188
+
189
+
190
+
191
+
192
+
193
+
194
+
195
+
196
+
197
+
198
+
199
+
200
+ \section{Results}
201
+ \subsection{Training dataset}
202
+ The training dataset is collected over 150 objects with varying graspability. A subset of these objects can be seen in Fig.~\ref{fig:table_top}. At the time of data collection, we use a cluttered table rather than objects in isolation. Through our large data collection and learning approach, we collect 50K grasp experience interactions. A brief summary of the data statistics can be found in Table \ref{tab:grasp_data_stat}.
203
+
204
+
205
+ \begin{figure}[t!]
206
+ \begin{center}
207
+ \includegraphics[width=3in]{table_top_objects.png}
208
+ \end{center}
209
+ \vspace{-0.15in}
210
+ \caption{Random Grasp Sampling Scenario: Our data is collected in clutter rather than objects in isolation. This allows us to generalize and tackle tasks like clutter removal.}
211
+ \vspace{-0.15in}
212
+ \label{fig:table_top}
213
+ \end{figure}
214
+
215
+ \begin{table}[h]
216
+ \caption{Grasp Dataset Statistics}
217
+ \label{tab:grasp_data_stat}
218
+ \centering
219
+ \begin{tabular}{l|c|c|c|c}
220
+ \hline
221
+ \multicolumn{1}{|l|}{{\bf \begin{tabular}{@{}c@{}}Data Collection \\ Type\end{tabular}}} & {\bf Positive} & {\bf Negative} & {\bf Total } & \multicolumn{1}{c|}{{\bf Grasp Rate}} \\ \hline
222
+ \multicolumn{1}{|l|}{Random Trials} & 3,245 & 37,042 & 40,287 & \multicolumn{1}{c|}{8.05\%} \\ \hline
223
+ \multicolumn{1}{|l|}{Multi-Staged} & 2,807 & 4,500 & 7,307 & \multicolumn{1}{c|}{38.41\%} \\ \hline
224
+ \multicolumn{1}{|l|}{Test Set} & 214 & 2,759 & 2,973 & \multicolumn{1}{c|}{7.19\%} \\ \hline
225
+ \multicolumn{1}{c|}{} & {\bf 6,266} & {\bf 44,301} & {\bf 50,567} & \\ \cline{2-4}
226
+ \end{tabular}
227
+ \end{table}
228
+
229
+ \subsection{Testing and evaluation setting}
230
+ For comparisons with baselines and to understand the relative importance of the various components in our learning method, we report results on a held-out test set with objects not seen in training (Fig.~\ref{fig:novel_test_objects}). Grasps in the test set are collected via 3K physical robot interactions on 15 novel and diverse test objects in multiple poses. Note that this test set is balanced by random sampling from the collected robot interactions. The accuracy measure used for evaluation is binary classification accuracy, i.e., given a patch and an executed grasp angle from the test set, predicting whether the object was grasped or not.
231
+
232
+ Evaluation by this method preserves two important aspects for grasping: (a) It ensures that the test data is exactly the same for every comparison, which isn't possible with real robot experiments. (b) The data is from a real robot, which means methods that work well on this test set should work well on the real robot. Our deep-learning-based approach followed by multi-stage reinforcement yields an accuracy of \textbf{79.5\%} on this test set. A summary of the baselines can be seen in Table~\ref{Tab:baseline_comp}.
233
+
234
+ We finally demonstrate evaluation in the real robot settings for grasping objects in isolation and show results on clearing a clutter of objects.
235
+
236
+ \subsection{Comparison with heuristic baselines}
237
+
238
+ \begin{table*}[]
239
+ \centering
240
+ \caption{Comparing our method with baselines}
241
+ \label{Tab:baseline_comp}
242
+ \begin{tabular}{cccc|cccc}
243
+ \hline
244
+ & \multicolumn{3}{c|}{Heuristic} & \multicolumn{4}{c}{Learning based} \\ \hline
245
+ & \begin{tabular}[c]{@{}c@{}}Min \\ eigenvalue\end{tabular} & \begin{tabular}[c]{@{}c@{}}Eigenvalue \\ limit\end{tabular} & \begin{tabular}[c]{@{}c@{}}Optimistic \\ param. select\end{tabular} & kNN & SVM & \begin{tabular}[c]{@{}c@{}}Deep Net\\ (ours)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Deep Net + Multi-stage\\ (ours)\end{tabular} \\ \hline
246
+ Accuracy & 0.534 & 0.599 & 0.621 & 0.694 & 0.733 & 0.769 & \textbf{0.795}
247
+ \end{tabular}
248
+ \end{table*}
249
+
250
+ A strong baseline is the ``common-sense'' heuristic which is discussed in \cite{katz2014perceiving}. The heuristic, modified for the RGB image input task, encodes obvious grasping rules:
251
+ \begin{enumerate}
252
+ \item Grasp about the center of the patch. This rule is implicit in our formulation of patch-based grasping.
253
+ \item Grasp about the smallest object width. This is implemented via object segmentation followed by eigenvector analysis. The heuristic's optimal grasp is chosen along the direction of the smallest eigenvalue. If the successful grasp executed in the test set is within an error threshold of the heuristic grasp, the prediction is counted as a success. This leads to an accuracy of 53.4\%.
254
+ \item Do not grasp objects that are too thin, since the gripper doesn't close completely. If the largest eigenvalue is smaller than the mapping of the gripper's minimum width in image space, the heuristic predicts no viable grasps; i.e., no object is large enough to be grasped. This leads to an accuracy of 59.9\%.
255
+ \end{enumerate}
256
+ By iterating over all possible parameters (error thresholds and eigenvalue limits) in the above heuristic over the test set, the maximal accuracy obtained was 62.11\%, which is significantly lower than our method's accuracy. The low accuracy is understandable since the heuristic doesn't work well for objects in clutter.
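+ For reference, a Python sketch of rules 2 and 3 of this heuristic is given below (illustrative only; the extent estimate and the pixel threshold are placeholders rather than the exact parameters searched over above):
+ \begin{verbatim}
+ # Eigen-analysis heuristic: grasp about the object's center, along the
+ # direction of the smallest eigenvalue; reject objects thinner than the
+ # closed gripper (min_width_px is a placeholder threshold).
+ import numpy as np
+ 
+ def heuristic_grasp(object_mask, min_width_px=30.0):
+     ys, xs = np.nonzero(object_mask)                    # segmented object pixels
+     pts = np.stack([xs, ys], axis=1).astype(float)
+     evals, evecs = np.linalg.eigh(np.cov(pts, rowvar=False))  # ascending eigenvalues
+     if 4.0 * np.sqrt(evals[1]) < min_width_px:          # rule 3: largest extent too small
+         return None
+     cx, cy = pts.mean(axis=0)                           # rule 1: center of the object
+     v = evecs[:, 0]                                     # rule 2: smallest-eigenvalue direction
+     theta = np.degrees(np.arctan2(v[1], v[0])) % 180.0
+     return cx, cy, theta
+ \end{verbatim}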
257
+
258
+ \subsection{Comparison with learning based baselines}
259
+ We now compare with a couple of learning-based algorithms. We use HoG~\cite{dalal2005histograms} features in both the following baselines since they preserve rotational variance, which is important for grasping:
260
+ \begin{enumerate}
261
+ \item k Nearest Neighbours (kNN): For every element in the test set, kNN-based classification is performed over elements in the training set that belong to the same angle class. The maximal accuracy over varying k (optimistic kNN) is 69.4\%.
262
+ \item Linear SVM: 18 binary SVMs are learnt, one for each of the 18 angle bins (see the sketch after this list). After choosing regularisation parameters via validation, the maximal accuracy obtained is 73.3\%.
263
+ \end{enumerate}
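+ A sketch of the SVM baseline is given below (illustrative; the HoG and regularisation settings shown are placeholders, not the validated ones):
+ \begin{verbatim}
+ # 18 per-angle-bin linear SVMs over HoG features of the grasp patches.
+ import numpy as np
+ from skimage.feature import hog
+ from sklearn.svm import LinearSVC
+ 
+ def extract_hog(patches):
+     # patches: list of grayscale patch arrays.
+     return np.array([hog(p, orientations=9, pixels_per_cell=(16, 16),
+                          cells_per_block=(2, 2)) for p in patches])
+ 
+ def train_angle_svms(features, angle_bins, labels, n_bins=18, C=1.0):
+     svms = []
+     for j in range(n_bins):                    # one binary SVM per 10-degree bin
+         mask = (angle_bins == j)
+         clf = LinearSVC(C=C).fit(features[mask], labels[mask])
+         svms.append(clf)
+     return svms
+ \end{verbatim}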
264
+
265
+ \subsection{Ablative analysis}
266
+
267
+ \subsubsection*{Effects of data} It is seen in Fig.~\ref{fig:data_size_effect} that adding more data definitely helps in increasing accuracy. This increase is more prominent up to about 20K data points, after which the increase is small.
268
+
269
+ \begin{figure}[t!]
270
+ \begin{center}
271
+ \includegraphics[trim=0in 2in 0in 2in, clip=true, width=3.5in]{data_comp.pdf}
272
+
273
+ \end{center}
274
+ \vspace{-0.1in}
275
+ \caption{Comparison of the performance of our learner over different training set sizes. Clear improvements in accuracy can be seen in both seen and unseen objects with increasing amounts of data.}
276
+ \vspace{-0.15in}
277
+ \label{fig:data_size_effect}
278
+ \end{figure}
279
+
280
+
281
+
282
+ \subsubsection*{Effects of pretraining} An important question is how much of a boost using a pretrained network gives. Our experiments suggest that this boost is significant: from an accuracy of 64.6\% for a network trained from scratch to 76.9\% for the pretrained network. This means that visual features learnt from the task of image classification~\cite{krizhevsky2012imagenet} aid the task of grasping objects.
283
+
284
+ \subsubsection*{Effects of multi-staged learning} After one stage of reinforcement, testing accuracy increases from 76.9\% to 79.3\%. This shows the effect of hard negatives in training, where just 2K grasps improve the model more than 20K random grasps do. However, this improvement in accuracy saturates at 79.5\% after 3 stages.
285
+
286
+ \subsubsection*{Effects of data aggregation} We notice that without aggregating data, i.e., when training the grasp model only with data from the current stage, accuracy falls from 76.9\% to 72.3\%.
287
+
288
+ \subsection{Robot testing results}
289
+ Testing is performed over novel objects never seen by the robot before as well as some objects previously seen by the robot. Some of the novel objects can be seen in Fig.~\ref{fig:novel_test_objects}.
290
+ \begin{figure}[t!]
291
+ \begin{center}
292
+ \includegraphics[width=3.3in]{robot_tasks.png}
293
+ \end{center}
294
+ \vspace{-0.13in}
295
+ \caption{Robot Testing Tasks: At test time we use both novel objects and training objects under different conditions. Clutter removal is performed to show the robustness of the grasping model.}
296
+ \vspace{-0.21in}
297
+ \label{fig:novel_test_objects}
298
+ \end{figure}
299
+
300
+
301
+
302
+ \noindent\textbf{Re-ranking Grasps:} One of the issues with Baxter is the precision of the arm. Therefore, to account for this imprecision, we sample the top 10 grasps and re-rank them based on neighborhood analysis: given an instance ($P_{topK}^i$,$\theta_{topK}^i$) of a top patch, we further sample 10 patches in the neighborhood of $P_{topK}^i$. The average of the best angle scores for the neighborhood patches is assigned as the new patch score $R_{topK}^i$ for the grasp configuration defined by ($P_{topK}^i$,$\theta_{topK}^i$). The grasp configuration associated with the largest $R_{topK}^i$ is then executed. This step ensures that even if the execution of the grasp is off by a few millimeters, it should still succeed.
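+
+ A schematic of this re-ranking step is given below; \texttt{angle\_scores} stands in for the CNN's 18 angle outputs evaluated at a patch location, and the neighborhood radius and sample count are illustrative values.
+ \begin{verbatim}
+ import numpy as np
+
+ def rerank_and_select(top_grasps, angle_scores, radius_px=10, n_samples=10):
+     """Re-rank top-K grasp candidates by the average best angle score over
+     patches sampled around each candidate center (sketch).
+     top_grasps: list of (np.array([x, y]), theta)."""
+     rng = np.random.default_rng(0)
+     best = None
+     for center, theta in top_grasps:           # (P_topK^i, theta_topK^i)
+         offsets = rng.uniform(-radius_px, radius_px, size=(n_samples, 2))
+         score = np.mean([np.max(angle_scores(center + d)) for d in offsets])
+         if best is None or score > best[0]:
+             best = (score, center, theta)      # neighborhood score R_topK^i
+     return best[1], best[2]                    # grasp with the largest R_topK^i
+ \end{verbatim}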
303
+
304
+ \noindent\textbf{Grasp Results:} We test the learnt grasp model both on novel objects and on training objects under different pose conditions. A subset of the objects grasped, along with failures in grasping, can be seen in Fig.~\ref{fig:Testing_results_viz}. Note that some of the grasps, such as the red gun in the second row, are reasonable but still unsuccessful because the gripper size is not compatible with the width of the object. At other times, even though the grasp is ``successful'', the object falls out due to slipping (green toy-gun in the third row). Finally, the imprecision of Baxter sometimes causes failures in precision grasps.
305
+ Overall, of the 150 tries, Baxter grasps and raises novel objects to a height of 20 cm at a success rate of {\bf 66\%}. The grasping success rate for previously seen objects but in different conditions is {\bf 73\%}.
306
+
307
+ \noindent\textbf{Clutter Removal:} Since our data collection involves objects in clutter, we show that our model works not only on objects in isolation but also on the challenging task of clutter removal~\cite{BoulariasBS15}. We attempted 5 tries at removing a clutter of 10 objects drawn from a mix of novel and previously seen objects. On average, Baxter successfully clears the clutter in 26 interactions.
308
+
309
+
310
+
311
+
312
+
313
+
314
+ \begin{figure*}[t!]
315
+ \begin{center}
316
+ \includegraphics[width=7in]{testing_results_viz.png}
317
+
318
+ \end{center}
319
+ \vspace{-0.15in}
320
+ \caption{Grasping Test Results: We demonstrate the grasping performance on both novel and seen objects. On the left (green border), we show successful grasps executed by the Baxter robot. On the right (red border), we show some of the failed grasps. Overall, our robot achieves a 66\% grasp rate on novel objects and 73\% on seen objects.}
321
+ \vspace{-0.15in}
322
+ \label{fig:Testing_results_viz}
323
+ \end{figure*}
324
+
325
+ \section{Conclusion}
326
+ We have presented a framework to self-supervise the task of robot grasping and shown that large-scale trial-and-error experiments are now possible. Unlike traditional grasping datasets/experiments, which use a few hundred examples for training, we increase the training data 40x and collect 50K tries over 700 robot hours. Because of the scale of data collection, we show that we can train a high-capacity convolutional network for this task. Even though we initialize using an ImageNet pre-trained network, our CNN has 18M new parameters to be trained. We compare our learnt grasp network to baselines and perform ablative studies for a deeper understanding of grasping. We finally show that our network has good generalization performance, with a grasp rate of $66\%$ for novel objects. While this is just a small step in bringing big data to the field of robotics, we hope it will inspire the creation of several other public datasets for robot interactions.
327
+
328
+ \section*{ACKNOWLEDGMENT}
329
+ This work was supported by ONR MURI N000141010934 and NSF IIS-1320083.
330
+
331
+ \bibliographystyle{unsrt}
332
+ \bibliography{references}
333
+ \end{document}
papers/1510/1510.00726.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1511/1511.06335.tex ADDED
@@ -0,0 +1,486 @@
1
+
2
+
3
+ \documentclass{article}
4
+
5
+ \usepackage{times}
6
+ \usepackage{graphicx} \usepackage{hyperref}
7
+ \usepackage{url}
8
+ \usepackage{amssymb}
9
+ \usepackage{graphicx}
10
+ \usepackage{caption}
11
+ \usepackage{subcaption}
12
+ \usepackage{sidecap}
13
+ \usepackage{slashbox}
14
+ \usepackage{natbib}
15
+
16
+ \usepackage{algorithm}
17
+ \usepackage{algorithmic}
18
+
19
+ \usepackage{hyperref}
20
+
21
+
22
+
23
+ \usepackage[accepted]{icml2016}
24
+
25
+
26
+
27
+
28
+ \icmltitlerunning{Unsupervised Deep Embedding for Clustering Analysis}
29
+
30
+ \begin{document}
31
+
32
+ \twocolumn[
33
+ \icmltitle{Unsupervised Deep Embedding for Clustering Analysis}
34
+
35
+ \icmlauthor{Junyuan Xie}{jxie@cs.washington.edu}
36
+ \icmladdress{University of Washington}
37
+ \icmlauthor{Ross Girshick}{rbg@fb.com}
38
+ \icmladdress{Facebook AI Research (FAIR)}
39
+ \icmlauthor{Ali Farhadi}{ali@cs.washington.edu}
40
+ \icmladdress{University of Washington}
41
+
42
+ \icmlkeywords{deep learning, machine learning}
43
+
44
+ \vskip 0.3in
45
+ ]
46
+
47
+
48
+ \begin{abstract}
49
+ Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms.
50
+ Relatively little work has focused on learning representations for clustering.
51
+ In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks.
52
+ DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective.
53
+ Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.
54
+ \end{abstract}
55
+
56
+
57
+ \section{Introduction}
58
+ Clustering, an essential data analysis and visualization tool, has been studied
59
+ extensively in unsupervised machine learning from different perspectives: What
60
+ defines a cluster? What is the right distance metric? How to efficiently group
61
+ instances into clusters? How to validate clusters? And so on. Numerous different
62
+ distance functions and embedding methods have been explored in the literature.
63
+ Relatively little work has focused on the unsupervised learning of the feature
64
+ space in which to perform clustering.
65
+
66
+ A notion of \emph{distance} or \emph{dissimilarity} is central to data
67
+ clustering algorithms. Distance, in turn, relies on representing the data in
68
+ a feature space. The $k$-means clustering algorithm~\citep{macqueen1967some},
69
+ for example, uses the Euclidean distance between points in a given feature
70
+ space, which for images might be raw pixels or gradient-orientation histograms. The choice of feature space is customarily left as an application-specific detail for the end-user to determine. Yet it is clear that the choice of feature space is crucial; for all but the simplest image datasets, clustering with Euclidean distance on raw pixels is completely ineffective.
71
+ In this paper, we revisit cluster analysis and ask: \emph{Can we use a data driven approach to solve for the feature space and cluster memberships jointly?}
72
+
73
+
74
+ We take inspiration from recent work on deep learning for computer vision~\citep{krizhevsky2012imagenet,girshick2014rich,zeiler2014visualizing,long2014fully}, where clear gains on benchmark tasks have resulted from learning better features.
75
+ These improvements, however, were obtained with \emph{supervised} learning, whereas our goal is \emph{unsupervised} data clustering.
76
+ To this end, we define a parameterized non-linear mapping from the data space $X$ to a lower-dimensional feature space $Z$, where we optimize a clustering objective.
77
+ Unlike previous work, which operates on the data space or a shallow linear embedded space, we use stochastic gradient descent (SGD) via backpropagation on a clustering objective to learn the mapping, which is parameterized by a deep neural network.
78
+ We refer to this clustering algorithm as \emph{Deep Embedded Clustering}, or DEC.
79
+
80
+ Optimizing DEC is challenging.
81
+ We want to simultaneously solve for cluster assignment and the underlying feature representation.
82
+ However, unlike in supervised learning, we cannot train our deep network with labeled data.
83
+ Instead we propose to iteratively refine clusters with an auxiliary target distribution derived from the current soft cluster assignment.
84
+ This process gradually improves the clustering as well as the feature representation.
85
+
86
+ Our experiments show significant improvements over state-of-the-art clustering methods in terms of both accuracy and running time on image and textual datasets.
87
+ We evaluate DEC on MNIST~\citep{lecun1998gradient}, STL~\citep{coates2011analysis}, and REUTERS~\citep{lewis2004rcv1}, comparing it with standard and state-of-the-art clustering methods~\citep{nie2011spectral,yang2010image}.
88
+ In addition, our experiments show that DEC is significantly less sensitive to the choice of hyperparameters compared to state-of-the-art methods.
89
+ This robustness is an important property of our clustering algorithm since, when applied to real data, supervision is not available for hyperparameter cross-validation.
90
+
91
+ Our contributions are:
92
+ (a) joint optimization of deep embedding and clustering;
93
+ (b) a novel iterative refinement via soft assignment;
94
+ and (c) state-of-the-art clustering results in terms of clustering accuracy and
95
+ speed.
96
+ Our Caffe-based~\citep{jia2014caffe} implementation of DEC is available at \url{https://github.com/piiswrong/dec}.
97
+
98
+ \section{Related work}
99
+ Clustering has been extensively studied in machine learning in terms of feature selection~\citep{boutsidis2009unsupervised,liu2005toward,alelyani2013feature}, distance functions~\citep{xing2002distance,xiang2008learning}, grouping methods~\citep{macqueen1967some,von2007tutorial,li2004entropy}, and cluster validation~\citep{halkidi2001clustering}.
100
+ Space does not allow for a comprehensive literature study and we refer readers to~\citep{aggarwal2013data} for a survey.
101
+
102
+ One branch of popular methods for clustering is $k$-means~\citep{macqueen1967some} and Gaussian Mixture Models (GMM)~\citep{bishop2006pattern}.
103
+ These methods are fast and applicable to a wide range of problems.
104
+ However, their distance metrics are limited to the original data space and they tend to be ineffective when input dimensionality is high~\citep{steinbach2004challenges}.
105
+
106
+ Several variants of $k$-means have been proposed to address issues with higher-dimensional input spaces.
107
+ \citet{de2006discriminative,ye2008discriminative} perform joint dimensionality reduction and clustering by first clustering the data with $k$-means and then projecting the data into a lower-dimensional space where the inter-cluster variance is maximized.
108
+ This process is repeated in EM-style iterations until convergence.
109
+ However, this framework is limited to linear embedding; our method employs deep neural networks to perform non-linear embedding that is necessary for more complex data.
110
+
111
+ Spectral clustering and its variants have gained popularity recently~\citep{von2007tutorial}.
112
+ They allow more flexible distance metrics and generally perform better than $k$-means.
113
+ Combining spectral clustering and embedding has been explored in \citet{yang2010image,nie2011spectral}.
114
+ \citet{tian2014learning} proposes an algorithm based on spectral clustering, but replaces the eigenvalue decomposition with a deep autoencoder, which improves performance but further increases memory consumption.
115
+
116
+ Most spectral clustering algorithms need to compute the full graph Laplacian matrix and therefore have quadratic or super quadratic complexities in the number of data points.
117
+ This means they need specialized machines with large memory for any dataset larger than a few tens of thousands of points.
118
+ In order to scale spectral clustering to large datasets, approximate algorithms were invented to trade off performance for speed~\citep{yan2009fast}.
119
+ Our method, however, is linear in the number of data points and scales gracefully to large datasets.
120
+
121
+
122
+ Minimizing the Kullback-Leibler (KL) divergence between a data distribution and
123
+ an embedded distribution has been used for data visualization and dimensionality
124
+ reduction~\citep{van2008visualizing}. t-SNE, for instance, is a non-parametric
125
+ algorithm in this school, and a parametric variant of
126
+ t-SNE~\citep{maaten2009learning} uses a deep neural network to parametrize the
127
+ embedding. The complexity of t-SNE is $O(n^2)$, where $n$ is the number of data points, but it can be approximated in $O(n\log n)$~\citep{van2014accelerating}.
128
+
129
+ We take inspiration from parametric t-SNE. Instead of minimizing KL divergence to produce an embedding that is faithful to distances in the original data space, we define a centroid-based probability distribution and minimize its KL divergence to an auxiliary target distribution to simultaneously improve clustering assignment and feature representation. A centroid-based method also has the benefit of reducing complexity to $O(nk)$, where $k$ is the number of centroids.
130
+ \section{Deep embedded clustering}
131
+ Consider the problem of clustering a set of $n$ points $\{x_i \in X\}_{i=1}^n$ into $k$ clusters, each represented by a centroid $\mu_j, j = 1,\ldots,k$.
132
+ Instead of clustering directly in the \emph{data space} $X$, we propose to first transform the data with a non-linear mapping $f_\theta: X \rightarrow Z$, where $\theta$ are learnable parameters and $Z$ is the latent \emph{feature space}.
133
+ The dimensionality of $Z$ is typically much smaller than that of $X$ in order to avoid the ``curse of dimensionality''~\citep{bellman61}.
134
+ To parametrize $f_\theta$, deep neural networks (DNNs) are a natural choice due to their theoretical function approximation properties~\citep{hornik1991approximation} and their demonstrated feature learning capabilities~\citep{bengio2013representation}.
135
+
136
+ The proposed algorithm (DEC) clusters data by \emph{simultaneously} learning a set of $k$ cluster centers $\{\mu_j \in Z\}_{j=1}^{k}$ in the feature space $Z$ and the parameters $\theta$ of the DNN that maps data points into $Z$. DEC has two phases: (1) parameter initialization with a deep autoencoder~\citep{vincent2010stacked} and (2) parameter optimization (i.e., clustering), where we iterate between computing an auxiliary target distribution and minimizing the Kullback--Leibler (KL) divergence to it. We start by describing phase (2) parameter optimization/clustering, given an initial estimate of $\theta$ and $\{\mu_j\}_{j=1}^{k}$.
137
+
138
+ \subsection{Clustering with KL divergence}
139
+ Given an initial estimate of the non-linear mapping $f_\theta$ and the initial cluster centroids $\{\mu_j\}_{j=1}^{k}$, we propose to improve the clustering using an unsupervised algorithm that alternates between two steps.
140
+ In the first step, we compute a soft assignment between the embedded points and the cluster centroids. In the second step, we update the deep mapping $f_\theta$ and refine the cluster centroids by learning from current high confidence assignments using an auxiliary target distribution.
141
+ This process is repeated until a convergence criterion is met.
142
+
143
+ \subsubsection{Soft Assignment}
144
+ Following \citet{van2008visualizing} we use the Student's $t$-distribution as a kernel to measure the similarity between embedded point $z_i$ and centroid $\mu_j$:
145
+ \begin{equation}
146
+ q_{ij} = \frac{(1+\Vert z_i - \mu_j\Vert^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{j'} (1+\Vert z_i - \mu_{j'}\Vert^2/\alpha)^{-\frac{\alpha+1}{2}}},
147
+ \end{equation}
148
+ where $z_i = f_\theta(x_i) \in Z$ corresponds to $x_i \in X$ after embedding, $\alpha$ are the degrees of freedom of the Student's $t$-distribution and $q_{ij}$ can be interpreted as the probability of assigning sample $i$ to cluster $j$ (i.e., a soft assignment). Since we cannot cross-validate $\alpha$ on a validation set in the unsupervised setting, and learning it is superfluous~\citep{maaten2009learning}, we let $\alpha = 1$ for all experiments.
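+
+ For illustration, the soft assignment can be computed as in the following NumPy sketch (separate from our Caffe implementation); $\alpha=1$ as in all our experiments.
+ \begin{verbatim}
+ import numpy as np
+
+ def soft_assignment(Z, mu, alpha=1.0):
+     """q[i, j]: Student's t-kernel similarity between embedded point z_i
+     (rows of Z) and centroid mu_j (rows of mu), normalized over clusters."""
+     dist_sq = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (n, k)
+     q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
+     return q / q.sum(axis=1, keepdims=True)
+ \end{verbatim}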
149
+
150
+ \subsubsection{KL divergence minimization}
151
+ We propose to iteratively refine the clusters by learning from their high confidence assignments with the help of an auxiliary target distribution.
152
+ Specifically, our model is trained by matching the soft assignment to the target distribution.
153
+ To this end, we define our objective as a KL divergence loss between the soft assignments $q_i$ and the auxiliary distribution $p_i$ as follows:
154
+ \begin{equation}
155
+ L = \mathrm{KL}(P\Vert Q) = \sum_i \sum_j p_{ij}\log \frac{p_{ij}}{q_{ij}}.
156
+ \end{equation}
157
+ The choice of target distributions $P$ is crucial for DEC's performance. A naive
158
+ approach would be setting each $p_{i}$ to a delta distribution (to the nearest
159
+ centroid) for data points above a confidence threshold and ignoring the rest.
160
+ However, because $q_i$ are soft assignments, it is more natural and flexible to
161
+ use softer probabilistic targets. Specifically, we would like our target distribution to
162
+ have the following properties: (1) strengthen predictions (i.e., improve cluster
163
+ purity), (2) put more emphasis on data points assigned with high confidence, and
164
+ (3) normalize loss contribution of each centroid to prevent large clusters from distorting the hidden feature space.
165
+
166
+ In our experiments, we compute $p_i$ by first raising $q_i$ to the second power and then normalizing by frequency per cluster:
167
+ \begin{equation}
168
+ p_{ij} = \frac{q_{ij}^2/f_j}{\sum_{j'} q_{ij'}^2/f_{j'}},
169
+ \end{equation}
170
+ where $f_j = \sum_i q_{ij}$ are soft cluster frequencies. Please refer to section~\ref{sec:objective} for discussions on empirical properties of $L$ and $P$.
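+
+ Continuing the NumPy sketch above, the target distribution follows directly from this formula:
+ \begin{verbatim}
+ def target_distribution(q):
+     """p[i, j]: squared and frequency-normalized soft assignment."""
+     weight = q ** 2 / q.sum(axis=0)                  # q_ij^2 / f_j
+     return weight / weight.sum(axis=1, keepdims=True)
+ \end{verbatim}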
171
+
172
+ Our training strategy can be seen as a form of self-training~\cite{nigam2000analyzing}. As in self-training, we take an initial classifier and an unlabeled dataset, then label the dataset with the classifier in order to train on its own high confidence predictions. Indeed, in experiments we observe that DEC improves upon the initial estimate in each iteration by learning from high confidence predictions, which in turn helps to improve low confidence ones.
173
+
174
+ \subsubsection{Optimization}
175
+ We jointly optimize the cluster centers $\{\mu_j\}$ and DNN parameters $\theta$ using Stochastic Gradient Descent (SGD) with momentum. The gradients of $L$ with respect to feature-space embedding of each data point $z_i$ and each cluster centroid $\mu_j$ are computed as:
176
+ \begin{eqnarray}
177
+ \frac{\partial L}{\partial z_i} &=& \frac{\alpha+1}{\alpha}\sum_j(1+\frac{\Vert z_i - \mu_j\Vert^2}{\alpha})^{-1}\\
178
+ &&\;\;\;\;\times(p_{ij}-q_{ij})(z_i-\mu_j),\nonumber\\
179
+ \frac{\partial L}{\partial \mu_j} &=& -\frac{\alpha+1}{\alpha}\sum_i(1+\frac{\Vert z_i - \mu_j\Vert^2}{\alpha})^{-1}\\
180
+ &&\;\;\;\;\times(p_{ij}-q_{ij})(z_i-\mu_j).\nonumber
181
+ \end{eqnarray}
182
+ The gradients $\partial L / \partial z_i$ are then passed down to the DNN and used in standard backpropagation to compute the DNN's parameter gradient $\partial L / \partial \theta$.
183
+ For the purpose of discovering cluster assignments, we stop our procedure when less than $\mathit{tol}\%$ of points change cluster assignment between two consecutive iterations.
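+
+ A NumPy sketch of these two gradients, with the target distribution $P$ held fixed, is given below; in practice the same quantities can also be obtained by automatic differentiation.
+ \begin{verbatim}
+ import numpy as np
+
+ def kl_gradients(Z, mu, p, q, alpha=1.0):
+     """Gradients of L = KL(P||Q) w.r.t. embedded points z_i (rows of Z)
+     and centroids mu_j (rows of mu), treating P as constant."""
+     diff = Z[:, None, :] - mu[None, :, :]                    # (n, k, d)
+     inv = 1.0 / (1.0 + (diff ** 2).sum(axis=2) / alpha)      # (n, k)
+     coeff = (alpha + 1.0) / alpha * inv * (p - q)            # (n, k)
+     grad_z = (coeff[:, :, None] * diff).sum(axis=1)          # dL/dz_i
+     grad_mu = -(coeff[:, :, None] * diff).sum(axis=0)        # dL/dmu_j
+     return grad_z, grad_mu
+ \end{verbatim}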
184
+
185
+ \begin{table*}[!t]
186
+ \centering
187
+ \caption{Dataset statistics.}
188
+ \begin{tabular}{l|c|c|c|c}
189
+ Dataset & \# Points & \# classes & Dimension & \% of largest class \\ \hline \hline
190
+ MNIST \cite{lecun1998gradient} & 70000 & 10 & 784 & 11\% \\ \hline
191
+ STL-10 \cite{coates2011analysis} & 13000 & 10 & 1428 & 10\% \\ \hline
192
+ REUTERS-10K & 10000 & 4 & 2000 & 43\% \\ \hline
193
+ REUTERS \cite{lewis2004rcv1} & 685071 & 4 & 2000 & 43\% \\
194
+ \end{tabular}
195
+ \label{table:dataset}
196
+ \end{table*}
197
+
198
+ \subsection{Parameter initialization}
199
+ \begin{figure}[t!]
200
+ \centering
201
+ \includegraphics[width=0.8\linewidth]{network.pdf}
202
+ \caption{Network structure}
203
+ \label{fig:network}
204
+ \end{figure}
205
+
206
+ Thus far we have discussed how DEC proceeds given initial estimates of the DNN parameters $\theta$ and the cluster centroids $\{\mu_j\}$.
207
+ Now we discuss how the parameters and centroids are initialized.
208
+
209
+ We initialize DEC with a stacked autoencoder (SAE) because recent research has shown that they consistently produce semantically meaningful and well-separated representations on real-world datasets~\citep{vincent2010stacked,hinton2006reducing,le2013building}. Thus the unsupervised representation learned by SAE naturally facilitates the learning of clustering representations with DEC.
210
+
211
+ We initialize the SAE network layer by layer with each layer being a denoising autoencoder
212
+ trained to reconstruct the previous layer's output after random
213
+ corruption~\citep{vincent2010stacked}. A denoising autoencoder is a two layer neural network defined as:
214
+ \begin{eqnarray}
215
+ \tilde x \sim \mathit{Dropout}(x)\\
216
+ h = g_1(W_1\tilde x+b_1)\\
217
+ \tilde h \sim \mathit{Dropout}(h)\\
218
+ y = g_2(W_2\tilde h + b_2)
219
+ \end{eqnarray}
220
+ where $\mathit{Dropout}(\cdot)$~\citep{srivastava2014dropout} is a stochastic mapping that randomly sets a portion of its input dimensions to 0, $g_1$ and $g_2$ are the activation functions of the encoding and decoding layers respectively, and $\theta=\{W_1, b_1, W_2, b_2\}$ are model parameters. Training is performed by minimizing the least-squares loss $\Vert x - y \Vert_2^2$. After training one layer, we use its output $h$ as the input for training the next layer.
221
+ We use rectified linear units (ReLUs)~\citep{nair2010rectified} in all encoder/decoder pairs, except for $g_2$ of the \emph{first} pair (it needs to reconstruct input data that may have positive and negative values, such as zero-mean images) and $g_1$ of the \emph{last} pair (so the final data embedding retains full information~\citep{vincent2010stacked}).
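+
+ A minimal PyTorch-style sketch of one such denoising autoencoder is given below (our actual implementation uses Caffe); the layer sizes, optimizer settings and training loop are illustrative.
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+
+ class DenoisingLayer(nn.Module):
+     """One greedy pretraining stage: corrupt, encode, corrupt, decode.
+     Pass nn.Identity() for g2 of the first pair or g1 of the last pair."""
+     def __init__(self, d_in, d_hid, drop=0.2, g1=None, g2=None):
+         super().__init__()
+         self.drop = nn.Dropout(drop)
+         self.enc = nn.Linear(d_in, d_hid)      # W1, b1
+         self.dec = nn.Linear(d_hid, d_in)      # W2, b2
+         self.g1 = g1 or nn.ReLU()
+         self.g2 = g2 or nn.ReLU()
+
+     def forward(self, x):
+         h = self.g1(self.enc(self.drop(x)))
+         y = self.g2(self.dec(self.drop(h)))
+         return h, y
+
+ def pretrain(layer, batches, lr=0.1, momentum=0.9):
+     """Minimize the least-squares reconstruction loss ||x - y||^2."""
+     opt = torch.optim.SGD(layer.parameters(), lr=lr, momentum=momentum)
+     for x in batches:
+         _, y = layer(x)
+         loss = ((x - y) ** 2).sum(dim=1).mean()
+         opt.zero_grad(); loss.backward(); opt.step()
+     return layer
+ \end{verbatim}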
222
+
223
+ After greedy layer-wise training, we concatenate all encoder layers followed by all decoder layers, in reverse layer-wise training order, to form a deep autoencoder and then finetune it to minimize reconstruction loss. The final result is a multilayer deep autoencoder with a bottleneck coding layer in the middle. We then discard the decoder layers and use the encoder layers as our initial mapping between the data space and the feature space, as shown in Fig.~\ref{fig:network}.
224
+
225
+ To initialize the cluster centers, we pass the data through the initialized DNN to get embedded data points and then perform standard $k$-means clustering in the feature space $Z$ to obtain $k$ initial centroids $\{\mu_j\}_{j=1}^k$.
226
+ \section{Experiments}
227
+
228
+ \begin{figure*}[!ht]
229
+ \includegraphics[width=\textwidth]{accplot.pdf}
230
+ \caption{Clustering accuracy for different hyperparameter choices for each algorithm.
231
+ DEC outperforms other methods and is more robust to hyperparameter changes compared to either LDGMI or SEC.
232
+ Robustness is important because cross-validation is not possible in real-world applications of cluster analysis. This figure is best viewed in color.}
233
+ \label{fig:acc}
234
+ \end{figure*}
235
+
236
+ \begin{table*}[!ht]
237
+ \centering
238
+ \caption{Comparison of clustering accuracy (Eq. \ref{eqn:acc}) on four datasets.}
239
+ \begin{tabular}{l|c|c|c|c}
240
+ Method & MNIST & STL-HOG & REUTERS-10k & REUTERS \\ \hline \hline
241
+ $k$-means & 53.49\% & 28.39\% & 52.42\% & 53.29\% \\ \hline
242
+ LDMGI & 84.09\% & 33.08\% & 43.84\% & N/A \\ \hline
243
+ SEC & 80.37\% & 30.75\% & 60.08\% & N/A \\ \hline
244
+ DEC w/o backprop & 79.82\% & 34.06\% & 70.05\% & 69.62\% \\ \hline
245
+ DEC (ours) & \textbf{84.30\%} & \textbf{35.90\%} & \textbf{72.17\%} & \textbf{75.63\%}
246
+ \end{tabular}
247
+ \label{table:acc}
248
+ \end{table*}
249
+
250
+ \subsection{Datasets}
251
+ We evaluate the proposed method (DEC) on one text dataset and two image datasets and compare it against other algorithms including $k$-means, LDGMI~\citep{yang2010image}, and SEC~\citep{nie2011spectral}.
252
+ LDGMI and SEC are spectral clustering based algorithms that use a Laplacian matrix and various transformations to improve clustering performance.
253
+ Empirical evidence reported in \citet{yang2010image,nie2011spectral} shows that LDMGI and SEC outperform traditional spectral clustering methods on a wide range of datasets.
254
+ We show qualitative and quantitative results that demonstrate the benefit of DEC compared to LDGMI and SEC.
255
+
256
+ In order to study the performance and generality of different algorithms, we perform experiments on two image datasets and one text dataset:
257
+ \begin{itemize}
258
+ \item \textbf{MNIST}: The MNIST dataset consists of 70000 handwritten digits of 28-by-28 pixel size. The digits are centered and size-normalized~\citep{lecun1998gradient}.
259
+ \item \textbf{STL-10}: A dataset of 96-by-96 color images. There are 10 classes with 1300 examples each. It also contains 100000 unlabeled images of the same resolution~\citep{coates2011analysis}. We also used the unlabeled set when training our autoencoders. Similar to \citet{doersch2012makes}, we concatenated HOG features and an 8-by-8 color map to use as input to all algorithms.
260
+ \item \textbf{REUTERS}: Reuters contains about 810000 English news stories labeled with a category tree~\citep{lewis2004rcv1}. We used the four root categories: corporate/industrial, government/social, markets, and economics as labels and further pruned all documents that are labeled by multiple root categories to get 685071 articles. We then computed tf-idf features on the 2000 most frequently occurring word stems. Since some algorithms do not scale to the full Reuters dataset, we also sampled a random subset of 10000 examples, which we call REUTERS-10k, for comparison purposes.
261
+ \end{itemize}
262
+ A summary of dataset statistics is shown in Table \ref{table:dataset}.
263
+ For all algorithms, we normalize all datasets so that $\frac{1}{d}\Vert x_i \Vert_2^2$ is approximately 1, where $d$ is the dimensionality of the data space point $x_i \in X$.
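+
+ One simple way to apply this normalization, assuming the data is held as a dense $n \times d$ array, is a single global rescaling:
+ \begin{verbatim}
+ import numpy as np
+
+ def normalize(X):
+     """Rescale so that (1/d) * ||x_i||^2 is approximately 1 on average."""
+     d = X.shape[1]
+     scale = np.sqrt(np.mean(np.sum(X ** 2, axis=1)) / d)
+     return X / scale
+ \end{verbatim}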
264
+
265
+ \subsection{Evaluation Metric}
266
+ We use the standard unsupervised evaluation metric and protocols for evaluations and comparisons to other algorithms \cite{yang2010image}.
267
+ For all algorithms we set the number of clusters to the number of ground-truth categories and evaluate performance with \emph{unsupervised clustering accuracy ($\mathit{ACC}$)}:
268
+ \begin{equation}\label{eqn:acc}
269
+ \mathit{ACC} = \max_m \frac{\sum_{i=1}^n \mathbf{1}\{l_i = m(c_i)\}}{n},
270
+ \end{equation}
271
+ where $l_i$ is the ground-truth label, $c_i$ is the cluster assignment produced by the algorithm, and $m$ ranges over all possible one-to-one mappings between clusters and labels.
272
+
273
+ Intuitively this metric takes a cluster assignment from an \emph{unsupervised}
274
+ algorithm and a ground truth assignment and then finds the best matching between them.
275
+ The best mapping can be efficiently computed by the Hungarian algorithm~\citep{kuhn1955hungarian}.
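+
+ Concretely, $\mathit{ACC}$ can be computed as in the sketch below, using SciPy's implementation of the Hungarian algorithm and assuming integer labels and cluster ids starting at 0.
+ \begin{verbatim}
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def clustering_accuracy(y_true, y_pred):
+     """Best one-to-one match between cluster ids and ground-truth labels."""
+     k = int(max(y_true.max(), y_pred.max())) + 1
+     count = np.zeros((k, k), dtype=np.int64)
+     for t, c in zip(y_true, y_pred):
+         count[c, t] += 1                        # cluster/label co-occurrences
+     rows, cols = linear_sum_assignment(-count)  # maximize total matches
+     return count[rows, cols].sum() / y_true.size
+ \end{verbatim}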
276
+
277
+
278
+ \subsection{Implementation}
279
+
280
+ \begin{figure*}[!ht]
281
+ \centering
282
+ \begin{subfigure}[t]{0.40\textwidth}
283
+ \centering
284
+ \includegraphics[width=\linewidth]{viz_mnist_short.png}
285
+ \caption{MNIST}
286
+ \end{subfigure}
287
+ ~\hspace{1cm}
288
+ \begin{subfigure}[t]{0.40\textwidth}
289
+ \centering
290
+ \includegraphics[width=\linewidth]{viz_stl_short.jpg}
291
+ \caption{STL-10}
292
+ \end{subfigure}
293
+ \caption{Each row contains the top 10 scoring elements from one cluster.}
294
+ \label{fig:top10}
295
+ \end{figure*}
296
+
297
+ Determining hyperparameters by cross-validation on a validation set is not an option in unsupervised clustering.
298
+ Thus we use commonly used parameters for DNNs and avoid dataset specific tuning as much as possible.
299
+ Specifically, inspired by \citet{maaten2009learning}, we set network dimensions to $d$--500--500--2000--10 for all datasets, where $d$ is the data-space dimension, which varies between datasets.
300
+ All layers are densely (fully) connected.
301
+
302
+ During greedy layer-wise pretraining we initialize the weights to random numbers drawn from a zero-mean Gaussian distribution with a standard deviation of 0.01.
303
+ Each layer is pretrained for 50000 iterations with a dropout rate of $20\%$.
304
+ The entire deep autoencoder is further finetuned for 100000 iterations without dropout.
305
+ For both layer-wise pretraining and end-to-end finetuning of the autoencoder the minibatch size is set to 256, starting learning rate is set to 0.1, which is divided by 10 every 20000 iterations, and weight decay is set to 0.
306
+ All of the above parameters are set to achieve a reasonably good reconstruction loss and are held constant across all datasets.
307
+ Dataset-specific settings of these parameters might improve performance on each dataset, but we refrain from this type of unrealistic parameter tuning.
308
+ To initialize centroids, we run $k$-means with 20 restarts and select the best solution.
309
+ In the KL divergence minimization phase, we train with a constant learning rate of 0.01.
310
+ The convergence threshold is set to $\mathit{tol} = 0.1\%$.
311
+ Our implementation is based on Python and Caffe~\citep{jia2014caffe} and is available at \url{https://github.com/piiswrong/dec}.
312
+
313
+
314
+
315
+ For all baseline algorithms, we perform 20 random restarts when initializing centroids and pick the result with the best objective value.
316
+ For a fair comparison with previous work~\citep{yang2010image}, we vary one hyperparameter for each algorithm over 9 possible choices and report the best accuracy in Table \ref{table:acc} and the range of accuracies in Fig. \ref{fig:acc}.
317
+ For LDGMI and SEC, we use the same parameter and range as in their corresponding papers.
318
+ For our proposed algorithm, we vary $\lambda$, the parameter that controls annealing speed, over $2^i\times 10, i = 0, 1, ..., 8$.
319
+ Since $k$-means does not have tunable hyperparameters (aside from $k$), we simply run it 9 times.
320
+ GMMs perform similarly to $k$-means so we only report $k$-means results. Traditional spectral clustering performs worse than LDGMI and SEC so we only report the latter~\citep{yang2010image,nie2011spectral}.
321
+
322
+ \subsection{Experiment results}
323
+
324
+
325
+ We evaluate the performance of our algorithm both quantitatively and qualitatively.
326
+ In Table \ref{table:acc}, we report the best performance, over 9 hyperparameter settings, of each algorithm.
327
+ Note that DEC outperforms all other methods, sometimes with a significant margin.
328
+ To demonstrate the effectiveness of end-to-end training, we also show the
329
+ results from freezing the non-linear mapping $f_\theta$ during clustering.
330
+ We find that this ablation (``DEC w/o backprop'') generally performs worse than DEC.
331
+
332
+ In order to investigate the effect of hyperparameters, we plot the accuracy of
333
+ each method under all 9 settings (Fig. \ref{fig:acc}).
334
+ We observe that DEC is more consistent across hyperparameter ranges compared to LDGMI and SEC.
335
+ For DEC, the hyperparameter $\lambda = 40$ gives near-optimal performance on all datasets, whereas for the other algorithms the optimal hyperparameter varies widely.
336
+ Moreover, DEC can process the entire REUTERS dataset in half an hour with GPU acceleration while the second best algorithms, LDGMI and SEC, would need months of computation time and terabytes of memory.
337
+ We, indeed, could not run these methods on the full REUTERS dataset and report
338
+ N/A in Table \ref{table:acc} (GPU adaptation of these methods is non-trivial).
339
+
340
+ In Fig. \ref{fig:top10} we show 10 top scoring images from each cluster in MNIST and STL.
341
+ Each row corresponds to a cluster and images are sorted from left to right based on their distance to the cluster center.
342
+ We observe that for MNIST, DEC's cluster assignment corresponds to natural clusters very well, with the exception of confusing 4 and 9, while for STL, DEC is mostly correct with airplanes, trucks and cars, but spends part of its attention on poses instead of categories when it comes to animal classes.
343
+
344
+
345
+ \section{Discussion}
346
+ \subsection{Assumptions and Objective}
347
+
348
+ \begin{figure}[!h]
349
+ \centering
350
+ \includegraphics[width=0.45\textwidth]{grad_plot.png}
351
+ \caption{Gradient visualization at the start of KL divergence minimization.
352
+ This plot shows the magnitude of the gradient of the loss $L$ vs. the cluster soft assignment probability $q_{ij}$.
353
+ See text for discussion.}
354
+ \label{fig:grad}
355
+ \end{figure}
356
+ \label{sec:objective}
357
+
358
+ \begin{figure*}[t]
359
+ \centering
360
+ \begin{subfigure}[b]{0.25\textwidth}
361
+ \includegraphics[width=\textwidth]{mnist_0.png}
362
+ \caption{Epoch 0}
363
+ \end{subfigure}\quad\quad
364
+ \begin{subfigure}[b]{0.25\textwidth}
365
+ \includegraphics[width=\textwidth]{mnist_3.png}
366
+ \caption{Epoch 3}
367
+ \end{subfigure}\quad\quad
368
+ \begin{subfigure}[b]{0.25\textwidth}
369
+ \includegraphics[width=\textwidth]{mnist_6.png}
370
+ \caption{Epoch 6}
371
+ \end{subfigure}\\
372
+ \begin{subfigure}[b]{0.25\textwidth}
373
+ \includegraphics[width=\textwidth]{mnist_9.png}
374
+ \caption{Epoch 9}
375
+ \end{subfigure}\quad\quad
376
+ \begin{subfigure}[b]{0.25\textwidth}
377
+ \includegraphics[width=\textwidth]{mnist_12.png}
378
+ \caption{Epoch 12}
379
+ \end{subfigure}\quad\quad
380
+ \begin{subfigure}[b]{0.25\textwidth}
381
+ \includegraphics[width=\textwidth]{acc_progress_anno.pdf}
382
+ \caption{Accuracy vs. epochs}
383
+ \end{subfigure}
384
+ \caption{We visualize the latent representation as the KL divergence minimization phase proceeds on MNIST.
385
+ Note the separation of clusters from epoch 0 to epoch 12.
386
+ We also plot the accuracy of DEC at different epochs, showing that KL divergence minimization improves clustering accuracy. This figure is best viewed in color.}
387
+ \label{fig:progress}
388
+ \end{figure*}
389
+
390
+
391
+ \begin{table*}[ht]
392
+ \centering
393
+ \caption{Comparison of clustering accuracy (Eq. \ref{eqn:acc}) on autoencoder (AE) feature.}
394
+ \begin{tabular}{l|c|c|c|c}
395
+ Method & MNIST & STL-HOG & REUTERS-10k & REUTERS \\ \hline \hline
396
+ AE+$k$-means & 81.84\% & 33.92\% & 66.59\% & 71.97\% \\ \hline
397
+ AE+LDMGI & 83.98\% & 32.04\% & 42.92\% & N/A \\ \hline
398
+ AE+SEC & 81.56\% & 32.29\% & 61.86\% & N/A \\ \hline
399
+ DEC (ours) & \textbf{84.30\%} & \textbf{35.90\%} & \textbf{72.17\%} & \textbf{75.63\%}
400
+ \end{tabular}
401
+ \label{table:ae}
402
+ \end{table*}
403
+
404
+ \begin{table*}[ht]
405
+ \centering
406
+ \caption{Clustering accuracy (Eq. \ref{eqn:acc}) on imbalanced subsample of MNIST.}
407
+ \begin{tabular}{l|c|c|c|c|c}
408
+ \backslashbox{Method}{$r_{min}$} & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\ \hline \hline
409
+ $k$-means & 47.14\% & 49.93\% & 53.65\% & 54.16\% & 54.39\% \\ \hline
410
+ AE+$k$-means & 66.82\% & 74.91\% & 77.93\% & 80.04\% & 81.31\% \\ \hline
411
+ DEC & 70.10\% & 80.92\% & 82.68\% & 84.69\% & 85.41\% \\
412
+ \end{tabular}
413
+ \label{table:imba}
414
+ \end{table*}
415
+
416
+ The underlying assumption of DEC is that the initial classifier's high confidence predictions are mostly correct.
417
+ To verify that this assumption holds for our task and that our choice of $P$ has the desired properties, we plot the magnitude of the gradient of $L$ with respect to each embedded point, $|\partial L / \partial z_i|$, against its soft assignment, $q_{ij}$, to a randomly chosen MNIST cluster $j$ (Fig. \ref{fig:grad}).
418
+
419
+ We observe that points closer to the cluster center (large $q_{ij}$) contribute more to the gradient.
420
+ We also show the raw images of 10 data points at each 10th percentile, sorted by $q_{ij}$.
421
+ Instances with higher similarity are more canonical examples of ``5''.
422
+ As confidence decreases, instances become more ambiguous and eventually turn into a mislabeled ``8'', suggesting the soundness of our assumption.
423
+
424
+
425
+ \subsection{Contribution of Iterative Optimization}
426
+
427
+
428
+ In Fig. \ref{fig:progress} we visualize the progression of the embedded representation of a random subset of MNIST during training.
429
+ For visualization we use t-SNE~\citep{van2008visualizing} applied to the embedded points $z_i$.
430
+ It is clear that the clusters are becoming increasingly well separated.
431
+ Fig. \ref{fig:progress} (f) shows how accuracy correspondingly improves over SGD epochs.
432
+
433
+ \subsection{Contribution of Autoencoder Initialization}
434
+ To better understand the contribution of each component, we show the performance of all algorithms with autoencoder features in Table \ref{table:ae}.
435
+ We observe that the performance of SEC and LDMGI does not change significantly with autoencoder features, while $k$-means improves but remains below DEC.
436
+ This demonstrates the power of deep embedding and the benefit of fine-tuning with the proposed KL divergence objective.
437
+
438
+ \subsection{Performance on Imbalanced Data}
439
+ In order to study the effect of imbalanced data, we sample subsets of MNIST with various retention rates.
440
+ For minimum retention rate $r_{min}$, data points of class 0 will be kept with probability $r_{min}$ and class 9 with probability 1, with the other classes linearly in between.
441
+ As a result the largest cluster will be $1/r_{min}$ times as large as the smallest one.
442
+ From Table \ref{table:imba} we can see that DEC is fairly robust against cluster size variation.
443
+ We also observe that KL divergence minimization (DEC) consistently improves clustering accuracy after autoencoder and $k$-means initialization (shown as AE+$k$-means).
444
+
445
+ \subsection{Number of Clusters}
446
+ \begin{figure}[!h]
447
+ \centering
448
+ \includegraphics[width=0.4\textwidth]{nc.pdf}
449
+ \caption{Selection of the centroid count, $k$.
450
+ This is a plot of Normalized Mutual Information (NMI) and Generalizability vs. number of clusters.
451
+ Note that there is a sharp drop of generalizability from 9 to 10 which means that 9 is the optimal number of clusters.
452
+ Indeed, we observe that 9 gives the highest NMI.}
453
+ \label{fig:nc}
454
+ \end{figure}
455
+
456
+ So far we have assumed that the number of natural clusters is given to simplify comparison between algorithms.
457
+ However, in practice this quantity is often unknown.
458
+ Therefore a method for determining the optimal number of clusters is needed.
459
+ To this end, we define two metrics: (1) the standard metric, Normalized Mutual Information (NMI), for evaluating clustering results with different cluster number:
460
+ \begin{displaymath}
461
+ \mathit{NMI}(l, c) = \frac{I(l, c)}{\frac{1}{2}[H(l)+H(c)]},
462
+ \end{displaymath}
463
+ where $I$ is the mutual information metric and $H$ is entropy,
464
+ and (2) generalizability ($G$) which is defined as the ratio between training and validation loss:
465
+ \begin{displaymath}
466
+ G = \frac{L_{train}}{L_{validation}}.
467
+ \end{displaymath}
468
+ $G$ is small when the training loss is lower than the validation loss, which indicates a high degree of overfitting.
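+
+ The sketch below follows the two definitions above, assuming integer labels and cluster ids starting at 0 and loss values measured on separate training and validation passes.
+ \begin{verbatim}
+ import numpy as np
+
+ def nmi(labels, clusters):
+     """NMI(l, c) = I(l, c) / (0.5 * (H(l) + H(c)))."""
+     n = labels.size
+     joint = np.zeros((labels.max() + 1, clusters.max() + 1))
+     for l, c in zip(labels, clusters):
+         joint[l, c] += 1.0 / n                 # joint distribution p(l, c)
+     pl, pc = joint.sum(axis=1), joint.sum(axis=0)
+     nz = joint > 0
+     mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pl, pc)[nz]))
+     ent = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
+     return mi / (0.5 * (ent(pl) + ent(pc)))
+
+ def generalizability(train_loss, validation_loss):
+     """G = L_train / L_validation; a small G indicates overfitting."""
+     return train_loss / validation_loss
+ \end{verbatim}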
469
+
470
+ Fig. \ref{fig:nc} shows a sharp drop in generalizability when the cluster number increases from 9 to 10, which suggests that 9 is the optimal number of clusters. We indeed observe the highest NMI score at 9, which demonstrates that generalizability is a good metric for selecting the cluster number. NMI is highest at 9 instead of 10 because
471
+ 9 and 4 are similar in writing and DEC thinks that they should form a single cluster. This corresponds well with our qualitative results in Fig. \ref{fig:top10}.
472
+
473
+
474
+ \section{Conclusion}
475
+ This paper presents Deep Embedded Clustering, or DEC---an algorithm that clusters a set of data points in a jointly optimized feature space.
476
+ DEC works by iteratively optimizing a KL divergence based clustering objective with a self-training target distribution. Our method can be viewed as an unsupervised extension of semi-supervised self-training. Our framework provides a way to learn a representation specialized for clustering without ground-truth cluster membership labels.
477
+
478
+ Empirical studies demonstrate the strength of our proposed algorithm. DEC offers improved performance as well as robustness with respect to hyperparameter settings, which is particularly important in unsupervised tasks since cross-validation is not possible. DEC also has the virtue of linear complexity in the number of data points which allows it to scale to large datasets.
479
+ \section{Acknowledgment}
480
+ This work is in part supported by ONR N00014-13-1-0720, NSF IIS- 1338054, and Allen Distinguished Investigator Award.
481
+
482
+
483
+ \bibliography{bib}
484
+ \bibliographystyle{icml2016}
485
+
486
+ \end{document}
papers/1511/1511.06422.tex ADDED
@@ -0,0 +1,520 @@
1
+ \documentclass{article} \clearpage{}\usepackage{iclr2016_conference,times}
2
+ \usepackage{hyperref}
3
+ \usepackage{url}
4
+ \usepackage{graphicx}
5
+ \usepackage{amsmath}
6
+ \usepackage{caption}
7
+ \usepackage{epstopdf}
8
+ \usepackage{algpseudocode}
9
+ \usepackage{algorithm}
10
+ \clearpage{}
11
+ \clearpage{}\renewcommand{\algorithmicrequire}{\textbf{Input:}}
12
+ \renewcommand{\algorithmicensure}{\textbf{Output:}}
13
+ \newcommand{\algrule}[1][.2pt]{\par\vskip.5\baselineskip\hrule height #1\par\vskip.5\baselineskip}
14
+
15
+ \epstopdfsetup{update}
16
+
17
+ \newenvironment{itemize*}{\begin{itemize}\setlength{\itemsep}{1pt}\setlength{\parskip}{1pt}}{\end{itemize}}
18
+
19
+ \newenvironment{enumerate*}{\begin{enumerate}\setlength{\itemsep}{1pt}\setlength{\parskip}{1pt}}{\end{enumerate}}
20
+ \clearpage{}
21
+ \title{All you need is a good init}
22
+
23
+ \author{Dmytro Mishkin, Jiri Matas \\
24
+ \\
25
+ Center for Machine Perception\\
26
+ Czech Technical University in Prague\\
27
+ Czech Republic
28
+ \texttt{\{mishkdmy,matas\}@cmp.felk.cvut.cz} \\
29
+ }
30
+
31
+ \newcommand{\fix}{\marginpar{FIX}}
32
+ \newcommand{\new}{\marginpar{NEW}}
33
+
34
+ \iclrfinalcopy
35
+
36
+ \begin{document}
37
+ \maketitle
38
+ \begin{abstract}
39
+ Layer-sequential unit-variance (LSUV) initialization -- a simple method for weight initialization for deep net learning -- is proposed. The method consists of two steps. First, pre-initialize the weights of each convolution or inner-product layer with orthonormal matrices. Second, proceed from the first to the final layer, normalizing the variance of each layer's output to one.
40
+
41
+ Experiments with different activation functions (maxout, ReLU-family, tanh) show that the proposed initialization leads to learning of very deep nets that (i) produces networks with test accuracy better than or equal to that of standard methods and (ii) is at least as fast as the complex schemes proposed specifically for very deep nets such as FitNets~(\cite{FitNets2014}) and Highway~(\cite{Highway2015}).
42
+
43
+ Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets and the state-of-the-art, or very close to it, is achieved on the MNIST, CIFAR-10/100 and ImageNet datasets.
44
+ \end{abstract}
45
+
46
+ \section{Introduction}
47
+ \label{intro}
48
+ Deep nets have demonstrated impressive results on a number of computer vision and natural language processing problems.
49
+ At present, state-of-the-art results in image classification~(\cite{VGGNet2015,Googlenet2015}) and speech recognition~(\cite{VGGNetSound2015}), etc., have been achieved with very deep ($\geq 16$ layer) CNNs.
50
+ Thin deep nets are of particular interest, since they are accurate and, at the same time, inference-time efficient~(\cite{FitNets2014}).
51
+
52
+ One of the main obstacles preventing the wide adoption of very deep nets is the absence of a general, repeatable and efficient procedure for their end-to-end training.
53
+ For example, VGGNet~(\cite{VGGNet2015}) was optimized by a four stage procedure that started by training a network with moderate depth, adding progressively more layers.
54
+ \cite{FitNets2014} stated that deep and thin networks are very hard to train by backpropagation if deeper than five layers, especially with uniform initialization.
55
+
56
+ On the other hand, \cite{MSRA2015} showed that it is possible to train the VGGNet in a single optimization run if the network weights are initialized with a specific ReLU-aware initialization. The \cite{MSRA2015} procedure
57
+ generalizes to the ReLU non-linearity the idea of filter-size dependent initialization, introduced for the linear case by~(\cite{Xavier10}).
58
+ Batch normalization~(\cite{BatchNorm2015}), a technique that inserts layers into the deep net that transform each batch's output to zero mean and unit variance, has successfully facilitated training of the twenty-two layer GoogLeNet~(\cite{Googlenet2015}).
59
+ However, batch normalization adds a 30\% computational overhead to each iteration.
60
+
61
+ The main contribution of the paper is a proposal of a simple initialization procedure that, in connection with standard stochastic gradient descent~(SGD), leads to state-of-the-art thin and very deep neural nets\footnote{The code allowing to reproduce the experiments is available at \\ \url{https://github.com/ducha-aiki/LSUVinit}}.
62
+ The result highlights the importance of initialization in very deep nets. We review the history of CNN initialization in Section~\ref{sec:initialization-review}, which is followed by a detailed description of the novel initialization method in Section~\ref{sec:algorithm}. The method is experimentally validated in Section~\ref{sec:experiment}.
63
+
64
+ \section {Initialization in neural networks}
65
+ \label{sec:initialization-review}
66
+
67
+ After the success of CNNs in ILSVRC 2012~(\cite{AlexNet2012}), initialization with zero-mean Gaussian noise with standard deviation 0.01, with a bias of one added for some layers, became very popular. But, as mentioned before, it is not possible to train a very deep network from scratch this way~(\cite{VGGNet2015}).
68
+ The problem is caused by the activation (and/or) gradient magnitude in the final layers~(\cite{MSRA2015}). If each improperly initialized layer scales its input by $k$, the final scale would be $k^{L}$, where $L$ is the number of layers. Values of $k>1$ lead to extremely large output values; $k<1$ leads to a diminishing signal and gradients.
69
+
70
+ \begin{figure}[tb]
71
+ \centering
72
+ \includegraphics[width=0.49\linewidth]{images/conv11_weights_update_relu.pdf}
73
+ \includegraphics[width=0.49\linewidth]{images/conv11_weights_update_vlrelu.pdf}\\
74
+ \includegraphics[width=0.49\linewidth]{images/conv11_weights_update_tanh.pdf}
75
+ \includegraphics[width=0.49\linewidth]{images/conv11_weights_update_maxout.pdf}
76
+ \caption{Relative magnitude of weight updates as a function of the training iteration for different weight initialization scalings after ortho-normalization. Values in the range 0.1\% .. 1\% lead to convergence; larger values lead to divergence; for smaller values, the network can hardly leave the initial state. Subgraphs show results for different non-linearities -- ReLU (top left), VLReLU (top right), hyperbolic tangent (bottom left) and Maxout (bottom right).}
77
+ \label{fig:weights-scaling}
78
+ \end{figure}
79
+
80
+ \cite{Xavier10} proposed a formula for estimating the standard deviation on the basis of the number of input and output channels of the layers, under the assumption of no non-linearity between layers. Despite the invalidity of this assumption, Glorot initialization works well in many applications.
81
+ ~\cite{MSRA2015} extended this formula to the ReLU~(\cite{ReLU2011}) non-linearity and showed its superior performance for ReLU-based nets.
82
+ Figure~\ref{fig:weights-scaling} shows why scaling is important. Large weights lead to divergence via updates larger than the initial values; small initial weights do not allow the network to learn, since the updates are of the order of 0.0001$\%$ per iteration. The optimal scaling for a ReLU-net is around 1.4, which is in line with the theoretically derived $\sqrt{2}$ of \cite{MSRA2015}.
83
+ \cite{Sussillo2014} proposed the so-called Random walk initialization (RWI), which keeps the log of the norms of the backpropagated errors constant. In our experiments, we have not been able to obtain good results with our implementation of RWI, which is why this method is not evaluated in the experimental section.
84
+
85
+ \cite{KDHinton2015} and \cite{FitNets2014} take another approach to initialization and formulate training as mimicking teacher network predictions (so-called knowledge distillation) and internal representations (so-called Hints initialization) rather than minimizing the softmax loss.
86
+
87
+ \cite{Highway2015} proposed an LSTM-inspired gating scheme to control information and gradient flow through the network. They trained a 1000-layer MLP network on MNIST. Basically, this kind of network implicitly learns the depth needed for the
88
+ given task.
89
+
90
+ Independently, \cite{OrthoNorm2013} showed that orthonormal matrix initialization works much better for linear networks than Gaussian noise, which is only approximately orthogonal. It also works for networks with non-linearities.
91
+
92
+ The approach of layer-wise pre-training~(\cite{Bengio2006}), which is still useful for multi-layer perceptrons, is not popular for training discriminative convolutional networks.
93
+ \section{Layer-sequential unit-variance initialization}
94
+ \label{sec:algorithm}
95
+ To the best of our knowledge, there have been no attempts to generalize the \cite{Xavier10} formulas to non-linearities other than ReLU, such as tanh, maxout, etc. Also, the formula does not cover max-pooling, local normalization layers~\cite{AlexNet2012} and other types of layers which influence the activation variance.
96
+ Instead of a theoretical derivation for all possible layer types, or an extensive parameter search as in Figure~\ref{fig:weights-scaling}, we propose a data-driven weight initialization.
97
+
98
+ We thus extend the orthonormal initialization~\cite{OrthoNorm2013} to an iterative procedure, described in Algorithm~\ref{alg:LSUV}. The initialization of \cite{OrthoNorm2013} can be implemented in two steps. First, fill the weights with Gaussian noise with unit variance. Second, decompose them into an orthonormal basis with a QR or SVD decomposition and replace the weights with one of the components.
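+
+ A NumPy sketch of this two-step pre-initialization, assuming the weights have been reshaped to a 2D matrix (e.g. output channels $\times$ fan-in for a convolution), is:
+ \begin{verbatim}
+ import numpy as np
+
+ def orthonormal_init(rows, cols, seed=0):
+     """Step 1: unit-variance Gaussian fill; step 2: keep the orthonormal
+     factor of a QR decomposition."""
+     w = np.random.default_rng(seed).standard_normal((rows, cols))
+     if rows >= cols:
+         q, _ = np.linalg.qr(w)   # orthonormal columns
+         return q
+     q, _ = np.linalg.qr(w.T)
+     return q.T                   # orthonormal rows for wide matrices
+ \end{verbatim}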
99
+
100
+ The LSUV process then estimates the output variance of each convolution and inner-product layer and scales the weights to make the variance equal to one. The influence of the selected mini-batch size on the estimated variance is negligible over a wide range; see the Appendix.
101
+
102
+ The proposed scheme can be viewed as an orthonormal initialization combined with batch normalization performed only on the first mini-batch. The similarity to batch normalization is the unit-variance normalization procedure, while the initial ortho-normalization of the weight matrices efficiently de-correlates layer activations, which is not done in~\cite{BatchNorm2015}. Experiments show that such normalization is sufficient and computationally highly efficient in comparison with full batch normalization.
103
+
104
+ \begin{algorithm}[b]
105
+ \caption{Layer-sequential unit-variance orthogonal initialization.
106
+ $L$ -- convolution or fully-connected layer, $W_L$ -- its weights, $B_L$ -- its output blob,
107
+ $\textit{Tol}_\textit{var}$ -- variance tolerance, $T_i$ -- current trial, $T_\textit{max}$ -- max number of trials.}
108
+ \label{alg:LSUV}
109
+ \begin{algorithmic}
110
+ \footnotesize
111
+ \algrule
112
+ \State {\bf Pre-initialize} network with orthonormal matrices as in ~\cite{OrthoNorm2013}
113
+ \For {each layer $L$}
114
+ \While { $|\textit{Var}(B_L) - 1.0| \ge \textit{Tol}_\textit{var} \text{ and } (T_i < T_\textit{max})$}
115
+ \State do Forward pass with a mini-batch
116
+ \State calculate $\textit{Var}(B_L)$
117
+ \State $W_L$ = $W_L$ / $\sqrt{\textit{Var}(B_L)}$
118
+ \EndWhile
119
+ \EndFor
120
+ \end{algorithmic}
121
+ \end{algorithm}
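+
+ For reference, a PyTorch-style sketch of the main loop is given below (the released code linked in the introduction remains the reference implementation); the list of layers and the forward-hook mechanics are assumptions of the sketch.
+ \begin{verbatim}
+ import torch
+
+ @torch.no_grad()
+ def lsuv(model, layers, batch, tol_var=0.1, t_max=10):
+     """After orthonormal pre-initialization, rescale each layer's weights
+     until the variance of its output on one mini-batch is close to one."""
+     for layer in layers:              # conv / inner-product modules, in order
+         captured = {}
+         handle = layer.register_forward_hook(
+             lambda m, i, o: captured.update(out=o))
+         for _ in range(t_max):        # T_max guards against an infinite loop
+             model(batch)
+             var = captured['out'].var().item()
+             if abs(var - 1.0) < tol_var:
+                 break
+             layer.weight.data /= var ** 0.5   # W_L <- W_L / sqrt(Var(B_L))
+         handle.remove()
+     return model
+ \end{verbatim}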
122
+
123
+ The LSUV algorithm is summarized in Algorithm~\ref{alg:LSUV}.
124
+ The single parameter $\textit{Tol}_\textit{var}$ influences convergence of the initialization procedure, not the properties of the trained network. Its value does not noticeably influence the performance in a broad range of 0.01 to 0.1.
125
+ Because of data variations, it is often not possible to normalize the variance with the desired precision. To eliminate the possibility of an infinite loop, we restricted the number of trials to $T_\textit{max}$. However, in the experiments described in this paper, $T_\textit{max}$ was never reached; the desired variance was achieved in 1--5 iterations.
126
+
127
+ We tested a variant of LSUV initialization which normalizes the input activations of each layer instead of the output ones. Normalizing the input or the output is identical for standard feed-forward nets, but normalizing the input is much more complicated for networks with maxout~(\cite{Maxout2013}) or for networks like GoogLeNet~(\cite{Googlenet2015}) which use the outputs of multiple layers as input. Input normalization brought no improvement in results when tested against the LSUV Algorithm~\ref{alg:LSUV}.
128
+
129
+ LSUV was also tested with pre-initialization of the weights with Gaussian noise instead of orthonormal matrices. The Gaussian initialization led to a small, but consistent, decrease in performance.
130
+ \section{Experimental validation}
131
+ \label{sec:experiment}
132
+ Here we show that very deep and thin nets can be trained in a single stage. The network architectures are exactly as proposed by~\cite{FitNets2014} and are presented in Table~\ref{tab:fitnet-architectures}.
133
+ \begin{table}[htb]
134
+ \caption{ FitNets~\cite{FitNets2014} network architecture used in experiments. Non-linearity: Maxout with 2 linear pieces in convolution layers, Maxout with 5 linear pieces in fully-connected.}
135
+ \label{tab:fitnet-architectures}
136
+ \centering
137
+ \setlength{\tabcolsep}{.3em}
138
+ \begin{tabular}{|c|c|c|c|}
139
+ \hline
140
+ FitNet-1 & FitNet-4 & FitResNet-4& FitNet-MNIST\\
141
+ 250K param & 2.5M param & 2.5M param & 30K param\\
142
+ \hline
143
+ conv 3x3x16 & conv 3x3x32 & conv 3x3x32 & conv 3x3x16 \\
144
+ conv 3x3x16 & conv 3x3x32 & conv 3x3x32 $\rightarrow$sum & conv 3x3x16 \\
145
+ conv 3x3x16 & conv 3x3x32 & conv 3x3x48 & \\
146
+ & conv 3x3x48& conv 3x3x48 $\rightarrow$sum & \\
147
+ & conv 3x3x48& conv 3x3x48 & \\
148
+ pool 2x2& pool 2x2 & pool 2x2 & pool 4x4, stride2 \\
149
+ \hline
150
+ conv 3x3x32 & conv 3x3x80 & conv 3x3x80 & conv 3x3x16 \\
151
+ conv 3x3x32 & conv 3x3x80 & conv 3x3x80 $\rightarrow$sum & conv 3x3x16 \\
152
+ conv 3x3x32 & conv 3x3x80& conv 3x3x80 & \\
153
+ & conv 3x3x80& conv 3x3x80$\rightarrow$sum & \\
154
+ & conv 3x3x80& conv 3x3x80 & \\
155
+ pool 2x2& pool 2x2 & pool 2x2 & pool 4x4, stride2 \\
156
+ \hline
157
+ conv 3x3x48 & conv 3x3x128 & conv 3x3x128 & conv 3x3x12 \\
158
+ conv 3x3x48 & conv 3x3x128 & conv 3x3x128 $\rightarrow$sum& conv 3x3x12 \\
159
+ conv 3x3x64 & conv 3x3x128& conv 3x3x128 & \\
160
+ & conv 3x3x128& conv 3x3x128 $\rightarrow$sum & \\
161
+ & conv 3x3x128 & conv 3x3x128 & \\
162
+ pool 8x8 (global)& pool 8x8 (global) &pool 8x8 (global) & pool 2x2 \\
163
+ \hline
164
+ fc-500& fc-500 &fc-500& \\
165
+ softmax-10(100) & softmax-10(100) & softmax-10(100) & softmax-10\\
166
+ \hline
167
+ \end{tabular}
168
+ \end{table}
169
+ \subsection{MNIST}
170
+ First, as a ``sanity check'', we performed an experiment on the MNIST dataset~(\cite{MNIST1998}). It consists of 60,000 28x28 grayscale images of handwritten digits 0 to 9.
171
+ We selected the FitNet-MNIST architecture (see Table~\ref{tab:fitnet-architectures}) of~\cite{FitNets2014} and trained it with the proposed initialization strategy, without data augmentation. Recognition results are shown in Table~\ref{tab:CIFAR-MNIST}, right block.
172
+ LSUV outperforms orthonormal initialization and both LSUV and orthonormal outperform Hints initialization~\cite{FitNets2014}.
173
+ The error rates of the Deeply-Supervised Nets (DSN,~\cite{DSN2015}) and maxout networks~\cite{Maxout2013}, the current state-of-art, are provided for reference.
174
+
175
+ Since the widely cited DSN error rate of 0.39\%, the state of the art until recently, was obtained after replacing the softmax classifier with an SVM, we do the same and also observe improved results (line FitNet-LSUV-SVM in Table~\ref{tab:CIFAR-MNIST}).
176
+
177
+ \subsection{CIFAR-10/100} We validated the proposed LSUV initialization strategy on the CIFAR-10/100 datasets~(\cite{CIFAR2010}). Each contains 60,000 32x32 RGB images, divided into 10 and 100 classes, respectively.
178
+
179
+ The FitNets are trained with stochastic gradient descent with momentum set to 0.9; the initial learning rate of 0.01 is reduced by a factor of 10 after the 100th, 150th and 200th epochs, and training finishes at the 230th epoch. \cite{Highway2015} and \cite{FitNets2014} trained their networks for 500 epochs. Of course, training time is a trade-off dependent on the desired accuracy; one could train a slightly less accurate network much faster.
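+ For concreteness, the learning-rate schedule just described can be written as a small helper (a sketch, names ours):
+ \begin{verbatim}
+ # 0.01, divided by 10 after the 100th, 150th and 200th epochs;
+ # training stops at the 230th epoch.
+ def learning_rate(epoch, base_lr=0.01, drop_epochs=(100, 150, 200)):
+     lr = base_lr
+     for e in drop_epochs:
+         if epoch >= e:
+             lr /= 10.0
+     return lr
+ \end{verbatim}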
180
+
181
+ As in the MNIST experiment, LSUV- and orthonormal-initialized nets outperformed Hints-trained FitNets, leading to a new state of the art when using the commonly used augmentation -- mirroring and random shifts. The gain on the fine-grained CIFAR-100 is much larger than on CIFAR-10. Also, note that FitNets with LSUV initialization outperform even much larger networks like Large-All-CNN~\cite{ALLCNN2015} and Fractional Max-pooling~\cite{FractMaxPool2014} trained with affine and color dataset augmentation on CIFAR-100.
182
+ The results of LSUV are virtually identical to those of the orthonormal initialization.
183
+ \begin{table}[htb]
184
+ \caption{Network performance comparison on the MNIST and CIFAR-10/100 datasets. Results marked '$\dagger$' were obtained with the RMSProp optimizer~\cite{Tieleman2012}.}
185
+ \label{tab:CIFAR-MNIST}
186
+ \footnotesize
187
+ \centering
188
+ \setlength{\tabcolsep}{.3em}
189
+ \begin{tabular}{lll}
190
+ \multicolumn{3}{c}{Accuracy on CIFAR-10/100, with data augmentation}\\
191
+ \hline
192
+ Network &CIFAR-10, $[\%]$ & CIFAR-100,$[\%]$ \\
193
+ \hline
194
+ Fitnet4-LSUV & \textbf{93.94} & 70.04 (\textbf{72.34}$\dagger$) \\
195
+ Fitnet4-OrthoInit & 93.78 & 70.44 (72.30$\dagger$) \\
196
+ Fitnet4-Hints & 91.61 & 64.96 \\
197
+ Fitnet4-Highway & 92.46 & 68.09 \\
198
+ \hline
199
+ ALL-CNN & 92.75 & 66.29\\
200
+ DSN & 92.03 & 65.43\\
201
+ NiN & 91.19 & 64.32\\
202
+ maxout & 90.62 & 65.46\\
203
+ \textit{MIN} & \textit{93.25}& \textit{71.14} \\
204
+ \hline
205
+ \multicolumn{3}{c}{Extreme data augmentation}\\
206
+ \hline
207
+ Large ALL-CNN& 95.59 & n/a\\
208
+ Fractional MP (1 test) & 95.50 & 68.55 \\
209
+ Fractional MP (12 tests)& \textbf{96.53} & \textbf{73.61}\\
210
+ \hline
211
+ \end{tabular}
212
+ \begin{tabular}{llll}
213
+ \multicolumn{4}{c}{Error on MNIST w/o data augmentation}\\
214
+ \hline
215
+ Network & layers & params & Error, $\%$\\
216
+ \hline
217
+ \multicolumn{4}{c}{FitNet-like networks}\\
218
+ \hline
219
+ HighWay-16 & 10 & 39K & 0.57 \\
220
+ FitNet-Hints & 6 &30K & 0.51\\
221
+ FitNet-Ortho & 6 &30K & 0.48\\
222
+ FitNet-LSUV & 6 &30K & 0.48\\
223
+ FitNet-Ortho-SVM & 6 &30K & 0.43\\
224
+ FitNet-LSUV-SVM & 6 &30K & \textbf{0.38}\\
225
+ \hline
226
+ \multicolumn{4}{c}{State-of-art-networks}\\
227
+ \hline
228
+ DSN-Softmax & 3 & 350K & 0.51 \\
229
+ DSN-SVM & 3 & 350K & 0.39 \\
230
+ HighWay-32 & 10 & 151K & 0.45 \\
231
+ maxout & 3 & 420K & 0.45 \\
232
+ \textit{MIN}~\footnotemark &\textit{9} & \textit{447K} & \textit{0.24} \\
233
+ \hline
234
+ \end{tabular}
235
+ \end{table}
236
+ \footnotetext{
237
+ When preparing this submission, we found the recent unreviewed MIN~\cite{MIN2015} paper, which uses a sophisticated combination of batch normalization, maxout and network-in-network non-linearities and establishes a new state of the art on MNIST.}
238
+ \section{Analysis of empirical results}
239
+ \label{sec:solver-times-inits}
240
+ \subsection{Initialization strategies and non-linearities}
241
+ \begin{table}[htb]
242
+ \caption{The compatibility of activation functions and initialization.
243
+ \hspace{\textwidth}
244
+ Dataset: CIFAR-10. Architecture: FitNet4, 2.5M params for the maxout net, 1.2M for the rest, 17 layers. The n/c symbol stands for ``failed to converge''; n/c$\dagger$ -- after extensive trials, we managed to train a maxout net with MSRA initialization using a very small learning rate and gradient clipping, see Figure~\ref{fig:sgd-cifar10}. The experiment is marked n/c as the training time was excessive and the parameters non-standard.}
245
+ \label{tab:activations}
246
+ \centering
247
+ \begin{tabular}{llllll}
248
+ \hline
249
+ Init method & maxout & ReLU & VLReLU & tanh & Sigmoid \\
250
+ \hline
251
+ LSUV & \textbf{93.94} & \textbf{92.11} & 92.97 & 89.28 & n/c \\
252
+ OrthoNorm & 93.78 & 91.74 & 92.40 & 89.48 & n/c \\
253
+ OrthoNorm-MSRA scaled & -- & 91.93 & \textbf{93.09} & -- & n/c \\
254
+ Xavier & 91.75 & 90.63 & 92.27 & \textbf{89.82} & n/c \\
255
+ MSRA & n/c$\dagger$ & 90.91 & 92.43 & 89.54 & n/c \\
256
+ \hline
257
+ \end{tabular}
258
+ \end{table}
259
+ For the FitNet-1 architecture, we did not experience any difficulties training the network with any of the activation functions (ReLU, maxout, tanh), optimizers (SGD, RMSProp) or initializations (Xavier, MSRA, Ortho, LSUV), unlike with the uniform initialization used in~\cite{FitNets2014}. The most probable cause is that CNNs tolerate a wide variety of mediocre initializations; only the learning time increases.
260
+ The differences in the final accuracy between the different initialization methods for the FitNet-1 architecture are rather small and are therefore not presented here.
261
+
262
+ The FitNet-4 architecture is much more difficult to optimize and thus we focus on it in the experiments
263
+ presented in this section.
264
+
265
+ We have explored the initializations with different activation functions in very deep networks: ReLU, hyperbolic tangent, sigmoid, maxout, and VLReLU -- very leaky ReLU~(\cite{SparseConvNet2014}), a variant of leaky ReLU~(\cite{Maas2013}) with a large negative slope of 0.333 instead of the originally proposed 0.01, which is popular in Kaggle competitions~(\cite{deepsea}, \cite{GrahamCIFAR}).
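+ For clarity, VLReLU differs from the standard leaky ReLU only in the slope of its negative part (a NumPy sketch):
+ \begin{verbatim}
+ import numpy as np
+
+ def vlrelu(x, negative_slope=0.333):   # leaky ReLU uses 0.01
+     return np.where(x > 0, x, negative_slope * x)
+ \end{verbatim}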
266
+
267
+ Testing was performed on CIFAR-10 and the results are given in Table~\ref{tab:activations} and Figure~\ref{fig:sgd-cifar10}. The performance of the orthonormal-based methods is superior to the scaled Gaussian-noise approaches for all tested types of activation functions, except tanh. The proposed LSUV strategy outperforms the orthonormal initialization by a smaller margin, but consistently (see Table~\ref{tab:activations}). All the methods failed to train the sigmoid-based very deep network.
268
+ Figure~\ref{fig:sgd-cifar10} shows that the LSUV method not only leads to a better generalization error, but also converges faster for all tested activation functions, except tanh.
269
+
270
+
271
+ We have also tested how the different initializations work ``out-of-the-box'' with Residual net training~\cite{DeepResNet2015}; a residual net won the ILSVRC-2015 challenge. The original paper proposed different implementations of residual learning. We adopted the simplest one, shown in Table~\ref{tab:fitnet-architectures}, FitResNet-4. The output of each even convolutional layer is summed with the output of the previous non-linearity layer and then fed into the next non-linearity. Results are shown in Table~\ref{tab:activations-resnet}. LSUV is the only initialization algorithm which leads nets to convergence with all tested non-linearities without any additional tuning, except, again, sigmoid.
272
+ It is worth noting that the residual training improves results for ReLU and maxout, but does not help the tanh-based network.
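+ The residual wiring described above can be sketched as follows (\texttt{conv1}, \texttt{conv2} and \texttt{nonlin} are placeholders for the actual layers; this illustrates the summation pattern only, not the training code):
+ \begin{verbatim}
+ def residual_pair(x, conv1, conv2, nonlin):
+     a = nonlin(conv1(x))   # first convolution + non-linearity
+     b = conv2(a) + a       # even conv output summed with the previous
+                            # non-linearity output
+     return nonlin(b)       # fed into the next non-linearity
+ \end{verbatim}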
273
+ \begin{table}[htb]
274
+ \caption{The performance of activation functions and initialization in the Residual learning setup~\cite{DeepResNet2015}, FitResNet-4 from Table~\ref{tab:fitnet-architectures}. The n/c symbol stands for ``failed to converge''.}
275
+ \label{tab:activations-resnet}
276
+ \centering
277
+ \begin{tabular}{llllll}
278
+ \hline
279
+ Init method & maxout & ReLU & VLReLU & tanh & Sigmoid \\
280
+ \hline
281
+ LSUV & \textbf{94.16} & \textbf{92.82} & \textbf{93.36}& 89.17& n/c \\
282
+ OrthoNorm & n/c & 91.42 & n/c & 89.31 & n/c \\
283
+ Xavier & n/c & 92.48 & \textbf{93.34} & \textbf{89.62} & n/c \\
284
+ MSRA & n/c & n/c & n/c & 88.59 & n/c \\
285
+ \hline
286
+ \end{tabular}
287
+ \end{table}
288
+
289
+ \begin{figure}
290
+ \centering
291
+ \includegraphics[width=0.48\linewidth]{images/Maxout-HW.pdf}
292
+ \includegraphics[width=0.48\linewidth]{images/ReLU.pdf}\\
293
+ \includegraphics[width=0.48\linewidth]{images/VLReLU.pdf}
294
+ \includegraphics[width=0.48\linewidth]{images/TanH.pdf}\\
295
+ \caption{CIFAR-10 accuracy of FitNet-4 with different activation functions. Note that graphs are cropped at 0.4 accuracy. Highway19 is the network from~\cite{Highway2015}.}
296
+ \label{fig:sgd-cifar10}
297
+ \end{figure}
298
+
299
+ \subsection{Comparison to batch normalization (BN)}
300
+ The LSUV procedure can be viewed as batch normalization of the layer outputs performed only before the start of training. Therefore, it is natural to compare LSUV against a batch-normalized network initialized with the standard method.
301
+
302
+ \subsubsection{Where to put BN -- before or after non-linearity?}
303
+ It is not clear from the paper~\cite{BatchNorm2015} where to put the batch-normalization layer -- before the input of each layer, as stated in Section 3.1, or before the non-linearity, as stated in Section 3.2 -- so we conducted an experiment with FitNet4 on CIFAR-10 to clarify this.
304
+
305
+ Results are shown in Table~\ref{tab:before-or-after}. Exact numbers vary from run to run, but in most cases batch normalization put after the non-linearity performs better.
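+ The two placements compared in Table~\ref{tab:before-or-after} correspond to the following orderings (\texttt{conv}, \texttt{bn} and \texttt{nonlin} are placeholders for the actual layers):
+ \begin{verbatim}
+ def block_bn_before(x, conv, bn, nonlin):
+     return nonlin(bn(conv(x)))   # BN before the non-linearity
+
+ def block_bn_after(x, conv, bn, nonlin):
+     return bn(nonlin(conv(x)))   # BN after the non-linearity
+ \end{verbatim}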
306
+
307
+ \begin{table}[htb]
308
+ \centering
309
+ \caption{CIFAR-10 accuracy of batch-normalized FitNet4.\\ Comparison of batch normalization put before and after non-linearity.}
310
+ \label{tab:before-or-after}
311
+ \centering
312
+ \begin{tabular}{lrr}
313
+ \hline
314
+ Non-linearity & \multicolumn{2}{c}{Where to put BN}\\
315
+ & Before & After\\
316
+ \hline
317
+ TanH & 88.10 & 89.22\\
318
+ ReLU & 92.60 & 92.58\\
319
+ Maxout & 92.30 & 92.98 \\
320
+ \hline
321
+ \end{tabular}
322
+ \end{table}
323
+
324
+ In the next experiment we compare the Xavier-initialized, batch-normalized BN-FitNet4 with the LSUV-initialized FitNet4. Batch normalization reduces the training time in terms of the number of iterations needed, but each iteration becomes slower because of the extra computations.
325
+ The accuracy versus wall-clock-time graphs are shown in Figure~\ref{fig:BN-vs-LSUV}. The LSUV-initialized network is as good as the batch-normalized one.
326
+
327
+ However, we are not claiming that batch normalization can always be replaced by proper initialization, especially in large datasets like ImageNet.
328
+
329
+ \begin{figure}
330
+ \centering
331
+ \includegraphics[width=0.49\linewidth]{images/Maxout-BN.pdf}
332
+ \includegraphics[width=0.49\linewidth]{images/ReLU-BN.pdf}\\
333
+ \includegraphics[width=0.49\linewidth]{images/VLReLU-BN.pdf}
334
+ \includegraphics[width=0.49\linewidth]{images/TanH-BN.pdf}\\
335
+ \caption{CIFAR-10 accuracy of FitNet-4 LSUV and batch normalized~(BN) networks as function of wall-clock time. BN-half stands for half the number of iterations in each step.}
336
+ \label{fig:BN-vs-LSUV}
337
+ \end{figure}
338
+
339
+ \subsection{Imagenet training}
340
+ We trained CaffeNet~(\cite{jia2014caffe}) and GoogLeNet~(\cite{Googlenet2015}) on the ImageNet-1000 dataset~(\cite{ILSVRC15}) with the original initialization and with LSUV. CaffeNet is a variant of AlexNet with nearly identical performance, where the order of the pooling and normalization layers is switched to reduce the memory footprint.
341
+
342
+ LSUV initialization reduces the initial flat-loss period from 0.5 epochs to 0.05 epochs for CaffeNet and the network starts to converge faster, but it is overtaken by the standard CaffeNet at the 30th epoch (see Figure~\ref{fig:caffenet-training}) and its final precision is 1.3\% lower. We have no explanation for this empirical phenomenon.
343
+
344
+ On the contrary, the LSUV-initialized GoogLeNet learns faster than the original one and shows better test accuracy throughout training -- see Figure~\ref{fig:googlenet-training}. The final accuracy is 0.680 vs. 0.672, respectively.
345
+
346
+ \begin{figure}
347
+ \centering
348
+ \includegraphics[width=0.49\linewidth]{images/6_start.pdf}
349
+ \includegraphics[width=0.49\linewidth]{images/0_beg.pdf}\\
350
+ \includegraphics[width=0.49\linewidth]{images/6_mid.pdf}
351
+ \includegraphics[width=0.49\linewidth]{images/0_mid.pdf}\\
352
+ \includegraphics[width=0.49\linewidth]{images/6_all.pdf}
353
+ \includegraphics[width=0.49\linewidth]{images/0_all.pdf}\\
354
+ \caption{CaffeNet training on ILSVRC-2012 dataset with LSUV and original~\cite{AlexNet2012} initialization. Training loss (left) and validation accuracy (right). Top -- first epoch, middle -- first 10 epochs, bottom -- full training.}
355
+ \label{fig:caffenet-training}
356
+ \end{figure}
357
+
358
+ \begin{figure}
359
+ \centering
360
+ \includegraphics[width=0.49\linewidth]{images/6_start_gln.pdf}
361
+ \includegraphics[width=0.49\linewidth]{images/0_start.pdf}\\
362
+ \includegraphics[width=0.49\linewidth]{images/6_mid_gln.pdf}
363
+ \includegraphics[width=0.49\linewidth]{images/0_mid_gln.pdf}\\
364
+ \includegraphics[width=0.49\linewidth]{images/6_all_gln.pdf}
365
+ \includegraphics[width=0.49\linewidth]{images/0_all_gln.pdf}\\
366
+ \caption{GoogLeNet training on ILSVRC-2012 dataset with LSUV and reference~\cite{jia2014caffe} BVLC initializations. Training loss (left) and validation accuracy (right). Top -- first epoch, middle -- first ten epochs, bottom -- full training}
367
+ \label{fig:googlenet-training}
368
+ \end{figure}
369
+
370
+ \subsection{Timings}
371
+ \label{timings}
372
+ A significant part of LSUV initialization is the SVD decomposition of the weight matrices; e.g., for the fc6 layer of CaffeNet, an SVD of a 9216x4096 matrix is required. The computational overhead, on top of generating the scaled random Gaussian samples (which is almost instant), is shown in Table~\ref{tab:timings}. In the slowest case -- CaffeNet -- LSUV initialization takes 3.5 minutes, which is negligible in comparison to the training time.
373
+ \begin{table}[htb]
374
+ \caption{Time needed for network initialization \\ on top of random Gaussian (seconds).}
375
+ \label{tab:timings}
376
+ \centering
377
+ \begin{tabular}{r|rr|}
378
+ \hline
379
+ Network & \multicolumn{2}{c|}{Init}\\
380
+ & OrthoNorm & LSUV\\
381
+ \hline
382
+ FitNet4 & 1 & 4 \\
383
+ CaffeNet & 188 & 210 \\
384
+ GoogLeNet & 24 & 60 \\
385
+ \hline
386
+ \end{tabular}
387
+ \end{table}
388
+
389
+ \section{Conclusions}
390
+ \label{conclusions}
391
+ LSUV, layer-sequential unit-variance initialization, a simple strategy for weight initialization in deep net learning, is proposed.
392
+ We have shown that the LSUV initialization, described fully in six lines of pseudocode, is as good as complex learning schemes which need, for instance, auxiliary nets.
393
+
394
+ The LSUV initialization allows learning of very deep nets via standard SGD, is fast, and leads to (near) state-of-the-art results on the MNIST, CIFAR and ImageNet datasets, outperforming sophisticated systems designed specifically for very deep nets such as FitNets~(\cite{FitNets2014}) and Highway~(\cite{Highway2015}). The proposed initialization works well with different activation functions.
395
+
396
+ Our experiments confirm the finding of~\cite{FitNets2014} that very thin, thus fast and low in parameter count, but deep networks obtain comparable or even better performance than wider, but shallower, ones.
397
+ \subsubsection*{Acknowledgments}
398
+ The authors were supported by The Czech Science Foundation Project GACR P103/12/G084 and CTU student grant
399
+ SGS15/155/OHK3/2T/13.
400
+ \bibliography{iclr2016_conference}
401
+ \bibliographystyle{iclr2016_conference}
402
+
403
+ \appendix
404
+ \section{Technical details}
405
+ \subsection{Influence of mini-batch size to LSUV initialization}
406
+ We selected the tanh activation as the one for which LSUV initialization shows the worst performance, and tested the influence of the mini-batch size on the training process. Note that the training mini-batch is the same for all initializations; the only difference is the mini-batch used for variance estimation. One can see from Table~\ref{tab:batchsize} that there is no difference between small and large mini-batches, except in the extreme case where only two samples are used.
407
+ \begin{table}[htb]
408
+ \caption{FitNet4 TanH final performance on CIFAR-10. Dependence on LSUV mini-batch size}
409
+ \label{tab:batchsize}
410
+ \centering
411
+ \begin{tabular}{l|lllll}
412
+ \hline
413
+ Batch size for LSUV&2 & 16 & 32 & 128 & 1024\\
414
+ Final accuracy, [\%] & 89.27 & 89.30 &89.30 &89.28 & 89.31 \\
415
+ \hline
416
+ \end{tabular}
417
+ \end{table}
418
+ \subsection{LSUV weight standard deviations in different networks}
419
+ Tables~\ref{tab:relu-std} and \ref{tab:lsuv-std} show the standard deviations of the filter weights, found by the LSUV procedure and by other initialization schemes.
420
+ \begin{table}[htb]
421
+ \caption{Standard deviations of the weights per layer for different initializations, FitNet4, CIFAR10, ReLU}
422
+ \label{tab:relu-std}
423
+ \centering
424
+ \begin{tabular}{lllll}
425
+ \hline
426
+ Layer & LSUV & OrthoNorm & MSRA & Xavier\\
427
+ \hline
428
+ conv11 & 0.383 & 0.175 & 0.265 & 0.191 \\
429
+ conv12 & 0.091 & 0.058 & 0.082 & 0.059\\
430
+ conv13 & 0.083 & 0.058 & 0.083 & 0.059\\
431
+ conv14 & 0.076 & 0.058 & 0.083 & 0.059\\
432
+ conv15 & 0.068 & 0.048 & 0.060 & 0.048\\
433
+ \hline
434
+ conv21 & 0.036 & 0.048 & 0.052 & 0.037\\
435
+ conv22 & 0.048 & 0.037 & 0.052 & 0.037\\
436
+ conv23 & 0.061 & 0.037& 0.052 & 0.037\\
437
+ conv24 & 0.052 & 0.037& 0.052 & 0.037\\
438
+ conv25 & 0.067 & 0.037& 0.052 & 0.037\\
439
+ conv26 & 0.055 & 0.037& 0.052 & 0.037\\
440
+ \hline
441
+ conv31 & 0.034 &0.037& 0.052 & 0.037\\
442
+ conv32 & 0.044 &0.029& 0.041& 0.029\\
443
+ conv33 & 0.042 &0.029& 0.041 & 0.029\\
444
+ conv34 & 0.041 &0.029& 0.041 & 0.029\\
445
+ conv35 & 0.040 &0.029& 0.041 & 0.029\\
446
+ conv36 & 0.043 &0.029& 0.041 & 0.029\\
447
+ \hline
448
+ ip1 & 0.048 & 0.044 & 0.124 & 0.088\\
449
+ \hline
450
+ \end{tabular}
451
+ \end{table}
452
+ \begin{table}[htb]
453
+ \caption{Standard deviations of the weights per layer for different non-linearities, found by LSUV, FitNet4, CIFAR10}
454
+ \label{tab:lsuv-std}
455
+ \centering
456
+ \begin{tabular}{lllll}
457
+ \hline
458
+ Layer&TanH&ReLU&VLReLU&Maxout\\
459
+ \hline
460
+ conv11&0.386&0.388&0.384&0.383\\
461
+ conv12&0.118&0.083&0.084&0.058\\
462
+ conv13&0.102&0.096&0.075&0.063\\
463
+ conv14&0.101&0.082&0.080&0.065\\
464
+ conv15&0.081&0.064&0.065&0.044\\
465
+ \hline
466
+ conv21&0.065&0.044&0.037&0.034\\
467
+ conv22&0.064&0.055&0.047&0.040\\
468
+ conv23&0.060&0.055&0.049&0.032\\
469
+ conv24&0.058&0.064&0.049&0.041\\
470
+ conv25&0.061&0.061&0.043&0.040\\
471
+ conv26&0.063&0.049&0.052&0.037\\
472
+ \hline
473
+ conv31&0.054&0.032&0.037&0.027\\
474
+ conv32&0.052&0.049&0.037&0.031\\
475
+ conv33&0.051&0.048&0.042&0.033\\
476
+ conv34&0.050&0.047&0.038&0.028\\
477
+ conv35&0.051&0.047&0.039&0.030\\
478
+ conv36&0.051&0.040&0.037&0.033\\
479
+ \hline
480
+ ip1&0.084&0.044&0.044&0.038\\
481
+ \hline
482
+ \end{tabular}
483
+ \end{table}
484
+ \subsection{Gradients}
485
+ To check how the activation variance normalization influences the variance of the gradient, we measure the average variance of the gradient at all layers after 10 mini-batches. The variance is close to $10^{-9}$ for all convolutional layers. It is much more stable than for the reference methods, except MSRA; see Table~\ref{tab:grad-var}.
486
+ \begin{table}[htb]
487
+ \caption{Variance of the initial gradients per layer, different initializations, FitNet4, ReLU}
488
+ \label{tab:grad-var}
489
+ \centering
490
+ \begin{tabular}{lllll}
491
+ \hline
492
+ Layer&LSUV&MSRA&OrthoInit&Xavier\\
493
+ \hline
494
+ conv11&4.87E-10&9.42E-09&5.67E-15&2.30E-14\\
495
+ conv12&5.07E-10&9.62E-09&1.17E-14&4.85E-14\\
496
+ conv13&4.36E-10&1.07E-08&2.30E-14&9.94E-14\\
497
+ conv14&3.21E-10&7.03E-09&2.95E-14&1.35E-13\\
498
+ conv15&3.85E-10&6.57E-09&6.71E-14&3.10E-13\\
499
+ \hline
500
+ conv21&1.25E-09&9.11E-09&1.95E-13&8.00E-13\\
501
+ conv22&1.15E-09&9.73E-09&3.79E-13&1.56E-12\\
502
+ conv23&1.19E-09&1.07E-08&8.18E-13&3.28E-12\\
503
+ conv24&9.12E-10&1.07E-08&1.79E-12&6.69E-12\\
504
+ conv25&7.45E-10&1.09E-08&4.04E-12&1.36E-11\\
505
+ conv26&8.21E-10&1.15E-08&8.36E-12&2.99E-11\\
506
+ \hline
507
+ conv31&3.06E-09&1.92E-08&2.65E-11&1.05E-10\\
508
+ conv32&2.57E-09&2.01E-08&5.95E-11&2.28E-10\\
509
+ conv33&2.40E-09&1.99E-08&1.21E-10&4.69E-10\\
510
+ conv34&2.19E-09&2.25E-08&2.64E-10&1.01E-09\\
511
+ conv35&1.94E-09&2.57E-08&5.89E-10&2.27E-09\\
512
+ conv36&2.31E-09&2.97E-08&1.32E-09&5.57E-09\\
513
+ \hline
514
+ ip1&1.24E-07&1.95E-07&6.91E-08&7.31E-08\\
515
+ \hline
516
+ var(ip1)/var(conv11)&255&20&12198922&3176821\\
517
+ \hline
518
+ \end{tabular}
519
+ \end{table}
520
+ \end{document}
papers/1511/1511.06732.tex ADDED
@@ -0,0 +1,1011 @@
1
+ \documentclass{article} \usepackage{iclr2016_conference,times}
2
+ \usepackage{hyperref}
3
+ \usepackage{url}
4
+ \usepackage{graphicx}
5
+ \usepackage[cmex10]{amsmath}
6
+ \usepackage{array}
7
+ \usepackage{mdwmath}
8
+ \usepackage{mdwtab}
9
+ \usepackage{eqparbox}
10
+ \usepackage{tikz}
11
+ \usepackage{tikz-qtree}
12
+ \usepackage{stfloats}
13
+ \hyphenation{net-works Boltz-mann}
14
+ \usepackage{times}
15
+ \usepackage{xcolor}
16
+ \usepackage{amssymb}
17
+ \usepackage{epsfig}
18
+ \usepackage{amssymb}
19
+ \usepackage[]{algorithm2e}
20
+ \usepackage{floatrow}
21
+
22
+
23
+
24
+
25
+ \newcommand{\argmax}{\arg\!\max}
26
+ \definecolor{green}{rgb}{0.2,0.8,0.2}
27
+ \definecolor{blue}{rgb}{0.2,0.2,0.8}
28
+ \usepackage{xspace}
29
+ \newcommand*{\eg}{e.g.\@\xspace}
30
+ \newcommand*{\ie}{i.e.\@\xspace}
31
+ \newcommand*{\aka}{a.k.a.\@\xspace}
32
+ \makeatletter
33
+ \newcommand*{\etc}{\@ifnextchar{.}{etc}{etc.\@\xspace}}
34
+
35
+
36
+ \title{Sequence Level Training with\\Recurrent Neural Networks}
37
+
38
+
39
+ \author{Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba \\
40
+ Facebook AI Research\\
41
+ \texttt{\{ranzato, spchopra, michealauli, wojciech\}@fb.com}
42
+ }
43
+
44
+
45
+
46
+ \newcommand{\fix}{\marginpar{FIX}}
47
+ \newcommand{\new}{\marginpar{NEW}}
48
+ \newcommand{\bh}{\mathbf{h}}
49
+ \newcommand{\bb}{\mathbf{b}}
50
+ \newcommand{\bx}{\mathbf{x}}
51
+ \newcommand{\bc}{\mathbf{c}}
52
+ \newcommand{\bone}{\mathbf{1}}
53
+ \newcommand{\by}{\mathbf{y}}
54
+ \newcommand{\bs}{\mathbf{s}}
55
+ \newcommand{\btheta}{\mathbf{\theta}}
56
+ \newcommand{\bo}{\mathbf{o}}
57
+ \newcommand{\setr}{\mathcal{R}}
58
+ \newcommand{\mamark}[1]{\textcolor{orange}{{#1}}}
59
+ \newcommand{\spcmark}[1]{\textcolor{red}{{#1}}}
60
+ \newcommand{\macomment}[1]{\marginpar{\begin{center}\textcolor{orange}{#1}\end{center}}}
61
+ \iclrfinalcopy
62
+
63
+
64
+
65
+ \begin{document}
66
+ \maketitle
67
+
68
+
69
+
70
+ \begin{abstract}
71
+ Many natural language processing applications use language models to generate text.
72
+ These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image.
73
+ However, at test time the model is expected to generate the entire sequence from scratch.
74
+ This discrepancy makes generation brittle, as errors may accumulate along the way.
75
+ We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE.
76
+ On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search,
77
+ while being several times faster.
78
+ \end{abstract}
79
+
80
+ \section{Introduction}
81
+ Natural language is the most natural form of communication for humans. It is therefore essential that interactive AI systems are capable of generating text~\citep{textgen}.
82
+ A wide variety of applications rely on text generation, including machine translation, video/text summarization, question answering, among others.
83
+ From a machine learning perspective, text generation is the problem of predicting a syntactically and semantically correct sequence of consecutive words given some context. For instance, given an image, generate an appropriate caption or given a sentence in English language, translate it into French.
84
+
85
+ Popular choices for text generation models are language models based on n-grams~\citep{kneser+ney1995}, feed-forward neural networks~\citep{nlm}, and recurrent neural networks (RNNs; Mikolov et al., 2010)\nocite{mikolov-2010}. These models, when used as is to generate text, suffer from two major drawbacks. First, they are trained to predict the next word given the previous ground truth words as input.
86
+ However, at test time, the resulting models are used to generate an entire sequence by predicting one word at a time, and by feeding the generated word back as input at the next time step.
87
+ This process is very brittle because the model was trained on a different distribution of inputs, namely, words drawn from the data distribution, as opposed to words drawn from the model distribution. As a result the errors made along the way will quickly accumulate.
88
+ We refer to this discrepancy as \textit{exposure bias} which occurs when a model is only exposed to the training data distribution, instead of its own predictions.
89
+ Second, the loss function used to train these models is at the word level. A popular choice is the cross-entropy loss used to maximize the probability of the next correct word. However, the performance of these models is typically evaluated using discrete metrics. One such metric is called BLEU~\citep{bleu} for instance, which measures the n-gram overlap between the model generation and the reference text. Training these models to directly optimize metrics like BLEU is hard because a) these are not differentiable \citep{rosti2011}, and b) combinatorial optimization is required to determine which sub-string maximizes them given some context. Prior attempts~\citep{mcallister2010,he12} at optimizing test metrics were restricted to linear models, or required a large number of samples to work well~\citep{auli2014}.
90
+
91
+ This paper proposes a novel training algorithm which results in improved text generation compared to standard models. The algorithm addresses the two issues discussed above as follows. First, while training the generative model we avoid the exposure bias by using model predictions at training time. Second, we directly optimize for our final evaluation metric. Our proposed methodology borrows ideas from the reinforcement learning literature~\citep{sutton-rl}. In particular, we build on the REINFORCE algorithm proposed by~\citet{reinforce}, to achieve the above two objectives. While sampling from the model during training is quite a natural step for the REINFORCE algorithm, optimizing directly for any test metric can also be achieved by it. REINFORCE side steps the issues associated with the discrete nature of the optimization by not requiring rewards (or losses) to be differentiable.
92
+ While REINFORCE appears to be well suited to tackle the text generation problem, it suffers from a significant issue.
93
+ The problem setting of text generation has a very large action space which makes it extremely difficult to learn with an initial random policy.
94
+ Specifically, the search space for text generation is of size $\mathcal{O}(\mathcal{W}^T)$, where $\mathcal{W}$ is the number of words in the vocabulary (typically around $10^4$ or more) and $T$ is the length of the sentence (typically around $10$ to $30$).
95
+
96
+ Towards that end, we introduce Mixed Incremental Cross-Entropy Reinforce (MIXER), which is our first major contribution of this work. MIXER is an easy-to-implement recipe to make REINFORCE work well for text generation applications. It is based on two key ideas:
97
+ incremental learning and the use of a hybrid loss function which
98
+ combines both REINFORCE and cross-entropy (see Sec.~\ref{model-mixer} for details). Both ingredients are essential to training with large action spaces.
99
+ In MIXER, the model starts from the optimal policy given by cross-entropy training (as opposed to a random one), from which it then slowly deviates,
100
+ in order to make use of its own predictions, as is done at test time.
101
+
102
+ Our second contribution is a thorough empirical evaluation on three
103
+ different tasks, namely, Text Summarization, Machine Translation and Image Captioning.
104
+ We compare against several strong baselines, including, RNNs trained with cross-entropy and Data as Demonstrator (DAD) \citep{sbengio-nips2015, dad}.
105
+ We also compare MIXER with another simple yet novel model that we propose in this paper. We call it the End-to-End BackProp model (see Sec.~\ref{model-e2e} for details).
106
+ Our results show that MIXER with a simple greedy search achieves much better accuracy compared to the baselines on all the three tasks. In addition we show that MIXER with greedy search is even more accurate than the cross entropy model augmented with beam search at inference time as a post-processing step. This is particularly remarkable because MIXER with greedy search is at least $10$ times faster than the cross entropy model with a beam of size $10$. Lastly, we note that MIXER and beam search are complementary to each other and can be combined to further improve performance, although the extent of the improvement is task dependent.~\footnote{Code available at: \url{https://github.com/facebookresearch/MIXER}}
107
+
108
+ \section{Related Work}
109
+ Sequence models are typically trained to predict the next
110
+ word using the cross-entropy loss. At test time, it is common to use beam search to
111
+ explore multiple alternative paths~\citep{sutskever2014,bahdanau-iclr2015,rush-2015}.
112
+ While this improves generation by typically one or two BLEU points~\citep{bleu},
113
+ it makes the generation at least $k$ times slower, where $k$ is the number of active
114
+ paths in the beam (see Sec.~\ref{model-xent} for more details).
115
+
116
+ The idea of improving generation by letting the model use its own predictions at training
117
+ time (the key proposal of this work) was first advocated by~\citet{searn}. In their seminal
118
+ work, the authors first noticed that structured prediction problems
119
+ can be cast as a particular instance of reinforcement learning. They then
120
+ proposed SEARN, an algorithm to learn such structured
121
+ prediction tasks. The basic idea is to let the model use its own
122
+ predictions at training time to produce a sequence of actions (e.g.,
123
+ the choice of the next word). Then, a search algorithm is
124
+ run to determine the optimal action at each time step, and a
125
+ classifier (\aka policy) is trained to predict that action. A similar idea was later
126
+ proposed by~\citet{dagger} in an imitation learning framework.
127
+ Unfortunately, for text generation it is generally intractable to compute
128
+ an oracle of the optimal target word given the words predicted so far.
129
+ The oracle issue was later addressed by an algorithm called
130
+ Data As Demonstrator (DAD)~\citep{dad} and applied for text generation by
131
+ \cite{sbengio-nips2015}, whereby the target action at step $k$ is the
132
+ $k$-th action taken by the optimal policy (ground truth sequence) regardless of which
133
+ input is fed to the system, whether it is ground truth, or the model's
134
+ prediction. While DAD usually improves generation, it seems unsatisfactory to force
135
+ the model to predict a certain word regardless of the preceding words
136
+ (see sec.~\ref{model-dad} for more details).
137
+
138
+ Finally, REINFORCE has already been used for other applications, such as in
139
+ computer vision~\citep{vmnih-nips2014,xu-icml2015,ba_iclr15}, and for speech recognition~\cite{graves_icml14}.
140
+ While these works simply pre-trained with the cross-entropy loss, we found the use of a mixed loss and a more gentle incremental learning schedule to be important for all the tasks we considered. \section{Models} \label{model}
141
+ \begin{table}[t]
142
+ \footnotesize
143
+ \caption{Text generation models can be described across three dimensions: whether they suffer from exposure bias,
144
+ whether they are trained in an end-to-end manner using back-propagation,
145
+ and whether they are trained to predict one word ahead or the whole sequence.}
146
+ \begin{tabular}{l || c | c | c |c}
147
+ \multicolumn{1}{l||}{\em PROPERTY} &
148
+ \multicolumn{1}{l|}{XENT} &
149
+ \multicolumn{1}{l|}{DAD} & \multicolumn{1}{c|}{E2E} & \multicolumn{1}{c}{MIXER}\\
150
+ \hline
151
+ \hline
152
+ {\em avoids exposure bias} & No & Yes & Yes & Yes \\
153
+ \hline
154
+ {\em end-to-end} & No & No & Yes & Yes \\
155
+ \hline
156
+ {\em sequence level} & No & No & No & Yes \\
157
+ \end{tabular}
158
+ \label{tab:model_comparison}
159
+ \end{table}
160
+
161
+ The learning algorithms we describe in the following sections are agnostic to
162
+ the choice of the underlying model, as long as it is parametric.
163
+ In this work, we focus on Recurrent Neural Networks (RNNs) as they are a popular choice for text generation. In particular, we use standard Elman RNNs~\citep{elman1990} and LSTMs~\citep{lstm}. For the sake of simplicity, but without loss of generality, we discuss Elman RNNs next. This is a parametric
164
+ model that at each time step $t$ takes as input a word $w_t \in \mathcal{W}$, together
165
+ with an internal representation $\bh_t$. $\mathcal{W}$ is the vocabulary of input words.
166
+ This internal representation $\bh_t$ is a real-valued vector which encodes the history of
167
+ words the model has seen so far.
168
+ Optionally, the RNN can also take as input an additional context vector $\bc_t$, which
169
+ encodes the context to be used while generating the output.
170
+ In our experiments $\bc_t$ is computed using an attentive decoder
171
+ inspired by \cite{bahdanau-iclr2015} and \citet{rush-2015}, the details of which
172
+ are given in Section~\ref{sup-material:encoder} of the supplementary material.
173
+ The RNN learns a recursive function to compute $\bh_t$ and
174
+ outputs the distribution over the next word:
175
+ \begin{align}
176
+ \bh_{t + 1} &= \phi_\theta(w_t, \bh_t, \bc_t), \\
177
+ w_{t+1} &\sim p_{\theta}(w | w_t, \bh_{t+1}) = p_{\theta}(w | w_t, \phi_\theta(w_t, \bh_t, \bc_t)).
178
+ \end{align}
179
+ The parametric expression for $p_\theta$ and $\phi_\theta$ depends
180
+ on the type of RNN. For Elman RNNs we have:
181
+ \begin{align}
182
+ \label{eq:elman-rnn}
183
+ \bh_{t + 1} &= \sigma(M_i \bone(w_t) + M_h \bh_t + M_c \bc_t), \\
184
+ \bo_{t+1} &= M_o \bh_{t+1}, \label{eq:softmax_input} \\
185
+ w_{t+1} &\sim \mbox{softmax}(\bo_{t+1}),
186
+ \label{eq:xent-distr}
187
+ \end{align}
188
+ where the parameters of the model $\theta$ are the set of matrices $\{M_o, M_i, M_h, M_c\}$
189
+ and also the additional parameters used to compute $\bc_t$. $\mbox{Softmax}(\bx)$ is a vector whose components are $e^{x_j} / \sum_k{e^{x_k}}$, and $\bone(i)$ is an indicator vector with only the $i$-th component set to $1$ and the rest to $0$. We assume the first word of the sequence is a special token indicating the beginning of a sequence, denoted by $w_1 = \varnothing$. All entries of the first hidden state $\bh_1$ are set to a constant value.
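+ For concreteness, one step of this Elman RNN can be sketched in NumPy as follows (the matrix names follow the equations above; the shapes and the sampling helper are illustrative):
+ \begin{verbatim}
+ import numpy as np
+
+ def softmax(x):
+     e = np.exp(x - x.max())
+     return e / e.sum()
+
+ def elman_step(w_t, h_t, c_t, M_i, M_h, M_c, M_o, rng=np.random):
+     x = np.zeros(M_i.shape[1])            # 1(w_t): one-hot input word
+     x[w_t] = 1.0
+     pre = M_i @ x + M_h @ h_t + M_c @ c_t
+     h_next = 1.0 / (1.0 + np.exp(-pre))   # sigma(.)
+     o_next = M_o @ h_next                 # input to the softmax
+     p = softmax(o_next)
+     w_next = rng.choice(len(p), p=p)      # w_{t+1} ~ softmax(o_{t+1})
+     return w_next, h_next, p
+ \end{verbatim}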
190
+
191
+ Next, we are going to introduce both baselines and the model we propose. As we describe these models, it is useful to keep in mind the key characteristics of a text generation system, as outlined in Table~\ref{tab:model_comparison}. There are three dimensions which are important when training a model for text generation: the exposure bias which can adversely affect generation at test time, the ability to fully back-propagate gradients (including with respect to the chosen inputs at each time step), and a loss operating at the sequence level.
192
+ We will start discussing models that do not possess any of these desirable features, and then move towards models that better satisfy our requirements. The last model we propose, dubbed MIXER, has all the desiderata.
193
+
194
+ \subsection{Word-Level Training}
195
+ We now review a collection of methodologies used for training text generation models which
196
+ optimize the prediction of only one word ahead of time.
197
+ We start with the simplest and the most popular method which optimizes the cross-entropy
198
+ loss at every time step. We then discuss a recently proposed modification to it
199
+ which explicitly uses the model predictions during training.
200
+ We finish by proposing a simple yet novel baseline which uses its own model predictions during
201
+ training and also has the ability to back propagate the gradients through the entire sequence.
202
+ While these extensions tend to make generation more robust,
203
+ they still lack explicit supervision at the sequence level.
204
+
205
+ \subsubsection{Cross Entropy Training (XENT)} \label{model-xent}
206
+ \begin{figure}[!t]
207
+ \includegraphics[width=0.65\linewidth]{xent_training.pdf}\\
208
+ \includegraphics[width=0.65\linewidth]{xent_generation.pdf}
209
+ \caption{RNN training using XENT (top), and how it is used at test time for generation (bottom). The RNN is unfolded for three time steps in this example. The red oval is a module computing a loss, while the rectangles represent the computation done by the RNN at one step. At the first step, all inputs are given. In the remaining steps, the input words are clamped to ground truth at training time, while they are clamped to model predictions (denoted by $w^g_t$) at test time. Predictions are produced by either taking the argmax or by sampling from the distribution over words.
210
+ }
211
+ \label{fig:xent}
212
+ \end{figure}
213
+ Cross-entropy loss (XENT) maximizes the probability of the observed sequence according to the model.
214
+ If the target sequence is $[w_1, w_2, \dots, w_T]$, then XENT training involves minimizing:
215
+ \begin{align}
216
+ L = - \log p(w_1, \ldots, w_T) = - \log \prod_{t=1}^T p(w_t | w_1, \ldots, w_{t-1}) = - \sum_{t=1}^T \log p(w_t | w_1, \ldots, w_{t-1}). \label{eq:xenty}
217
+ \end{align}
218
+ When using an RNN, each term $p(w_t | w_1, \ldots, w_{t-1})$ is modeled as a parametric function as given in Equation~\eqref{eq:xent-distr}. This loss function trains the model to be good at greedily predicting the next word at each time step without considering the whole sequence. Training proceeds by truncated back-propagation through time~\citep{bptt} with gradient clipping~\citep{mikolov-2010}.
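+ In other words, given the per-step distributions produced by the model, the loss above is just the negative log-likelihood of the reference words (a NumPy sketch, names ours):
+ \begin{verbatim}
+ import numpy as np
+
+ def xent_loss(step_distributions, target_words):
+     # step_distributions[t]: model distribution over the t-th word
+     # target_words[t]: index of the t-th reference word
+     return -sum(np.log(p[w])
+                 for p, w in zip(step_distributions, target_words))
+ \end{verbatim}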
219
+
220
+ Once trained, one can use the model to generate an entire sequence as follows. Let $w^g_t$ denote the word generated by the model at the $t$-th time step. Then the next word is generated by:
221
+ \begin{align}
222
+ \label{eq:greedy_gen}
223
+ w^g_{t + 1} = \argmax_w p_{\theta}(w | w^g_t, \bh_{t+1}).
224
+ \end{align}
225
+ Notice that the model is trained
226
+ to maximize $p_{\theta}(w | w_t, \bh_{t+1})$, where $w_t$ is the word in the ground truth sequence. However, during generation the model is used as
227
+ $p_{\theta}(w | w^g_t, \bh_{t+1})$. In other words, during training the model is only exposed
228
+ to the ground truth words. However, at test time the model has only
229
+ access to its own predictions, which may not be correct. As a result, during generation the model can potentially deviate quite far from the actual sequence to be generated. Figure~\ref{fig:xent} illustrates this discrepancy.
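+ Greedy generation as in Equation~\eqref{eq:greedy_gen} can be sketched as follows (\texttt{step} is any function returning the distribution over the next word and the new hidden state; names are ours):
+ \begin{verbatim}
+ import numpy as np
+
+ def greedy_generate(step, h0, c, start_token, eos_token, max_len=30):
+     w, h, out = start_token, h0, []
+     for _ in range(max_len):
+         p, h = step(w, h, c)       # p(. | w^g_t, h_{t+1})
+         w = int(np.argmax(p))      # w^g_{t+1} = argmax_w p(w | ...)
+         if w == eos_token:
+             break
+         out.append(w)              # the prediction is fed back as input
+     return out
+ \end{verbatim}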
230
+
231
+ The generation described by Eq.~\eqref{eq:greedy_gen} is
232
+ a greedy left-to-right process which does not necessarily produce
233
+ the most likely sequence according to the model, because:
234
+ \begin{align*}
235
+ \prod_{t=1}^T ~\max_{w_{t+1}}~p_{\theta}(w_{t+1} | w^g_t, \bh_{t+1}) \leq
236
+ \max_{w_1, \dots, w_T}~\prod_{t=1}^Tp_{\theta}(w_{t+1} | w^g_t, \bh_{t+1})
237
+ \end{align*}
238
+ The most likely sequence $[w_1, w_2, \dots, w_T]$ might contain a word $w_t$ which is sub-optimal at an intermediate time-step $t$. This phenomenon is commonly known as a {\it search error}.
239
+ One popular way to reduce the effect of search error is to pursue not only one but $k$ next word
240
+ candidates at each point. While still approximate, this strategy can recover
241
+ higher scoring sequences that are often also better in terms of our final evaluation metric.
242
+ This process is commonly known as {\it Beam Search}. The downside of using beam search is that it significantly slows down
243
+ the generation process. The time complexity
244
+ grows linearly in the number of beams $k$, because we need to perform
245
+ $k$ forward passes for our network, which is the most time intensive operation.
246
+ The details of the
247
+ Beam Search algorithm are described in Section~\ref{sup-material:beam_search}.
248
+
249
+
250
+
251
+
252
+ \subsubsection{Data As Demonstrator (DAD)} \label{model-dad}
253
+ \begin{figure}[!t]
254
+ \begin{center}
255
+ \includegraphics[width=0.7\linewidth]{dad.pdf}
256
+ \end{center}
257
+ \caption{Illustration of DAD~\citep{sbengio-nips2015,
258
+ dad}. Training
259
+ proceeds similar to XENT, except that at each time step we choose with
260
+ a certain probability whether to take the previous model prediction or
261
+ the ground truth word. Notice how a) gradients are not
262
+ back-propagated through the eventual model predictions $w^g_t$, and
263
+ b) the XENT loss always uses as target the next word in the reference
264
+ sequence, even when the input is $w^g_t$.}
265
+ \label{fig:dad}
266
+ \end{figure}
267
+ Conventional training with XENT suffers from exposure bias since
268
+ training uses ground truth words as opposed to model predictions.
269
+ DAD, proposed in~\citep{dad} and also used in~\citep{sbengio-nips2015} for sequence generation, addresses this issue by mixing the ground truth training data with model predictions.
270
+ At each time step and with a certain probability, DAD takes as input either the prediction from the model at the previous time step or the ground truth data. \citet{sbengio-nips2015} proposed different
271
+ annealing schedules for the probability of choosing the ground truth word. The annealing schedules are such that at the beginning, the algorithm always chooses the ground truth words. However, as the training progresses the model predictions are selected more often.
272
+ This has the effect of making the model somewhat more aware of how it will be used at test time. Figure~\ref{fig:dad} illustrates the algorithm.
273
+
274
+ A major limitation of DAD is that at every time step the target labels are always selected from the ground truth data, regardless of how the input was chosen. As a result, the targets may not be aligned with the generated sequence, forcing the model to predict a potentially incorrect sequence.
275
+ For instance, if the ground truth sequence is ``I took a long
276
+ walk" and the model has so far predicted ``I took a walk", DAD will force the model to predict the word ``walk" a second time.
277
+ Finally, gradients are not back-propagated through the samples drawn by the model and the XENT loss is still at the word level.
278
+ It is not well understood how these problems affect generation.
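+ For clarity, the per-step input choice made by DAD can be sketched as follows; note that the target is always the next ground-truth word, regardless of this choice (a sketch, names ours):
+ \begin{verbatim}
+ import random
+
+ def choose_input(ground_truth_prev, model_prev, p_truth):
+     # With probability p_truth feed the ground-truth word, otherwise
+     # feed the model's own previous prediction; p_truth is annealed
+     # from 1 towards smaller values as training progresses.
+     return ground_truth_prev if random.random() < p_truth else model_prev
+ \end{verbatim}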
279
+
280
+ \subsubsection{End-to-End BackProp (E2E)}
281
+ \label{model-e2e}
282
+ The novel E2E algorithm is perhaps the most natural and na\"{\i}ve approach to approximating sequence level training, and it can also be interpreted as a computationally efficient approximation to beam search.
283
+ The key idea is that at time step $t + 1$ we propagate as input the top $k$ words predicted at the previous time step instead of the ground truth word.
284
+ Specifically, we take the output distribution over words from the previous time step $t$, and pass it through a $k$-max layer.
285
+ This layer zeros all but the $k$ largest values and re-normalizes
286
+ them to sum to one. We thus have:
287
+ \begin{equation}
288
+ \{i_{t+1,j}, v_{t+1,j}\}_{j=1, \dots, k} = \mbox{k-max } p_\theta(w_{t+1} | w_t, h_t), \label{eq:e2e}
289
+ \end{equation}
290
+ where $i_{t+1,j}$ are indexes of the words with $k$ largest probabilities and $v_{t+1,j}$ are their corresponding scores.
291
+ At time step $t+1$, we take the $k$ largest-scoring previous words as input, whose
292
+ contributions are weighted by their scores $v$.
293
+ Smoothing the input this way makes the whole process
294
+ differentiable and trainable using standard back-propagation.
295
+ Compared to beam search, this can be interpreted as fusing the $k$ possible next
296
+ hypotheses together into a single path, as illustrated in Figure~\ref{fig:e2e}.
297
+ In practice we also employ a schedule, whereby we use only the ground truth words
298
+ at the beginning and gradually let the model use its own top-$k$ predictions as training proceeds.
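+ The $k$-max re-normalization of Equation~\eqref{eq:e2e} can be sketched in NumPy as follows (a sketch, names ours):
+ \begin{verbatim}
+ import numpy as np
+
+ def k_max_renormalize(p, k):
+     idx = np.argpartition(p, -k)[-k:]   # indices of the k largest entries
+     out = np.zeros_like(p)
+     out[idx] = p[idx]                   # zero all but the k largest values
+     return out / out.sum()              # re-normalize to sum to one
+ \end{verbatim}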
299
+
300
+
301
+ While this algorithm is a simple way to expose the model to its own predictions, the loss function optimized is still XENT at each time step.
302
+ There is no explicit supervision at the sequence level while training the model.
303
+ \begin{figure}[!t]
304
+ \begin{center}
305
+ \includegraphics[width=0.7\linewidth]{e2e.pdf}
306
+ \end{center}
307
+ \caption{Illustration of the End-to-End BackProp method. The first
308
+ steps of the unrolled sequence (here just the first step) are exactly the same
309
+ as in a regular RNN trained with cross-entropy. However, in the remaining
310
+ steps the input to each module is a sparse
311
+ vector whose non-zero entries are the $k$ largest probabilities
312
+ of the distribution predicted at the previous time step. Errors are
313
+ back-propagated through these inputs as well.}
314
+ \label{fig:e2e}
315
+ \end{figure}
316
+
317
+ \subsection{Sequence Level Training}
318
+ We now introduce a novel algorithm for sequence level training, which we call Mixed Incremental Cross-Entropy Reinforce (MIXER). The proposed method avoids the exposure bias problem,
319
+ and also directly optimizes for the final evaluation metric.
320
+ Since MIXER is an extension of the REINFORCE algorithm, we first describe REINFORCE
321
+ from the perspective of sequence generation.
322
+
323
+ \subsubsection{REINFORCE} \label{model-reinforce}
324
+ In order to apply the REINFORCE algorithm~\citep{reinforce, zaremba-arxiv2015} to the problem of sequence generation, we cast our problem in the reinforcement learning (RL) framework~\citep{sutton-rl}. Our generative model (the RNN) can be viewed as an {\em agent}, which interacts with the external environment (the words and the context vector it sees as input at every time step). The parameters of this agent define a {\em policy}, whose execution results in the agent picking an {\em action}. In the sequence generation setting, an action refers to predicting the next word in the sequence at each time step.
325
+ After taking an action the agent updates its internal state (the hidden units of RNN). Once the agent has reached the end of a sequence, it observes a {\em reward}. We can choose any reward function. Here, we use BLEU~\citep{bleu} and ROUGE-2~\citep{rouge} since these are the metrics we use at test time. BLEU is essentially a geometric mean over n-gram precision scores as well as a brevity penalty~\citep{liang2006}; in this work, we consider up to $4$-grams. ROUGE-2 is instead recall over bi-grams.
326
+ Like in {\em imitation learning}, we have a training set of optimal sequences of actions.
327
+ During training we choose actions according to the current policy and only observe a reward at the end of the sequence (or after maximum sequence length), by comparing the sequence of actions from the current policy against the optimal action sequence.
328
+ The goal of training is to find the parameters of the agent that maximize the expected reward.
329
+ We define our loss as the negative expected reward:
330
+ \begin{align}
331
+ L_{\theta} = - \sum_{w^g_1, \dots, w^g_T} p_\theta(w^g_1, \dots,
332
+ w^g_T) r(w^g_1, \dots, w^g_T) = -\mathbb{E}_{[w_1^g, \dots w^g_T] \sim p_\theta} r(w^g_1, \dots, w^g_T), \label{eq:reinforce-loss}
333
+ \end{align}
334
+ where $w^g_n$ is the word chosen by our model at the $n$-th time step, and $r$ is the reward associated with the generated sequence.
335
+ In practice, we approximate this expectation with a single sample
336
+ from the distribution of actions implemented by the RNN (right hand side of the equation above and Figure~\ref{fig:plan} of Supplementary Material).
337
+ We refer the reader to prior work~\citep{zaremba-arxiv2015,reinforce} for the full derivation of the gradients. Here, we directly report the partial derivatives and their interpretation. The derivatives w.r.t. parameters are:
338
+ \begin{equation}
339
+ \frac{\partial L_{\theta}}{\partial \theta} = \sum_t \frac{\partial L_{\theta}}{\partial \bo_t}
340
+ \frac{\partial \bo_t}{\partial \theta} \label{eq:reinf-deriv1}
341
+ \end{equation}
342
+ where $\bo_t$ is the input to the softmax.
343
+ The gradient of the loss $L_{\theta}$ with respect to $\bo_t$ is given by:
344
+ \begin{equation}
345
+ \frac{\partial L_{\theta}}{\partial \bo_t} = \left( r(w^g_1, \dots, w^g_T) - \bar{r}_{t+1} \right) \left( p_\theta(w_{t+1} | w^g_{t}, \bh_{t+1}, \bc_t) - \bone(w^g_{t+1}) \right),
346
+ \label{eq:reinf-deriv2}
347
+ \end{equation}
348
+ where $\bar{r}_{t+1}$ is the average reward at time $t + 1$.
349
+
350
+ The interpretation of this weight update rule is straightforward. While
351
+ Equation~\ref{eq:reinf-deriv1} is standard back-propagation (a.k.a. chain
352
+ rule), Equation~\ref{eq:reinf-deriv2} is almost exactly the same as the gradient of a multi-class logistic regression classifier. In logistic regression, the gradient is the difference between the prediction and the actual 1-of-N representation of the target word:
353
+ \begin{equation*}
354
+ \frac{\partial L^{\mbox{\small XENT}}_{\theta}}{\partial \bo_t}
355
+ = p_\theta(w_{t+1} | w_{t}, \bh_{t+1}, \bc_t) - \bone(w_{t+1})
356
+ \end{equation*}
357
+ Therefore, Equation~\ref{eq:reinf-deriv2} says that the chosen word $w^g_{t+1}$
358
+ acts like a surrogate target for our output distribution,
359
+ $p_\theta(w_{t+1}|w^g_{t}, \bh_{t+1}, \bc_t)$ at time $t$. REINFORCE first establishes a baseline $\bar{r}_{t+1}$,
360
+ and then either encourages a word choice $w^g_{t+1}$ if $r > \bar{r}_{t+1}$,
361
+ or discourages it if $r < \bar{r}_{t+1}$. The actual derivation suggests that the choice of this average reward $\bar{r}_t$ is useful to decrease the variance of the gradient estimator since in Equation~\ref{eq:reinforce-loss} we use a single sample from the distribution of actions.
362
+
363
+ In our implementation, the baseline $\bar{r}_t$ is estimated by a linear regressor which takes as input the hidden states $\bh_t$ of the RNN. The regressor is an unbiased estimator of future rewards since it only uses past information. The parameters of the regressor are
364
+ trained by minimizing the mean squared loss: $||\bar{r}_t - r||^2$.
365
+ In order to prevent feedback loops, we do not backpropagate this error through
366
+ the recurrent network~\citep{zaremba-arxiv2015}.
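+ The gradient of Equation~\ref{eq:reinf-deriv2} with respect to the softmax input can be sketched as follows (the reward and baseline are scalars computed as described above; names ours):
+ \begin{verbatim}
+ import numpy as np
+
+ def reinforce_grad_wrt_logits(p, sampled_word, reward, baseline):
+     # (r - r_bar) * (p_theta(. | ...) - 1(w^g_{t+1}))
+     one_hot = np.zeros_like(p)
+     one_hot[sampled_word] = 1.0
+     return (reward - baseline) * (p - one_hot)
+ \end{verbatim}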
367
+
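+ A minimal sketch of such a baseline estimator (illustrative names, assuming plain SGD on the squared error) is:
+ \begin{verbatim}
+ import numpy as np
+
+ class LinearBaseline:
+     # Linear regressor h_t -> r_bar_t; h_t is treated as a constant,
+     # so no gradient is propagated back into the recurrent network.
+     def __init__(self, hidden_size, lr=0.01):
+         self.w = np.zeros(hidden_size)
+         self.b = 0.0
+         self.lr = lr
+
+     def predict(self, h):
+         return float(self.w @ h + self.b)
+
+     def update(self, h, reward):
+         # one SGD step on the squared error (r_bar - r)^2
+         err = self.predict(h) - reward
+         self.w -= self.lr * 2.0 * err * h
+         self.b -= self.lr * 2.0 * err
+ \end{verbatim}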
368
+ REINFORCE is an elegant algorithm to train at the sequence level using {\em any} user-defined reward. In this work we use BLEU and ROUGE-2 as rewards; however, one could just as easily use any other metric.
369
+ As presented, one major drawback of the algorithm is that it assumes a random
370
+ policy to start with. This assumption can make learning in large action spaces very challenging.
371
+ Unfortunately, text generation is exactly such a setting: the cardinality of the action set is on the order of $10^4$ (the number of words in the vocabulary).
372
+ This leads to a very high branching factor, which makes it extremely hard for a random policy to improve in any reasonable amount of time.
373
+ In the next section we describe the MIXER algorithm, which addresses these issues and better targets
374
+ text generation applications.
375
+
376
+ \subsubsection{Mixed Incremental Cross-Entropy Reinforce (MIXER)} \label{model-mixer}
377
+ The MIXER algorithm borrows ideas both from DAGGER~\citep{dagger} and
378
+ DAD~\citep{dad, sbengio-nips2015} and modifies REINFORCE appropriately.
379
+ The first key idea is to change the initial policy of REINFORCE to make sure
380
+ the model can effectively deal with the large action space of text generation.
381
+ Instead of starting from a poor random policy and training the model to converge
382
+ towards the optimal policy, we do the exact opposite. We start from the optimal
383
+ policy and then slowly deviate from it to let the model explore and make use
384
+ of its own predictions.
385
+ We first train the RNN with the cross-entropy loss for
386
+ $N^{\mbox{\small{XENT}}}$ epochs using the ground truth sequences.
387
+ This ensures that we start off with a much better policy than random
388
+ because now the model can focus on a good part of the search space.
389
+ This can be better understood by comparing the perplexity of a language model
390
+ that is randomly initialized versus one that is trained. Perplexity is a measure of the uncertainty of the prediction and, roughly speaking, it corresponds to the average number of words the model is `hesitating' about when making a prediction. A good language model trained on one of our data sets has a perplexity of $50$, whereas a random model is likely to have
391
+ perplexity close to the size of the vocabulary, which is about $10,000$.
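+ For reference, with $p_\theta$ denoting the model and $w_{<t}$ the preceding words, the perplexity over a sequence of $T$ words is
+ \begin{equation*}
+ \mbox{PPL} = \exp \Big( - \frac{1}{T} \sum_{t=1}^T \log p_\theta(w_t | w_{<t}) \Big),
+ \end{equation*}
+ so a model that assigns uniform probability over a vocabulary of $10,000$ words has perplexity $10,000$.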
392
+ \begin{figure}[!t]
393
+ \begin{center}
394
+ \includegraphics[width=0.75\linewidth]{mixer.pdf}
395
+ \end{center}
396
+ \caption{Illustration of MIXER. In the first $s$ unrolling steps (here $s=1$),
397
+ the network resembles a standard RNN trained by XENT. In the
398
+ remaining steps, the input to each module is a sample from the
399
+ distribution over words produced at the previous time step. Once the
400
+ end of sentence is reached (or the maximum sequence length), a
401
+ reward is computed, e.g., BLEU. REINFORCE is then
402
+ used to back-propagate the gradients through the sequence of
403
+ samplers. We employ an annealing schedule on $s$, starting with
404
+ $s$ equal to the maximum sequence length $T$ and finishing with $s = 1$.
405
+ }
406
+ \label{fig:mixer}
407
+ \end{figure}
408
+
409
+ The second idea is to introduce model predictions during training
410
+ with an annealing schedule in order to gradually teach the model to produce stable sequences.
411
+ Let $T$ be the length of the sequence.
412
+ After the initial $N^{\mbox{\small{XENT}}}$ epochs, we continue training the
413
+ model for $N^{\mbox{\small{XE+R}}}$ epochs, such that, for every sequence
414
+ we use the XENT loss for the first ($T - \Delta$) steps, and the REINFORCE algorithm
415
+ for the remaining $\Delta$ steps.
416
+ In our experiments $\Delta$ is typically set to two or three.
417
+ Next we anneal the number of steps for which we use the XENT loss for every sequence to
418
+ ($T - 2 \Delta$) and repeat the training for another $N^{\mbox{\small{XE+R}}}$ epochs.
419
+ We repeat this process until only REINFORCE is used to train the whole sequence.
420
+ See Algorithm~\ref{alg:mixer} for the pseudo-code.
421
+
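+ For concreteness, the schedule of Algorithm~\ref{alg:mixer} can be sketched as follows (a minimal Python sketch; \texttt{train\_epoch} is an assumed helper that trains one epoch, applying the XENT loss to the first \texttt{xent\_steps} positions of every sequence and REINFORCE to the remaining ones):
+ \begin{verbatim}
+ def mixer_training(T, delta, n_xent, n_xer, train_epoch):
+     for s in range(T, 0, -delta):
+         # s == T: pure XENT warm-up; afterwards REINFORCE is used
+         # on the last T - s steps of every sequence.
+         n_epochs = n_xent if s == T else n_xer
+         for _ in range(n_epochs):
+             train_epoch(xent_steps=s)
+ \end{verbatim}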
422
+
423
+
424
+ We call this algorithm Mixed Incremental Cross-Entropy Reinforce (MIXER)
425
+ because we combine both XENT and REINFORCE, and we use incremental
426
+ learning (a.k.a. curriculum learning).
427
+ The overall algorithm is illustrated in Figure~\ref{fig:mixer}.
428
+ By the end of training, the model can make effective use of its own
429
+ predictions, in line with its use at test time.
430
+
431
+ \begin{algorithm}[t]
432
+ \footnotesize
433
+ \KwData{a set of sequences with their corresponding context.}
434
+ \KwResult{RNN optimized for generation.}
435
+ Initialize RNN at random and set $N^{\mbox{\small{XENT}}}$, $N^{\mbox{\small{XE+R}}}$
436
+ and $\Delta$\;
437
+ \For{$s$ = $T$, $1$, $-\Delta$}{
438
+ \eIf{ $s$ == $T$ }{
439
+ train RNN for $N^{\mbox{\small{XENT}}}$ epochs using XENT only\;
440
+ }{
441
+ train RNN for $N^{\mbox{\small{XE+R}}}$ epochs. Use XENT loss in the
442
+ first $s$ steps, and REINFORCE (sampling from the model) in the remaining $T - s$ steps\;
443
+ }
444
+ }
445
+ \caption{MIXER pseudo-code.}
446
+ \label{alg:mixer}
447
+ \end{algorithm}
448
+ \section{Experiments} \label{sec:experiments}
449
+ In all our experiments, we train conditional RNNs by unfolding them
450
+ up to a certain maximum length.
451
+ We choose this length to cover about $95\%$ of the target sentences in the data sets we consider.
452
+ The remaining sentences are cropped to the chosen maximum length.
453
+ For training, we use stochastic gradient descent
454
+ with mini-batches of size $32$ and we reset the hidden states at the
455
+ beginning of each sequence. Before updating the parameters we
456
+ re-scale the gradients if their norm is above $10$~\citep{mikolov-2010}.
457
+ We search over the values of hyper-parameters, such as the initial learning rate,
458
+ the various scheduling parameters, the number of epochs, \etc, using a held-out validation set.
459
+ We then take the model that performed best on the validation set and compute BLEU or ROUGE
460
+ score on the test set. In the following sections we report results on the test set only.
461
+ Greedy generation is performed by taking the most likely word at each time step.\footnote{Code available at: \url{https://github.com/facebookresearch/MIXER}}
462
+
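+ The gradient re-scaling mentioned above amounts to the following (a minimal numpy sketch, not our training code; here the global gradient norm is assumed):
+ \begin{verbatim}
+ import numpy as np
+
+ def rescale_gradients(grads, max_norm=10.0):
+     # grads: list of numpy arrays holding the parameter gradients
+     total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
+     if total_norm > max_norm:
+         grads = [g * (max_norm / total_norm) for g in grads]
+     return grads
+ \end{verbatim}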
463
+ \subsection{Text Summarization}
464
+ We consider the problem of abstractive summarization where,
465
+ given a piece of ``source'' text, we aim to generate its summary (the ``target'' text)
466
+ such that its meaning is intact.
467
+ The data set we use to train and evaluate our models consists of a
468
+ subset of the Gigaword corpus~\citep{gigaword} as described in~\citet{rush-2015}.
469
+ This is a collection of news articles taken from different sources over the past two decades.
470
+ Our version is organized as a set of example pairs, where each pair is composed of the
471
+ first sentence of a news article (the source sentence) and its corresponding headline (the target sentence).
472
+ We pre-process the data in the same way as in~\citep{rush-2015}, which consists of lower-casing
473
+ and replacing infrequent words with a special token denoted by ``$<$unk$>$''. After
474
+ pre-processing there are $12321$ unique words in the source dictionary and $6828$ words in the target dictionary. The numbers of sample pairs in the training, validation and test sets are $179414$, $22568$,
475
+ and $22259$, respectively. The average length of the target headlines is about $10$ words.
476
+ We considered sequences up to $15$ words to comply with our initial constraint of covering at least
477
+ $95$\% of the data.
478
+
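+ The pre-processing described above can be sketched as follows (a toy illustration; the frequency threshold and function names are assumptions, not the exact pre-processing recipe):
+ \begin{verbatim}
+ from collections import Counter
+
+ def build_vocab_and_encode(sentences, min_count=5, max_len=15):
+     # lower-case, replace infrequent words with "<unk>",
+     # and crop sequences to the chosen maximum length.
+     counts = Counter(w for s in sentences for w in s.lower().split())
+     def encode(s):
+         words = [w if counts[w] >= min_count else "<unk>"
+                  for w in s.lower().split()]
+         return words[:max_len]
+     return [encode(s) for s in sentences]
+ \end{verbatim}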
479
+ Our generative model is a conditional Elman RNN (Equation~\ref{eq:elman-rnn}) with $128$ hidden units,
480
+ where the conditioning vector $\bc_t$ is provided by a convolutional attentive encoder,
481
+ similar to the one described in Section 3.2 of ~\citet{rush-2015} and inspired by
482
+ \cite{bahdanau-iclr2015}. The details of our attentive
483
+ encoder are mentioned in Section~\ref{sup-material:encoder} of the Supplementary Material.
484
+ We also tried LSTMs as the generative model for this task; however, they did not improve performance. We conjecture this is because the target sentences in this data set are rather short.
485
+
486
+ \subsection{Machine Translation}
487
+ For the translation task, our generative model is an LSTM with $256$ hidden units and it uses the same attentive encoder architecture as the one used for summarization.
488
+ We use data from the German-English machine translation track of the
489
+ IWSLT 2014 evaluation campaign \citep{cettolo2014}.
490
+ The corpus consists of sentence-aligned subtitles of TED and TEDx
491
+ talks. We pre-process the training data using the tokenizer of the Moses
492
+ toolkit \citep{koehn2007}, lower-case the text, and remove sentences longer than $50$ words.
493
+ The training data comprises about $153000$ sentences, where the average English
494
+ sentence is $17.5$ words long and the average German sentence is
495
+ $18.5$ words long. In order to retain at least $95\%$ of this data, we unrolled our RNN for $25$ steps.
496
+ Our validation set comprises $6969$ sentence pairs held out from the training data. The test set is a concatenation of dev2010, dev2012, tst2010, tst2011 and tst2012
497
+ which results in $6750$ sentence pairs. The English dictionary has $22822$ words, while the German one has $32009$ words.
498
+ \subsection{Image Captioning}
499
+ For the image captioning task, we use the MSCOCO
500
+ dataset~\citep{mscoco}. We use the entire training set provided by the authors, which consists of around $80$k images. We then take the original validation set (consisting of around $40$k images) and randomly sample (without replacement) $5000$ images for validation and another $5000$ for testing.
501
+ There are $5$ different captions for each image.
502
+ At training time we sample one of
503
+ these captions, while at test time we report the maximum BLEU score across the five captions.
504
+ The context is represented by 1024 features extracted by a Convolutional Neural Network (CNN) trained
505
+ on the Imagenet dataset~\citep{imagenet_cvpr09}; we do not back-propagate through these features. We use an experimental set-up similar to the one described in~\citet{sbengio-nips2015}. The RNN is a single-layer LSTM with
506
+ $512$ hidden units, and the image features are provided to the generative model as the first word in the sequence. We pre-process the captions by lower-casing all words and replacing all words which appear fewer than $3$ times with the special token ``$<$unk$>$''. As a result, the total number of unique words in our dataset is $10012$. Keeping in mind the $95\%$ rule, we unroll the RNN for $15$ steps.
507
+
508
+ \subsection{Results}
509
+ In order to validate MIXER, we compute the BLEU score on the machine translation and image captioning tasks, and ROUGE on the summarization task.
510
+ The input provided to the system is only the context and the beginning-of-sentence token. We apply the same protocol to the baseline methods as well. The scores on the test set are reported in Figure~\ref{fig:gain}. \begin{figure}[!t]
511
+ \centering
512
+ \begin{minipage}[c][][c]{.4\textwidth}
513
+ \centering
514
+ \begin{tabular}{l || l | l | l |l}
515
+ \multicolumn{1}{c||}{\emph{TASK} } &
516
+ \multicolumn{1}{c|}{XENT} &
517
+ \multicolumn{1}{c|}{DAD} & \multicolumn{1}{c|}{E2E} & \multicolumn{1}{c}{MIXER}\\
518
+ \hline
519
+ \hline
520
+ {\em summarization} & 13.01 & 12.18 & 12.78 & \bf{16.22} \\
521
+ \hline
522
+ {\em translation} & 17.74 & 20.12 & 17.77 & {\bf 20.73} \\
523
+ \hline
524
+ {\em image captioning} & 27.8 & 28.16 & 26.42 & \bf{29.16} \\
525
+ \end{tabular}
526
+ \end{minipage}\hfill
527
+ \begin{minipage}[c][][c]{.4\textwidth}
528
+ \centering
529
+ \includegraphics[width=0.8\linewidth]{comparison2.pdf}
530
+ \vspace{-.30cm}
531
+ \caption{Left: BLEU-4 (translation and image captioning) and ROUGE-2 (summarization) scores using greedy generation. Right: Relative gains
532
+ produced by DAD, E2E and MIXER on the three tasks.
533
+ The relative gain is computed as the ratio of the score of a model to the score of the reference XENT model on the same task. The horizontal line indicates the performance of XENT.}
534
+ \label{fig:gain}
535
+ \end{minipage}
536
+ \end{figure}
537
+ \begin{figure}[!t]
538
+ \begin{center}
539
+ \includegraphics[width=.35\linewidth]{summarization_beamsearch.pdf}
540
+ \hspace{-.6cm}
541
+ \includegraphics[width=.35\linewidth]{mt_beamsearch.pdf}
542
+ \hspace{-.6cm}
543
+ \includegraphics[width=.35\linewidth]{ic_beamsearch.pdf}
544
+ \end{center}
545
+ \vspace{-.2cm}
546
+ \caption{Test score (ROUGE for summarization and BLEU for machine translation and image captioning) as a function of the number of hypotheses $k$ in the beam search. Beam search always improves performance, although the amount depends on the task. The dark line shows the performance of MIXER using greedy generation, while the gray line shows MIXER using beam search with $k=10$.}
547
+ \label{fig:beam_search}
548
+ \end{figure}
549
+
550
+ We observe that MIXER produces the best generations and improves generation over XENT by $1$ to $3$ points across all the tasks.
551
+ Unfortunately, the E2E approach did not prove to be very effective: training at the sequence level and directly optimizing for the test-time score yields better generations than turning a sequence of discrete decisions into a differentiable process amenable to standard back-propagation of the error.
552
+ DAD is usually better than XENT, but not as good as MIXER.
553
+
554
+ Overall, these experiments demonstrate the importance of optimizing for the metric used at test time. In summarization, for instance, XENT and MIXER trained with ROUGE achieve poor performance in terms of BLEU (8.16 and 5.80, versus 9.32 for MIXER trained with BLEU); likewise, MIXER trained with BLEU does not achieve as good a ROUGE score as
555
+ a MIXER optimizing ROUGE at training time as well (15.1 versus 16.22, see also Figure~\ref{fig:summarization_bleu_rouge} in Supplementary Material).
556
+
557
+ Next, we experimented with beam search. The results in Figure~\ref{fig:beam_search} suggest that all methods, including MIXER, improve the quality of their generation by using beam search. However, the extent of the improvement is very much task dependent. We observe that the greedy performance of MIXER (i.e., {\em without} beam search) cannot be matched by baselines using beam search in two out of the three tasks. Moreover, MIXER is several times faster since it relies only on greedy search.
558
+
559
+ It is worth mentioning that the REINFORCE baseline did not work for these applications. Exploration from a random policy has little chance of success. We do not report it since we were never able to make it converge within a reasonable amount of time. Using the hybrid XENT-REINFORCE loss {\em without} incremental learning is also insufficient to make training take off from random chance. In order to gain some insight on what kind of schedule works, we report in Table~\ref{tab:scheduling} of Supplementary Material the best values we found after grid search over the hyper-parameters of MIXER.
560
+ Finally, we report some anecdotal examples of MIXER generation in Figure~\ref{fig:generation} of Supplementary Material.
561
+ \section{Conclusions}
562
+
563
+ Our work is motivated by two major deficiencies in training the current generative models for text generation: exposure bias and a loss which does not operate at the sequence level.
564
+ While reinforcement learning can potentially address these issues, it struggles in settings where
565
+ there are very large action spaces, such as in text generation. To address this,
566
+ we propose the MIXER algorithm, which deals with these issues and enables successful training of reinforcement learning models for text generation.
567
+ We achieve this by replacing the initial random policy with the optimal policy of a cross-entropy trained model and by gradually exposing the model more and more to its own predictions in an incremental learning framework.
568
+
569
+
570
+
571
+
572
+
573
+
574
+
575
+
576
+ Our results show that MIXER outperforms three strong baselines for greedy generation and is very competitive with beam search.
577
+ The approach we propose is agnostic to the underlying model or the form of the reward function.
578
+ In future work we would like to design better estimation techniques for the average reward $\bar{r}_t$, because poor estimates can lead to slow convergence of both REINFORCE and MIXER.
579
+ Finally, our training algorithm relies on a single sample; it would be interesting to investigate the effect of more comprehensive search methods at training time.
580
+
581
+
582
+
583
+
584
+
585
+
586
+
587
+
588
+
589
+
590
+
591
+
592
+
593
+
594
+
595
+
596
+
597
+
598
+
599
+ \subsubsection*{Acknowledgments}
600
+ The authors would like to thank David Grangier, Tomas Mikolov, Leon Bottou, Ronan Collobert and Laurens van der Maaten for their insightful comments.
601
+ We also would like to thank Alexander M. Rush for his help in preparing the data set for the summarization task and Sam Gross for providing the image features.
602
+
603
+ \bibliography{iclr2016_conference}
604
+ \bibliographystyle{iclr2016_conference}
605
+ \newpage
606
+ \section{Supplementary Material}
607
+
608
+
609
+ \subsection{Experiments}
610
+
611
+
612
+ \subsubsection{Qualitative Comparison}
613
+ \begin{figure}[h!]
614
+ \tiny{
615
+ \begin{verbatim}
616
+
617
+
618
+ CONTEXT: a chinese government official on sunday dismissed reports that the government was delaying the issuing
619
+ of third generation -lrb- #g -rrb- mobile phone licenses in order to give a developing <unk> system an
620
+ advantage
621
+ GROUND TRUTH: foreign phone operators to get equal access to china 's #g market
622
+ XENT: china dismisses report of #g mobile phone phone
623
+ DAD: china denies <unk> <unk> mobile phone licenses
624
+ E2E: china 's mobile phone licenses delayed
625
+ MIXER: china official dismisses reports of #g mobile licenses
626
+
627
+ CONTEXT: greece risks bankruptcy if it does not take radical extra measures to fix its finances , prime minister
628
+ george papandreou warned on tuesday , saying the country was in a `` wartime situation
629
+ GROUND TRUTH: greece risks bankruptcy without radical action
630
+ XENT: greece warns <unk> measures to <unk> finances
631
+ DAD: greece says no measures to <unk> <unk>
632
+ E2E: greece threatens to <unk> measures to <unk> finances
633
+ MIXER: greece does not take radical measures to <unk> deficit
634
+
635
+ CONTEXT: the indonesian police were close to identify the body parts resulted from the deadly explosion in front
636
+ of the australian embassy by the dna test , police chief general <unk> <unk> said on wednesday
637
+ GROUND TRUTH: indonesian police close to <unk> australian embassy bomber
638
+ XENT: indonesian police close to <unk>
639
+ DAD: indonesian police close to <unk>
640
+ E2E: indonesian police close to monitor deadly australia
641
+ MIXER: indonesian police close to <unk> parts of australian embassy
642
+
643
+ CONTEXT: hundreds of catholic and protestant youths attacked security forces with <unk> bombs in a flashpoint
644
+ area of north belfast late thursday as violence erupted for the second night in a row , police said
645
+ GROUND TRUTH: second night of violence erupts in north belfast
646
+ XENT: urgent hundreds of catholic and <unk> <unk> in <unk>
647
+ DAD: hundreds of belfast <unk> <unk> in n. belfast
648
+ E2E: hundreds of catholic protestant , <unk> clash with <unk>
649
+ MIXER: hundreds of catholic <unk> attacked in north belfast
650
+
651
+ CONTEXT: uganda 's lord 's resistance army -lrb- lra -rrb- rebel leader joseph <unk> is planning to join his
652
+ commanders in the ceasefire area ahead of talks with the government , ugandan army has said
653
+ GROUND TRUTH: rebel leader to move to ceasefire area
654
+ XENT: uganda 's <unk> rebel leader to join ceasefire
655
+ DAD: ugandan rebel leader to join ceasefire talks
656
+ E2E: ugandan rebels <unk> rebel leader
657
+ MIXER: ugandan rebels to join ceasefire in <unk>
658
+
659
+ CONTEXT: a russian veterinary official reported a fourth outbreak of dead domestic poultry in a suburban
660
+ moscow district sunday as experts tightened <unk> following confirmation of the presence of the
661
+ deadly h#n# bird flu strain
662
+ GROUND TRUTH: tests confirm h#n# bird flu strain in # <unk> moscow <unk>
663
+ XENT: russian official reports fourth flu in <unk>
664
+ DAD: bird flu outbreak in central china
665
+ E2E: russian official official says outbreak outbreak outbreak in <unk>
666
+ MIXER: russian official reports fourth bird flu
667
+
668
+ CONTEXT: a jewish human rights group announced monday that it will offer <unk> a dlrs ##,### reward for
669
+ information that helps them track down those suspected of participating in nazi atrocities during
670
+ world war ii
671
+ GROUND TRUTH: jewish human rights group offers reward for information on nazi suspects in lithuania
672
+ XENT: jewish rights group announces <unk> to reward for war during world war
673
+ DAD: rights group announces <unk> dlrs dlrs dlrs reward
674
+ E2E: jewish rights group offers reward for <unk>
675
+ MIXER: jewish human rights group to offer reward for <unk>
676
+
677
+ CONTEXT: a senior u.s. envoy reassured australia 's opposition labor party on saturday that no decision
678
+ had been made to take military action against iraq and so no military assistance had been sought
679
+ from australia
680
+ GROUND TRUTH: u.s. envoy meets opposition labor party to discuss iraq
681
+ XENT: australian opposition party makes progress on military action against iraq
682
+ DAD: australian opposition party says no military action against iraq
683
+ E2E: us envoy says no decision to take australia 's labor
684
+ MIXER: u.s. envoy says no decision to military action against iraq
685
+
686
+ CONTEXT: republican u.s. presidential candidate rudy giuliani met privately wednesday with iraqi president
687
+ jalal talabani and indicated that he would keep a u.s. presence in iraq for as long as necessary ,
688
+ campaign aides said
689
+ GROUND TRUTH: giuliani meets with iraqi president , discusses war
690
+ XENT: <unk> meets with president of iraqi president
691
+ DAD: republican presidential candidate meets iraqi president
692
+ E2E: u.s. president meets with iraqi president
693
+ MIXER: u.s. presidential candidate giuliani meets with iraqi president
694
+ \end{verbatim}
695
+ }
696
+ \caption{Examples of greedy generations after conditioning on sentences from the summarization test set. The ``$<$unk$>$'' token is produced by our tokenizer and replaces rare words.}
697
+ \label{fig:generation}
698
+ \end{figure}
699
+
700
+
701
+ \subsubsection{Hyperparameters}
702
+ \begin{table}[!h]
703
+ \caption{Best scheduling parameters found by hyper-parameter search of MIXER.}
704
+ \begin{tabular}{l || l | l | l}
705
+ \multicolumn{1}{c||}{\emph{TASK} } &
706
+ \multicolumn{1}{c|}{$N^{\mbox{\small XENT}}$} &
707
+ \multicolumn{1}{c|}{$N^{\mbox{\small XE+R}}$} & \multicolumn{1}{c}{$\Delta$} \\
708
+ \hline
709
+ \hline
710
+ {\em summarization} & 20 & 5 & 2 \\
711
+ \hline
712
+ {\em machine translation} & 25 & 5 & 3 \\
713
+ \hline
714
+ {\em image captioning} & 20 & 5 & 2 \\
715
+ \end{tabular}
716
+ \label{tab:scheduling}
717
+ \end{table}
718
+
719
+ \subsubsection{Relative Gains}
720
+
721
+
722
+ \begin{figure}[!h]
723
+ \begin{center}
724
+ \includegraphics[width=0.6\linewidth]{summarization_bleu_rouge.pdf}
725
+ \end{center}
726
+ \caption{Relative gains on summarization with respect to the XENT baseline. Left: relative BLEU score. Right: relative ROUGE-2.
727
+ The models are: DAD, E2E, MIXER trained for the objective used at test time (method proposed in this paper), and MIXER trained with a different metric.
728
+ When evaluating for BLEU, the last column on the left reports the evaluation of MIXER trained using ROUGE-2.
729
+ When evaluating for ROUGE-2, the last column on the right reports the evaluation of MIXER trained using BLEU.}
730
+ \label{fig:summarization_bleu_rouge}
731
+ \end{figure}
732
+
733
+
734
+
735
+ \vspace{0.3in}
736
+ \begin{figure}[!h]
737
+ \centering
738
+ \begin{minipage}{0.05\linewidth}
739
+ \includegraphics[width=2.5\linewidth]{time.pdf}\\
740
+ \end{minipage}
741
+ \begin{minipage}{0.46\linewidth}
742
+ \centering
743
+ \scalebox{0.75}{
744
+ \tiny
745
+ \begingroup\makeatletter\def\f@size{1}\check@mathfonts
746
+ \begin{tikzpicture}
747
+ \node [draw, circle] (w) at (0,0) {$w$};
748
+ \node [draw, circle] (w0) at (-0.5,1) {$w_0$};
749
+ \node [draw, circle] (w1) at (0.5,1) {$w_1$};
750
+ \node [draw, circle] (w00) at (-1.5,2) {$w_{00}$};
751
+ \node [draw, circle] (w01) at (-0.5,2) {$w_{01}$};
752
+ \node [draw, circle] (w10) at (0.5,2) {$w_{10}$};
753
+ \node [draw, circle] (w11) at (1.5,2) {$w_{11}$};
754
+ \node [draw, circle] (w000) at (-3.5,3) {$w_{000}$};
755
+ \node [draw, circle] (w001) at (-2.5,3) {$w_{001}$};
756
+ \node [draw, circle] (w010) at (-1.5,3) {$w_{010}$};
757
+ \node [draw, circle] (w011) at (-0.5,3) {$w_{011}$};
758
+ \node [draw, circle] (w100) at (0.5,3) {$w_{100}$};
759
+ \node [draw, circle] (w101) at (1.5,3) {$w_{101}$};
760
+ \node [draw, circle] (w110) at (2.5,3) {$w_{110}$};
761
+ \node [draw, circle] (w111) at (3.5,3) {$w_{111}$};
762
+ \node [draw, circle] (w0000) at (-3.75,4) {$w_{\dots}$};
763
+ \node [draw, circle] (w0001) at (-3.25,4) {$w_{\dots}$};
764
+ \node [draw, circle] (w0010) at (-2.75,4) {$w_{\dots}$};
765
+ \node [draw, circle] (w0011) at (-2.25,4) {$w_{\dots}$};
766
+ \node [draw, circle] (w0100) at (-1.75,4) {$w_{\dots}$};
767
+ \node [draw, circle] (w0101) at (-1.25,4) {$w_{\dots}$};
768
+ \node [draw, circle] (w0110) at (-0.75,4) {$w_{\dots}$};
769
+ \node [draw, circle] (w0111) at (-0.25,4) {$w_{\dots}$};
770
+ \node [draw, circle] (w1000) at (0.25,4) {$w_{\dots}$};
771
+ \node [draw, circle] (w1001) at (0.75,4) {$w_{\dots}$};
772
+ \node [draw, circle] (w1010) at (1.25,4) {$w_{\dots}$};
773
+ \node [draw, circle] (w1011) at (1.75,4) {$w_{\dots}$};
774
+ \node [draw, circle] (w1100) at (2.25,4) {$w_{\dots}$};
775
+ \node [draw, circle] (w1101) at (2.75,4) {$w_{\dots}$};
776
+ \node [draw, circle] (w1110) at (3.25,4) {$w_{\dots}$};
777
+ \node [draw, circle] (w1111) at (3.75,4) {$w_{\dots}$};
778
+ \draw [thick] [->] (w) to (w0);
779
+ \draw [thick] [dashed] [green] [->] (w) to (w1);
780
+ \draw [thick] [dashed] [green] [->] (w0) to (w00);
781
+ \draw [thick] [->] (w0) to (w01);
782
+ \draw [dotted] [->] (w1) to (w10);
783
+ \draw [dotted] [->] (w1) to (w11);
784
+ \draw [dotted] [->] (w00) to (w000);
785
+ \draw [dotted] [->] (w00) to (w001);
786
+ \draw [thick] [dashed] [green] [->] (w01) to (w010);
787
+ \draw [thick] [->] (w01) to (w011);
788
+ \draw [dotted] [->] (w10) to (w100);
789
+ \draw [dotted] [->] (w10) to (w101);
790
+ \draw [dotted] [->] (w11) to (w110);
791
+ \draw [dotted] [->] (w11) to (w111);
792
+ \draw [dotted] [->] (w000) to (w0000);
793
+ \draw [dotted] [->] (w000) to (w0001);
794
+ \draw [dotted] [->] (w001) to (w0010);
795
+ \draw [dotted] [->] (w001) to (w0011);
796
+ \draw [dotted] [->] (w010) to (w0100);
797
+ \draw [dotted] [->] (w010) to (w0101);
798
+ \draw [thick] [->] (w011) to (w0110);
799
+ \draw [thick] [dashed] [green] [->] (w011) to (w0111);
800
+ \draw [dotted] [->] (w100) to (w1000);
801
+ \draw [dotted] [->] (w100) to (w1001);
802
+ \draw [dotted] [->] (w101) to (w1010);
803
+ \draw [dotted] [->] (w101) to (w1011);
804
+ \draw [dotted] [->] (w110) to (w1100);
805
+ \draw [dotted] [->] (w110) to (w1101);
806
+ \draw [dotted] [->] (w111) to (w1110);
807
+ \draw [dotted] [->] (w111) to (w1111);
808
+ \end{tikzpicture}
809
+ \endgroup
810
+ }\\
811
+ Training with exposure bias
812
+ \end{minipage}
813
+ \begin{minipage}{0.46\linewidth}
814
+ \centering
815
+ \scalebox{0.75}{
816
+ \tiny
817
+ \begingroup\makeatletter\def\f@size{1}\check@mathfonts
818
+ \begin{tikzpicture}
819
+ \node [draw, circle] (w) at (0,0) {$w$};
820
+ \node [draw, circle] (w0) at (-0.5,1) {$w_0$};
821
+ \node [draw, circle] (w1) at (0.5,1) {$w_1$};
822
+ \node [draw, circle] (w00) at (-1.5,2) {$w_{00}$};
823
+ \node [draw, circle] (w01) at (-0.5,2) {$w_{01}$};
824
+ \node [draw, circle] (w10) at (0.5,2) {$w_{10}$};
825
+ \node [draw, circle] (w11) at (1.5,2) {$w_{11}$};
826
+ \node [draw, circle] (w000) at (-3.5,3) {$w_{000}$};
827
+ \node [draw, circle] (w001) at (-2.5,3) {$w_{001}$};
828
+ \node [draw, circle] (w010) at (-1.5,3) {$w_{010}$};
829
+ \node [draw, circle] (w011) at (-0.5,3) {$w_{011}$};
830
+ \node [draw, circle] (w100) at (0.5,3) {$w_{100}$};
831
+ \node [draw, circle] (w101) at (1.5,3) {$w_{101}$};
832
+ \node [draw, circle] (w110) at (2.5,3) {$w_{110}$};
833
+ \node [draw, circle] (w111) at (3.5,3) {$w_{111}$};
834
+ \node [draw, circle] (w0000) at (-3.75,4) {$w_{\dots}$};
835
+ \node [draw, circle] (w0001) at (-3.25,4) {$w_{\dots}$};
836
+ \node [draw, circle] (w0010) at (-2.75,4) {$w_{\dots}$};
837
+ \node [draw, circle] (w0011) at (-2.25,4) {$w_{\dots}$};
838
+ \node [draw, circle] (w0100) at (-1.75,4) {$w_{\dots}$};
839
+ \node [draw, circle] (w0101) at (-1.25,4) {$w_{\dots}$};
840
+ \node [draw, circle] (w0110) at (-0.75,4) {$w_{\dots}$};
841
+ \node [draw, circle] (w0111) at (-0.25,4) {$w_{\dots}$};
842
+ \node [draw, circle] (w1000) at (0.25,4) {$w_{\dots}$};
843
+ \node [draw, circle] (w1001) at (0.75,4) {$w_{\dots}$};
844
+ \node [draw, circle] (w1010) at (1.25,4) {$w_{\dots}$};
845
+ \node [draw, circle] (w1011) at (1.75,4) {$w_{\dots}$};
846
+ \node [draw, circle] (w1100) at (2.25,4) {$w_{\dots}$};
847
+ \node [draw, circle] (w1101) at (2.75,4) {$w_{\dots}$};
848
+ \node [draw, circle] (w1110) at (3.25,4) {$w_{\dots}$};
849
+ \node [draw, circle] (w1111) at (3.75,4) {$w_{\dots}$};
850
+
851
+ \draw [thick] [->] (w) to (w0);
852
+ \draw [thick] [blue] [->] (w) to (w1);
853
+ \draw [thick] [dashed] [green] [->] (w0) to (w00);
854
+ \draw [thick] [->] (w0) to (w01);
855
+ \draw [thick] [blue] [->] (w1) to (w10);
856
+ \draw [thick] [dashed] [green] [->] (w1) to (w11);
857
+ \draw [thick] [dashed] [green] [->] (w00) to (w000);
858
+ \draw [thick] [dashed] [green] [->] (w00) to (w001);
859
+ \draw [thick] [dashed] [green] [->] (w01) to (w010);
860
+ \draw [thick] [->] (w01) to (w011);
861
+ \draw [thick] [dashed] [green] [->] (w10) to (w100);
862
+ \draw [thick] [blue] [->] (w10) to (w101);
863
+ \draw [thick] [dashed] [green] [->] (w11) to (w110);
864
+ \node [draw, circle] (w000) at (-3.5,3) {$w_{000}$};
865
+ \node [draw, circle] (w001) at (-2.5,3) {$w_{001}$};
866
+ \node [draw, circle] (w010) at (-1.5,3) {$w_{010}$};
867
+ \node [draw, circle] (w011) at (-0.5,3) {$w_{011}$};
868
+ \node [draw, circle] (w100) at (0.5,3) {$w_{100}$};
869
+ \node [draw, circle] (w101) at (1.5,3) {$w_{101}$};
870
+ \node [draw, circle] (w110) at (2.5,3) {$w_{110}$};
871
+ \node [draw, circle] (w111) at (3.5,3) {$w_{111}$};
872
+ \node [draw, circle] (w0000) at (-3.75,4) {$w_{\dots}$};
873
+ \node [draw, circle] (w0001) at (-3.25,4) {$w_{\dots}$};
874
+ \node [draw, circle] (w0010) at (-2.75,4) {$w_{\dots}$};
875
+ \node [draw, circle] (w0011) at (-2.25,4) {$w_{\dots}$};
876
+ \node [draw, circle] (w0100) at (-1.75,4) {$w_{\dots}$};
877
+ \node [draw, circle] (w0101) at (-1.25,4) {$w_{\dots}$};
878
+ \node [draw, circle] (w0110) at (-0.75,4) {$w_{\dots}$};
879
+ \node [draw, circle] (w0111) at (-0.25,4) {$w_{\dots}$};
880
+ \node [draw, circle] (w1000) at (0.25,4) {$w_{\dots}$};
881
+ \node [draw, circle] (w1001) at (0.75,4) {$w_{\dots}$};
882
+ \node [draw, circle] (w1010) at (1.25,4) {$w_{\dots}$};
883
+ \node [draw, circle] (w1011) at (1.75,4) {$w_{\dots}$};
884
+ \node [draw, circle] (w1100) at (2.25,4) {$w_{\dots}$};
885
+ \node [draw, circle] (w1101) at (2.75,4) {$w_{\dots}$};
886
+ \node [draw, circle] (w1110) at (3.25,4) {$w_{\dots}$};
887
+ \node [draw, circle] (w1111) at (3.75,4) {$w_{\dots}$};
888
+ \draw [thick] [dashed] [green] [->] (w000) to (w0000);
889
+ \draw [thick] [dashed] [green] [->] (w000) to (w0001);
890
+ \draw [thick] [dashed] [green] [->] (w001) to (w0010);
891
+ \draw [thick] [dashed] [green] [->] (w001) to (w0011);
892
+ \draw [thick] [dashed] [green] [->] (w010) to (w0100);
893
+ \draw [thick] [dashed] [green] [->] (w010) to (w0101);
894
+ \draw [thick] [->] (w011) to (w0110);
895
+ \draw [thick] [dashed] [green] [->] (w011) to (w0111);
896
+ \draw [thick] [dashed] [green] [->] (w100) to (w1000);
897
+ \draw [thick] [dashed] [green] [->] (w100) to (w1001);
898
+ \draw [thick] [dashed] [green] [->] (w101) to (w1010);
899
+ \draw [thick] [blue] [->] (w101) to (w1011);
900
+ \draw [thick] [dashed] [green] [->] (w110) to (w1100);
901
+ \draw [thick] [dashed] [green] [->] (w110) to (w1101);
902
+ \draw [thick] [dashed] [green] [->] (w111) to (w1110);
903
+ \draw [thick] [dashed] [green] [->] (w111) to (w1111);
904
+ \end{tikzpicture}
905
+ \endgroup
906
+ }
907
+ Training in expectation (Reinforce)
908
+ \end{minipage}
909
+ \caption{Search space for the toy case of a binary vocabulary and sequences of length 4.
910
+ The trees represent all the $2^4$ possible sequences.
911
+ The solid black line is the ground truth sequence.
912
+ \textbf{(Left)} Greedy training such as XENT optimizes only the probability
913
+ of the next word. The model may consider choices indicated by the green arrows, but it starts off from words taken from the ground truth sequence. The model experiences exposure bias, since it sees only words branching off the ground truth path;
914
+ \textbf{(Right)} REINFORCE and MIXER optimize over all possible sequences, using the predictions made by the model itself.
915
+ In practice, the model samples only a single path indicated by the blue solid line. The model does not suffer from exposure bias; the model is trained as it is tested.}
916
+ \label{fig:plan}
917
+ \end{figure}
918
+
919
+
920
+
921
+
922
+
923
+
924
+ \subsection{The Attentive Encoder}
925
+ \label{sup-material:encoder}
926
+ Here we explain in detail how we generate the conditioning vector $\bc_t$
927
+ for our RNN using the source sentence and the current hidden state $\bh_t$.
928
+ Let us denote by $\bs$ the source sentence which is composed of a sequence
929
+ of $M$ words $\bs = [w_1, \ldots, w_M]$.
930
+ With a slight abuse of notation, let $w_i$ also denote the $d$-dimensional
931
+ learnable embedding of the $i$-th word ($w_i \in \setr^d$). In addition the
932
+ position $i$ of the word $w_i$ is also associated with a learnable embedding $l_i$
933
+ of size $d$ ($l_i \in \setr^d$). Then the full embedding for
934
+ the $i$-th word in the input sentence is given by $a_i = w_i + l_i$.
935
+ In order for the embeddings to capture local context, we associate an
936
+ aggregate embedding $z_i$ to each word in the source sentence.
937
+ In particular for a word in the $i$-th position, its aggregate embedding
938
+ $z_i$ is computed by taking a window of $q$ consecutive words centered
939
+ at position $i$ and averaging the embeddings of all the words in this window. More precisely, the aggregate embedding $z_i$ is given by:
940
+ \begin{equation}
941
+ z_i = \frac{1}{q} \sum_{h = -q/2}^{q/2} a_{i + h}.
942
+ \end{equation}
943
+ In our experiments the width $q$ was set to $5$.
944
+ In order to account for the words at the two boundaries of the input
945
+ sentence we first pad the sequence on both sides with dummy words before
946
+ computing the aggregate vectors $z_i$. Given these aggregate word vectors, we compute
947
+ the context vector $c_t$ (the final output of the encoder) as:
948
+ \begin{equation}
949
+ c_t = \sum_{j=1}^M \alpha_{j,t} w_j,
950
+ \end{equation}
951
+ where the weights $\alpha_{j, t}$ are computed as
952
+ \begin{equation}
953
+ \alpha_{j, t} = \frac{\exp(z_j \cdot \bh_{t})}{\sum_{i=1}^M \exp(z_i \cdot \bh_{t})}.
954
+ \end{equation}
955
+
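+ Putting these equations together, the encoder can be sketched as follows (a minimal numpy sketch with illustrative names; the boundary padding is omitted for brevity, and $\bh_t$ is assumed to have the same dimensionality $d$ as the embeddings):
+ \begin{verbatim}
+ import numpy as np
+
+ def attentive_context(word_emb, pos_emb, h_t, q=5):
+     # word_emb, pos_emb : (M, d) embeddings of the source words/positions
+     # h_t               : (d,) current hidden state of the decoder RNN
+     a = word_emb + pos_emb                       # full embeddings a_i
+     M = a.shape[0]
+     z = np.stack([a[max(0, i - q // 2): i + q // 2 + 1].mean(axis=0)
+                   for i in range(M)])            # aggregate embeddings z_i
+     scores = z @ h_t
+     alpha = np.exp(scores - scores.max())
+     alpha /= alpha.sum()                         # attention weights alpha_{j,t}
+     return alpha @ word_emb                      # context vector c_t
+ \end{verbatim}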
956
+
957
+
958
+ \subsection{Beam Search Algorithm}
959
+ \label{sup-material:beam_search}
960
+ Equation~\ref{eq:greedy_gen} always chooses the highest scoring next word candidate
961
+ at each time step. At test time we can reduce the effect of search error
962
+ by pursuing not only one but $k$ next word candidates at each point, which
963
+ is commonly known as {\it beam search}.
964
+ While still approximate, this strategy can recover higher scoring sequences
965
+ that are often also better in terms of our final evaluation metric.
966
+ The algorithm maintains the $k$ highest scoring partial
967
+ sequences, where $k$ is a hyper-parameter.
968
+ Setting $k=1$ reduces the algorithm to a greedy left-to-right search
969
+ (Eq.~\eqref{eq:greedy_gen}).
970
+ \begin{algorithm}[!h]
971
+ \KwIn{model $p_{\theta}$, beam size $k$}
972
+ \KwResult{sequence of words $[w_1^g, w_2^g, \dots, w_n^g]$}
973
+ empty heaps $\{\mathcal{H}_t\}_{t = 1, \dots T}$\;
974
+ an empty hidden state vector $\bh_1$\;
975
+ $\mathcal{H}_1.\mbox{push}(1, [[\varnothing], \bh_1])$\;
976
+ \For{$t\leftarrow 1$ \KwTo $T - 1$}{
977
+ \For{$i\leftarrow 1$ \KwTo $\min(k, \#\mathcal{H}_t)$}{
978
+ $(p, [[w_1, w_2, \dots, w_t], \bh]) \leftarrow \mathcal{H}_t.\mbox{pop}()$\;
979
+ $\bh' = \phi_\theta(w, \bh)$ \;
980
+ \For{$w' \leftarrow$ $k$-most likely words $w'$ from $p_\theta(w' | w_t, \bh)$}{
981
+ $p' = p * p_\theta(w' | w, \bh)$\;
982
+ $\mathcal{H}_{t + 1}.\mbox{push}(p', [[w_1, w_2, \dots, w_t, w'], \bh'])$\;
983
+ }
984
+ }
985
+ }
986
+ $(p, [[w_1, w_2, \dots, w_T], \bh]) \leftarrow \mathcal{H}_T.\mbox{pop}()$\;
987
+ \KwOut{$[w_1, w_2, \dots, w_T]$}
988
+
989
+ \caption{Pseudo-code of beam search with beam size $k$.}
990
+ \label{alg:beam}
991
+ \end{algorithm}
992
+
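+ A compact sketch of the same procedure is given below (illustrative, not the exact implementation of Algorithm~\ref{alg:beam}; \texttt{step} is an assumed function returning the next-word distribution and the new hidden state, and log-probabilities replace products of probabilities for numerical stability):
+ \begin{verbatim}
+ import heapq
+ import math
+
+ def beam_search(step, h0, bos, T, k):
+     # beam entries: (log-probability, word list, hidden state)
+     beam = [(0.0, [bos], h0)]
+     for _ in range(T - 1):
+         candidates = []
+         for logp, words, h in beam:
+             probs, h_new = step(words[-1], h)   # dict word -> p(w' | w_t, h)
+             best = sorted(probs.items(), key=lambda kv: -kv[1])[:k]
+             for w, p in best:
+                 candidates.append((logp + math.log(p), words + [w], h_new))
+         beam = heapq.nlargest(k, candidates, key=lambda c: c[0])
+     return max(beam, key=lambda c: c[0])[1]
+ \end{verbatim}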
993
+
994
+ \newpage
995
+
996
+
997
+
998
+
999
+
1000
+
1001
+ \subsection{Notes}
1002
+ The current version of the paper updates the first version uploaded on arXiv as follows:
1003
+ \begin{itemize}
1004
+ \item on the summarization task, we report results using both ROUGE-2 and BLEU to demonstrate that MIXER can work with any metric.
1005
+ \item on machine translation and image captioning we use an LSTM instead of an Elman RNN to demonstrate that MIXER can work with any underlying parametric model.
1006
+ \item BLEU is evaluated using up to 4-grams, and it is computed at the corpus level (except in the image captioning case) as this seems the most common practice
1007
+ in the summarization and machine translation literature.
1008
+ \item we have added several references as suggested by our reviewers.
1009
+ \item we have shortened the paper by moving some content to the Supplementary Material.
1010
+ \end{itemize}
1011
+ \end{document}
papers/1511/1511.09230.tex ADDED
The diff for this file is too large to render. See raw diff
 
papers/1512/1512.02479.tex ADDED
@@ -0,0 +1,559 @@
1
+ \documentclass[journal]{IEEEtran}
2
+ \usepackage{amsthm}
3
+ \usepackage{amsmath}
4
+ \usepackage{amssymb}
5
+ \usepackage{graphicx}
6
+ \usepackage{algorithm}
7
+ \usepackage{algpseudocode}
8
+
9
+ \newtheorem{proposition}{Proposition}
10
+ \newtheorem{definition}{Definition}
11
+
12
+ \newcommand{\x}{\boldsymbol{x}}
13
+ \newcommand{\w}{\boldsymbol{w}}
14
+ \newcommand{\R}{\boldsymbol{R}}
15
+
16
+ \begin{document}
17
+ \title{Explaining NonLinear Classification Decisions with Deep Taylor Decomposition}
18
+
19
+ \author{Gr\'egoire Montavon$^*$,
20
+ Sebastian Bach,
21
+ Alexander Binder,
22
+ Wojciech Samek$^*$,
23
+ and~Klaus-Robert M\"{u}ller$^*$
24
+ \thanks{This work was supported by the Brain Korea 21 Plus Program through the National Research Foundation of Korea funded by the Ministry of Education. This work was also supported by the grant DFG (MU~987/17-1) and by the German Ministry for Education and Research as Berlin Big Data Center BBDC (01IS14013A). This publication only reflects the authors' views. Funding agencies are not liable for any use that may be made of the information contained herein. {\it Asterisks indicate corresponding author}.}
25
+ \thanks{$^*$G. Montavon is with the Berlin Institute of Technology (TU Berlin), 10587 Berlin, Germany. (e-mail: gregoire.montavon@tu-berlin.de)}
26
+ \thanks{S. Bach is with Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany. (e-mail: sebastian.bach@hhi.fraunhofer.de)}
27
+ \thanks{A. Binder is with the Singapore University of Technology and Design, 487372, Singapore. (e-mail: alexander\_binder@sutd.edu.sg)}
28
+ \thanks{$^*$W. Samek is with Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany. (e-mail: wojciech.samek@hhi.fraunhofer.de)}
29
+ \thanks{$^*$K.-R. M\"uller is with the Berlin Institute of Technology (TU Berlin), 10587 Berlin, Germany, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Korea (e-mail: klaus-robert.mueller@tu-berlin.de)}}
30
+
31
+ \maketitle
32
+
33
+ \begin{abstract}
34
+ Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems, e.g., image classification, natural language processing or human action recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method is based on {\em deep} Taylor decomposition and efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
35
+ \end{abstract}
36
+
37
+ \section{Introduction}
38
+
39
+ Nonlinear models have been used since the advent of machine learning (ML) methods and are integral part of many popular algorithms. They include, for example, graphical models \cite{jordan1998:learning_graph_mod}, kernels \cite{schoelkopf:book,DBLP:journals/tnn/MullerMRTS01}, Gaussian processes \cite{book:gp}, neural networks \cite{bishop95,nntricks,DBLP:series/lncs/LeCunBOM12}, boosting \cite{book:boosting}, or random forests \cite{DBLP:journals/ml/Breiman01}. Recently, a particular class of nonlinear methods, Deep Neural Networks (DNNs), revolutionized the field of automated image classification by demonstrating impressive performance on large benchmark data sets \cite{DBLP:conf/nips/KrizhevskySH12, DBLP:conf/nips/CiresanGGS12, DBLP:journals/corr/SzegedyLJSRAEVR14}. Deep networks have also been applied successfully to other research fields such as natural language processing \cite{DBLP:journals/jmlr/CollobertWBKKK11,Socher-etal:2013}, human action recognition \cite{DBLP:conf/icml/JiXYY10,DBLP:conf/cvpr/LeZYN11}, or physics \cite{montavon-njp13, baldi14}. Although these models are highly successful in terms of performance, they have a drawback of acting like a {\em black box} in the sense that it is not clear {\em how} and {\em why} they arrive at a particular classification decision. This lack of transparency is a serious disadvantage as it prevents a human expert from being able to verify, interpret, and understand the reasoning of the system.
40
+
41
+ An {\em interpretable classifier} explains its nonlinear classification decision in terms of the inputs. For instance, in image classification problems, the classifier should not only indicate whether an image of interest belongs to a certain category or not, but also explain what structures (e.g. pixels in the image) were the basis for its decision (cf. Figure \ref{figure:overview}). This additional information helps to better assess the quality of a particular prediction, or to verify the overall reasoning ability of the trained classifier. Also, information about which pixels are relevant in a particular image, could be used for determining which region of the image should be the object of further analysis. Linear models readily provide explanations in terms of input variables (see for example \cite{DBLP:journals/neuroimage/HaufeMGDHBB14,Oaxaca1973}). However, because of the limited expressive power of these models, they perform poorly on complex tasks such as image recognition. Extending linear analysis techniques to more realistic nonlinear models such as deep neural networks, is therefore of high practical relevance.
42
+
43
+ Recently, a significant amount of work has been dedicated to make the deep neural network more transparent to its user, in particular, improving the overall interpretability of the learned model, or explaining individual predictions. For example, Zeiler et al. \cite{DBLP:journals/corr/ZeilerF13} have proposed a network propagation technique to identify patterns in the input data that are linked to a particular neuron activation or a classification decision. Subsequently, Bach et al. \cite{bach15} have introduced the concept of pixel-wise decomposition of a classification decision, and how such decomposition can be achieved either by Taylor decomposition, or by a relevance propagation algorithm. Specifically, the authors distinguish between (1) functional approaches that view the neural network as a {\em function} and disregard its topology, and (2) message passing approaches, where the decomposition stems from a simple {\em propagation rule} applied uniformly to all neurons of the deep network.
44
+
45
+ \begin{figure}
46
+ \centering
47
+ \includegraphics[width=1.0\linewidth]{figures/figure1.pdf}\vskip -2mm
48
+ \caption{Overview of our method for explaining a nonlinear classification decision. The method produces a pixel-wise heatmap explaining {\em why} a neural network classifier has come up with a particular decision (here, detecting the digit ``0'' in an input image composed of two digits). The heatmap is the result of a {\em deep} Taylor decomposition of the neural network function. Note that for the purpose of the visualization, the left and right side of the figure are mirrored.}
49
+ \label{figure:overview}
50
+ \end{figure}
51
+
52
+ The main goal of this paper is to reconcile the functional and rule-based approaches for obtaining these decompositions, in a similar way to the error backpropagation algorithm \cite{rumelhart86} that also has a functional and a message passing interpretation. We call the resulting framework {\em deep Taylor decomposition}. This new technique seeks to replace the analytically intractable standard Taylor decomposition problem by a multitude of simpler analytically tractable Taylor decompositions---one per neuron. The proposed method results in a relevance redistribution process like the one illustrated in Figure \ref{figure:overview} for a neural network trained to detect the digit ``0'' in an image, in presence of another distracting digit. The classification decision is first decomposed in terms of contributions $R_1,R_2,R_3$ of respective hidden neurons $x_1,x_2,x_3$, and then, the contribution of each hidden neuron is independently redistributed onto the pixels, leading to a relevance map (or heatmap) in the pixel space, that explains the classification ``0''.
53
+
54
+ A main result of this work is the observation that application of deep Taylor decomposition to neural networks used for image classification, yields rules that are similar to those proposed by \cite{bach15} (the $\alpha\beta$-rule and the $\epsilon$-rule), but with specific instantiations of their hyperparameters, previously set heuristically. Because of the theoretical focus of this paper, we do not perform a broader empirical comparison with other recently proposed methods such as \cite{DBLP:journals/corr/SimonyanVZ13} or \cite{DBLP:journals/corr/ZeilerF13}. However, we refer to \cite{samek-arxiv15} for such a comparison.
55
+
56
+ The paper is organized as follows: Section \ref{section:decomposition} introduces the general idea of decomposition of a classification score in terms of input variables, and how this decomposition arises from Taylor decomposition or deep Taylor decomposition of a classification function. Section \ref{section:onelayer} applies the proposed deep Taylor decomposition method to a simple detection-pooling neural network. Section \ref{section:twolayers} extends the method to deeper networks, by introducing the concept of relevance model and describing how it can be applied to large GPU-trained neural networks without retraining. Several experiments on MNIST and ILSVRC data are provided to illustrate the methods described here. Section \ref{section:conclusion} concludes.
57
+
58
+ \subsection*{Related Work}
59
+
60
+ There has been a significant body of work focusing on the analysis and understanding of nonlinear classifiers such as kernel machines \cite{DBLP:journals/jmlr/BraunBM08, DBLP:journals/jmlr/BaehrensSHKHM10, hansen2011visual, DBLP:journals/spm/MontavonBKM13}, neural networks \cite{Garson1991, DBLP:journals/aei/Goh95, DBLP:conf/nips/GoodfellowLSLN09, DBLP:journals/jmlr/MontavonBM11}, or a broader class of nonlinear models \cite{BazenJoutard2013,bach15}. In particular, some recent analyses have focused on the understanding of state-of-the-art GPU-trained convolutional neural networks for image classification \cite{DBLP:journals/corr/SzegedyZSBEGF13, DBLP:journals/corr/SimonyanVZ13,DBLP:journals/corr/ZeilerF13}, offering new insights on these highly complex models.
61
+
62
+ Some methods seek to provide a general understanding of the trained model, by measuring important characteristics of it, such as the noise and relevant dimensionality of its feature space(s) \cite{DBLP:journals/jmlr/BraunBM08,DBLP:journals/spm/MontavonBKM13,
63
+ DBLP:journals/jmlr/MontavonBM11}, its invariance to certain transformations of the data \cite{DBLP:conf/nips/GoodfellowLSLN09} or the role of particular neurons \cite{understanding_techreport}. In this paper, we focus instead on the interpretation of the prediction of {\em individual} data points, for which portions of the trained model may either be relevant or not relevant.
64
+
65
+ Technically, the methods proposed in \cite{DBLP:journals/jmlr/BaehrensSHKHM10, hansen2011visual} do not explain the decision of a classifier but rather perform sensitivity analysis by computing the gradient of the decision function. This results in an analysis of variations of that function, without however seeking to provide a full explanation why a certain data point has been predicted in a certain way. Specifically, the gradient of a function does not contain information on the saliency of a feature in the data to which the function is applied. Simonyan et al. \cite{DBLP:journals/corr/SimonyanVZ13} incorporate saliency information by multiplying the gradient by the actual data point.
66
+
67
+ The method proposed by Zeiler and Fergus \cite{DBLP:journals/corr/ZeilerF13} was designed to visualize and understand the features of a convolutional neural network with max-pooling and rectified linear units. The method performs a backpropagation pass on the network, where a set of rules is applied uniformly to all layers of the network, resulting in an assignment of values onto pixels. The method however does not aim to attribute a defined meaning to the assigned pixel values, except for the fact that they should form a visually interpretable pattern. \cite{bach15} proposed a layer-wise propagation method where the backpropagated signal is interpreted as relevance, and obeys a conservation property. The proposed propagation rules were designed according to this property, and were shown quantitatively to better support the classification decision \cite{samek-arxiv15}. However, the practical choice of propagation rules among all possible ones was mainly heuristic and lacked a strong theoretical justification.
68
+
69
+ A theoretical foundation to the problem of relevance assignment for a classification decision, can be found in the Taylor decomposition of a nonlinear function. The approach was described by Bazen and Joutard \cite{BazenJoutard2013} as a nonlinear generalization of the Oaxaca method in econometrics \cite{Oaxaca1973}. The idea was subsequently introduced in the context of image analysis \cite{bach15,DBLP:journals/corr/SimonyanVZ13} for the purpose of explaining machine learning classifiers. Our paper extends the standard Taylor decomposition in a way that takes advantage of the deep structure of neural networks, and connects it to rule-based propagation methods, such as \cite{bach15}.
70
+
71
+ As an alternative to propagation methods, spatial response maps \cite{DBLP:journals/corr/FangGISDDGHMPZZ14} build heatmaps by looking at the neural network output while sliding the neural network in the pixel space. Attention models based on neural networks can be trained to provide dynamic relevance assignment, for example, for the purpose of classifying an image from only a few glimpses of it \cite{DBLP:conf/nips/LarochelleH10}. They can also visualize what part of an image is relevant at a given time in some temporal context \cite{DBLP:conf/icml/XuBKCCSZB15}. However, they usually require specific models that are significantly more complex to design and train.
72
+
73
+ \section{Pixel-Wise Decomposition of a Function}
74
+ \label{section:decomposition}
75
+
76
+ In this section, we will describe the general concept of explaining a neural network decision by redistributing the function value (i.e. neural network output) onto the input variables in an amount that matches the respective contributions of these input variables to the function value. After enumerating a certain number of desirable properties for the input-wise relevance decomposition, we will explain in a second step how the Taylor decomposition technique, and its extension, deep Taylor decomposition, can be applied to this problem. For the sake of interpretability---and because all our subsequent empirical evaluations focus on the problem of image recognition,---we will call the input variables ``pixels'', and use the letter $p$ for indexing them. Also, we will employ the term ``heatmap'' to designate the set of redistributed relevances onto pixels. However, despite the image-related terminology, the method is applicable more broadly to other input domains such as abstract vector spaces, time series, or more generally any type of input domain whose elements can be processed by a neural network.
77
+
78
+ Let us consider a positive-valued function $f: \mathbb{R}^d \to \mathbb{R}^+$. In the context of image classification, the input $\x \in \mathbb{R}^d$ of this function can be an image. The image can be decomposed as a set of pixel values $\x = \{x_p\}$ where $p$ denotes a particular pixel. The function $f(\x)$ quantifies the presence (or amount) of a certain type of object(s) in the image. This quantity can be for example a probability, or the number of occurrences of the object. A function value $f(\x) = 0$ indicates the absence of such object(s) in the image. On the other hand, a function value $f(\x) > 0$ expresses the presence of the object(s) with a certain probability or in a certain amount.
79
+
80
+ We would like to associate to each pixel $p$ in the image a {\em relevance score} $R_p(\x)$, that indicates for an image $\x$ to what extent the pixel $p$ contributes to explaining the classification decision $f(\x)$. The relevance of each pixel can be stored in a heatmap denoted by $\R(\x) = \{R_p(\x)\}$ of same dimensions as the image $\x$. The heatmap can therefore also be visualized as an image. In practice, we would like the heatmapping procedure to satisfy certain properties that we define below.
81
+
82
+ \begin{definition}
83
+ \label{def:conservative} A heatmapping $\R(\x)$ is \underline{conservative} if the sum of assigned relevances in the pixel space corresponds to the total relevance detected by the model, that is
84
+ \begin{align*}
85
+ \forall \x:~f(\x) = \sum_p R_p(\x).
86
+ \end{align*}
87
+ \end{definition}
88
+
89
+ \begin{definition}
90
+ \label{def:positive}
91
+ A heatmapping $\R(\x)$ is p\!\!\underline{\,\,ositive} if all values forming the heatmap are greater or equal to zero, that is:
92
+ \begin{align*}
93
+ \forall \x,p:~R_p(\x) \geq 0
94
+ \end{align*}
95
+ \end{definition}
96
+
97
+ The first property was proposed by \cite{bach15} and ensures that the total redistributed relevance corresponds to the extent to which the object in the input image is detected by the function $f(\x)$. The second property forces the heatmapping to assume that the model is devoid of contradictory evidence (i.e. no pixels can be in contradiction with the presence or absence of the detected object in the image). These two properties of a heatmap can be combined into the notion of {\em consistency}:
98
+ \begin{definition}
99
+ \label{def:consistent}
100
+ A heatmapping $\R(\x)$ is \underline{consistent} if it is conservative \underline{and} positive. That is, it is consistent if it complies with Definitions \ref{def:conservative} and \ref{def:positive}.
101
+ \end{definition}
102
+ In particular, a consistent heatmap is forced to satisfy $(f(\x)\!=\!0) \Rightarrow (\R(\x)\!=\!\boldsymbol{0})$. That is, in the absence of an object to detect, the relevance is forced to be zero everywhere in the image (i.e. empty heatmap), and not simply to have negative and positive relevance in equal amounts. We will use Definition \ref{def:consistent} as a formal tool for assessing the correctness of the heatmapping techniques proposed in this paper.
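+
+ For illustration, these definitions can be checked numerically for any given heatmap. The following short NumPy sketch (the array \texttt{R} holding the heatmap and the scalar \texttt{fx} holding the function value are hypothetical placeholders) tests Definitions \ref{def:conservative}--\ref{def:consistent} up to a numerical tolerance:
+ \begin{verbatim}
+ import numpy as np
+
+ def is_conservative(R, fx, tol=1e-6):
+     # Definition 1: relevances sum to the function value
+     return abs(R.sum() - fx) <= tol
+
+ def is_positive(R):
+     # Definition 2: no negative relevance anywhere
+     return bool(np.all(R >= 0))
+
+ def is_consistent(R, fx, tol=1e-6):
+     # Definition 3: conservative and positive
+     return is_conservative(R, fx, tol) and is_positive(R)
+ \end{verbatim}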
103
+
104
+ It was noted by \cite{bach15} that there may be multiple heatmapping techniques that satisfy a particular definition. For example, we can consider a heatmapping specification that assigns for all images the relevance uniformly onto the pixel grid:
105
+ \begin{align}
106
+ \forall p: \quad R_p(\x) = \frac1d \cdot f(\x),
107
+ \label{eq:conservation-1}
108
+ \end{align}
109
+ where $d$ is the number of input dimensions. Alternatively, we can consider another heatmapping specification where all relevance is assigned to the first pixel in the image:
110
+ \begin{align}
111
+ R_p(\x) =
112
+ \left\{
113
+ \begin{array}{ll}
114
+ f(\x) & \quad \text{if}~p= 1\text{st pixel}\\
115
+ 0 & \quad \text{else}.
116
+ \end{array}
117
+ \right.
118
+ \label{eq:conservation-2}
119
+ \end{align}
120
+ Both \eqref{eq:conservation-1} and \eqref{eq:conservation-2} are consistent in the sense of Definition \ref{def:consistent}; however, they lead to different relevance assignments. In practice, it is not possible to specify explicitly all properties that a heatmapping technique should satisfy in order to be meaningful. Instead, it can be given {\em implicitly} by the choice of a particular algorithm (e.g. derived from a particular mathematical model), subject to the constraint that it complies with the definitions above.
121
+
122
+ \subsection{Taylor Decomposition}
123
+ \label{section:taylor}
124
+
125
+ We present a heatmapping method for explaining the classification $f(\x)$ of a data point $\x$. The method is based on the Taylor expansion of the function $f$ at some well-chosen {\em root point} $\widetilde \x$ for which $f(\widetilde \x) = 0$. The first-order Taylor expansion of the function is given as
126
+ \begin{align}
127
+ f(\x)
128
+ &= f(\widetilde \x) + \left(\frac{\partial f }{\partial \x}\Big|_{\x = \widetilde \x}\right)^{\!\top} \!\! \cdot (\x-\widetilde \x) + \varepsilon \notag\\
129
+ &= 0 + \sum_{p} \underbrace{ \frac{\partial f }{\partial x_p}\Big|_{\x = \widetilde \x} \!\!\cdot ( x_p-\widetilde x_p )}_{R_p(\x)} + \, \varepsilon,
130
+ \label{eq:taylordecomposition}
131
+ \end{align}
132
+ where the sum $\sum_p$ runs over all pixels in the image, and $\{ \widetilde x_p \}$ are the pixel values of the root point $\widetilde \x$. We identify the summed elements as the relevances $R_p(\x)$ assigned to pixels in the image. The term $\varepsilon$ denotes second-order and higher-order terms. Most of the terms in the higher-order expansion involve several pixels at the same time and are therefore more difficult to redistribute. Thus, for simplicity, we will consider only the first-order terms for heatmapping. The heatmap (composed of all identified pixel-wise relevances) can be written as the element-wise product ``$\odot$'' between the gradient of the function $\partial f / \partial \x$ at the root point $\widetilde \x$ and the difference between the image and the root $(\x - \widetilde \x)$:
133
+ \begin{align*}
134
+ \R(\x) = \frac{\partial f }{\partial \x} \Big|_{\x = \widetilde \x} \odot (\x - \widetilde \x).
135
+ \end{align*}
136
+ Figure~\ref{fig:heatmap} illustrates the construction of a heatmap in a cartoon example, where a hypothetical function $f$ detects the presence of an object of class ``building'' in an image $\x$. In this example, the root point $\widetilde \x$ is the same image as $\x$ where the building has been blurred. The root point $\widetilde \x$ plays the role of a neutral data point that is similar to the actual data point $\x$ but lacks the particular object in the image that causes $f(\x)$ to be positive. The difference between the image and the root point $(\x - \widetilde \x)$ is therefore an image with only the object ``building''. The gradient $\partial f/\partial \x|_{\x = \widetilde \x}$ measures the sensitivity of the class ``building'' to each pixel when the classifier $f$ is evaluated at the root point $\widetilde \x$. Finally, the sensitivities are multiplied element-wise with the difference $(\x - \widetilde \x)$, producing a heatmap that identifies the most contributing pixels for the object ``building''. Strictly speaking, for images with multiple color channels (e.g.\ RGB), the Taylor decomposition will be performed in terms of pixels {\em and} color channels, thus forming multiple heatmaps (one per color channel). Since we are here interested in pixel contributions and not color contributions, we sum the relevance over all color channels, and obtain as a result a single heatmap.
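+
+ As a minimal illustration of this construction, the heatmap can be computed in a few lines of NumPy, assuming a hypothetical differentiable classifier for which a gradient function \texttt{grad\_f} is available and a root point \texttt{x\_root} has already been chosen:
+ \begin{verbatim}
+ import numpy as np
+
+ def taylor_heatmap(x, x_root, grad_f):
+     # first-order Taylor relevance: gradient at the root, times (x - x_root)
+     R = grad_f(x_root) * (x - x_root)       # same shape as x, e.g. (H, W, 3)
+     # sum relevance over color channels to obtain one score per pixel
+     return R.sum(axis=-1) if R.ndim == 3 else R
+ \end{verbatim}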
137
+
138
+ \begin{figure}
139
+ \centering
140
+ \includegraphics[width=1.0\linewidth]{figures/figure2.pdf}\vskip -2mm
141
+ \caption{Cartoon showing the construction of a Taylor-based heatmap from an image $\x$ and a hypothetical function $f$ detecting the presence of objects of class ``building'' in the image. In the heatmap, positive values are shown in red, and negative values are shown in blue.}
142
+ \label{fig:heatmap}
143
+ \end{figure}
144
+
145
+ For a given classifier $f(\x)$, the Taylor decomposition approach described above has one free variable: the choice of the root point $\widetilde \x$ at which the Taylor expansion is performed. The example of Figure \ref{fig:heatmap} has provided some intuition on the properties of a good root point. In particular, a good root point should selectively remove information from some pixels (here, pixels corresponding to the building at the center of the image), while keeping the surroundings unchanged. This allows the Taylor decomposition, in principle, to produce a complete explanation of the detected object that is also insensitive to the surrounding trees and sky.
146
+
147
+ More formally, a {\em good} root point is one that removes the object (e.g.\ as detected by the function $f(\x)$), but that minimally deviates from the original point $\x$. In mathematical terms, it is a point $\widetilde \x$ with $f(\widetilde \x)=0$ that lies in the vicinity of $\x$ under some distance metric, for example the nearest root. If $\x,\widetilde \x \in \mathbb{R}^d$, one can show that for a continuously differentiable function $f$ the gradient at the nearest root always points in the same direction as the difference $\x - \widetilde \x$, and their element-wise product is always positive, thus satisfying Definition \ref{def:positive}. Relevance conservation in the sense of Definition \ref{def:conservative} is however not satisfied for general functions $f$ due to the possible presence of non-zero higher-order terms in $\varepsilon$. The nearest root $\widetilde \x$ can be obtained as a solution of an optimization problem \cite{DBLP:journals/corr/SzegedyZSBEGF13}, by minimizing the objective
148
+ \begin{align*}
149
+ \min_{\boldsymbol{\xi}}~\|\boldsymbol{\xi} - \x\|^2
150
+ \quad \text{subject to} \quad
151
+ f(\boldsymbol{\xi}) = 0 \quad \text{and} \quad \boldsymbol{\xi} \in \mathcal{X},
152
+ \end{align*}
153
+ where $\mathcal{X}$ is the input domain. In the general case, the nearest root $\widetilde \x$ must therefore be obtained by an iterative minimization procedure, which is time-consuming when the function $f(\x)$ is expensive to evaluate or differentiate. Furthermore, the minimization problem cannot always be solved exactly due to its possible non-convexity.
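+
+ As an illustration of the iterative search involved, a root can be approached with Newton-style steps towards the zero level set of $f$. This is only a sketch under the assumption that callables \texttt{f} and \texttt{grad\_f} are available; convergence is not guaranteed for non-convex functions, and the domain constraint $\boldsymbol{\xi} \in \mathcal{X}$ is ignored here:
+ \begin{verbatim}
+ import numpy as np
+
+ def approximate_root(x, f, grad_f, n_iter=100, tol=1e-3):
+     xi = x.copy()
+     for _ in range(n_iter):
+         fx, g = f(xi), grad_f(xi)
+         if abs(fx) < tol:
+             break
+         # Newton step for the scalar equation f(xi) = 0,
+         # taken along the gradient direction (minimum-norm update)
+         xi = xi - fx * g / (np.dot(g.ravel(), g.ravel()) + 1e-12)
+     return xi
+ \end{verbatim}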
154
+
155
+ We introduce in the next sections two variants of Taylor decomposition that seek to avoid the high computational requirement, and to produce better heatmaps. The first one, called sensitivity analysis, makes use of a single gradient evaluation of the function at the data point. The second one, called deep Taylor decomposition, exploits the structure of the function $f(\x)$ when the latter is a deep neural network, in order to redistribute relevance onto pixels using a single forward-backward pass on the network.
156
+
157
+ \subsection{Sensitivity Analysis}
158
+ \label{section:sensitivity}
159
+
160
+ A simple method to assign relevance onto pixels is to set it proportional to the squared derivatives of the classifier \cite{Gevrey2003249}:
161
+ $$
162
+ R(\x) \propto \Big(\frac{\partial f}{\partial \x}\Big)^2,
163
+ $$
164
+ where the power applies element-wise. This redistribution can be viewed as a special instance of Taylor decomposition where one expands the function at a point $\boldsymbol{\xi} \in \mathbb{R}^d$, which is taken at an infinitesimally small distance from the actual point $\x$, in the direction of maximum descent of $f$ (i.e. $\boldsymbol{\xi} = \x - \delta \cdot \partial f / \partial \x$ with $\delta$ small). Assuming that the function is locally linear, and therefore, the gradient is locally constant, we get
165
+ \begin{align*}
166
+ f(\x)
167
+ &= f(\boldsymbol{\xi}) + \Big(\frac{\partial f }{\partial \x} \Big|_{\x=\boldsymbol{\xi}}\Big)^{\!\top} \!\! \cdot \Big(\x- \Big(\x - \delta \frac{\partial f}{\partial \x}\Big)\Big) + 0\\
168
+ &= f(\boldsymbol{\xi}) + \delta \Big(\frac{\partial f }{\partial \x}\Big)^{\!\top} \frac{\partial f }{\partial \x} + 0\\
169
+ &= f(\boldsymbol{\xi}) + \sum_p \underbrace{\delta \Big(\frac{\partial f }{\partial x_p}\Big)^{\!2}}_{R_p} + 0,
170
+ \end{align*}
171
+ where the second-order terms are zero because of the local linearity. The resulting heatmap is positive, but not conservative, since almost all relevance is absorbed by the zero-order term $f(\boldsymbol{\xi})$, which is not redistributed. Sensitivity analysis only measures a {\em local} effect and does not provide a full explanation of a classification decision. In that case, only relative contributions between different values of $R_p$ are meaningful.
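+
+ In code, sensitivity analysis therefore reduces to squaring the gradient evaluated at the data point itself (sketch; \texttt{grad\_f} is again a hypothetical gradient function):
+ \begin{verbatim}
+ import numpy as np
+
+ def sensitivity_heatmap(x, grad_f):
+     # relevance proportional to the squared partial derivatives at x
+     S = grad_f(x) ** 2
+     return S.sum(axis=-1) if S.ndim == 3 else S   # sum over color channels
+ \end{verbatim}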
172
+
173
+ \begin{figure*}
174
+ \centering
175
+ \includegraphics[width=0.98\linewidth]{figures/figure3.pdf}
176
+ \caption{Graphical depiction of the computational flow of deep Taylor decomposition. A score $f(\x)$ indicating the presence of the class ``cat'' is obtained by forward-propagation of the pixel values $\{x_p\}$ into a neural network. The function value is encoded by the output neuron $x_f$. The output neuron is assigned relevance $R_f = x_f$. Relevances are backpropagated from the top layer down to the input, where $\{R_p\}$ denotes the relevance scores of all pixels. The last neuron of the lowest hidden layer is perceived as relevant by higher layers and redistributes its assigned relevance onto the pixels. Other neurons of the same layer are perceived as less relevant and do not significantly contribute to the heatmap.
177
+ }
178
+ \label{figure:neuralnet}
179
+ \end{figure*}
180
+
181
+ \subsection{Deep Taylor Decomposition}
182
+ \label{section:deeptaylor}
183
+
184
+ A rich class of functions $f(\x)$ that can be trained to map input data to classes is the deep neural network (DNN). A deep neural network is composed of multiple layers of representation, where each layer is composed of a set of neurons. The neural network is trained by adapting its set of parameters at each layer, so that the overall prediction error is minimized. As a result of training a deep network, a particular structure or factorization of the learned function emerges \cite{DBLP:conf/icml/LeeGRN09}. For example, each neuron in the first layer may react to a particular pixel activation pattern that is localized in the pixel space. The resulting neuron activations may then be used in higher layers to compose more complex nonlinearities \cite{DBLP:journals/ftml/Bengio09} that involve a larger number of pixels.
185
+
186
+ The deep Taylor decomposition method presented here is inspired by the divide-and-conquer paradigm, and exploits the property that the function learned by a deep network is structurally decomposed into a set of simpler subfunctions that relate quantities in adjacent layers. Instead of considering the whole neural network function $f$, we consider the mapping of a set of neurons $\{x_i\}$ at a given layer to the relevance $R_j$ assigned to a neuron $x_j$ in the next layer. Assuming that these two objects are functionally related by some function $R_j(\{x_i\})$, we would like to apply Taylor decomposition on this local function in order to redistribute the relevance $R_j$ onto lower-layer relevances $\{R_i\}$. For these simpler subfunctions, Taylor decomposition should be easier; in particular, root points should be easier to find. Running this redistribution procedure in a backward pass eventually leads to the pixel-wise relevances $\{R_p\}$ that form the heatmap.
187
+
188
+ Figure \ref{figure:neuralnet} illustrates in detail the procedure of layer-wise relevance propagation on a cartoon example where an image of a cat is presented to a hypothetical deep network. If the neural network has been trained to detect images with an object ``cat'', the hidden layers have likely implemented a factorization of the pixel space, where neurons are modeling various features at various locations. In such a factored network, relevance redistribution is easier in the top layer, where it has to be decided which neurons, and not pixels, are representing the object ``cat''. It is also easier in the lower layer, where the relevance has already been redistributed by the higher layers to the neurons corresponding to the location of the object ``cat''.
189
+
190
+ Assuming the existence of a function that maps neuron activities $\{ x_i \}$ to the upper-layer relevance $R_j$, and of a neighboring root point $\{ \widetilde x_i\}$ such that $R_j(\{ \widetilde x_i\}) = 0$, we can then write the Taylor decomposition of $\sum_j R_j$ at $\{x_i\}$ as
191
+ \begin{align}
192
+ \sum_j R_j
193
+ &= \bigg(\frac{\partial \big(\sum_j R_j\big)}{\partial \{x_i\}}\Big|_{\{ \widetilde x_i\}}\bigg)^{\!\top} \!\! \cdot (\{x_i\}-\{ \widetilde x_i\}) + \varepsilon
194
+ \nonumber\\
195
+ &= \sum_i \underbrace{\sum_j \frac{\partial R_j}{\partial x_i}\Big|_{\{ \widetilde x_i\}} \!\! \cdot (x_i-\widetilde x_i)}_{R_{i}} + \varepsilon,
196
+ \label{eq:taylorprop}
197
+ \end{align}
198
+ that redistributes relevance from one layer to the layer below, where $\varepsilon$ denotes the Taylor residual, where $\big|_{\{ \widetilde x_i\}}$ indicates that the derivative has been evaluated at the root point $\{ \widetilde x_i\}$, where $\sum_j$ runs over neurons at the given layer, and where $\sum_i$ runs over neurons in the lower layer. Equation \ref{eq:taylorprop} allows us to identify the relevance of individual neurons in the lower layer in order to apply the same Taylor decomposition technique one layer below.
199
+
200
+ If each local Taylor decomposition in the network is conservative in the sense of Definition \ref{def:conservative}, then the chain of equalities $R_f = \hdots = \sum_j R_j = \sum_i R_i = \hdots = \sum_p R_p$ should hold. This chain of equalities is referred to by \cite{bach15} as layer-wise relevance conservation. Similarly, if Definition \ref{def:positive} holds for each local Taylor decomposition, the positivity of relevance scores at each layer $R_f,\dots,\{R_j\},\{R_i\},\dots,\{R_p\} \geq 0$ is also ensured. Finally, if all Taylor decompositions of local subfunctions are consistent in the sense of Definition \ref{def:consistent}, then the whole deep Taylor decomposition is also consistent in the same sense.
201
+
202
+ \section{Application to One-Layer Networks}
203
+ \label{section:onelayer}
204
+
205
+ As a starting point for better understanding deep Taylor decomposition, in particular how it leads to practical rules for relevance propagation, we work through a simple example with advantageous analytical properties. We consider a detection-pooling network made of one layer of nonlinearity. The network is defined as
206
+ \begin{align}
207
+ x_j &= \max\big(0,\textstyle{\sum}_i x_i w_{ij} + b_j\big) \label{eq:onelayer-1}\\
208
+ x_k &= \textstyle{\sum_j} x_j \label{eq:onelayer-2}
209
+ \end{align}
210
+ where $\{x_i\}$ is a $d$-dimensional input, $\{x_j\}$ is a detection layer, $x_k$ is the output, and $\theta = \{w_{ij},b_j\}$ are the weight and bias parameters of the network. The one-layer network is depicted in Figure \ref{fig:onelayer}. The mapping $\{x_i\} \to x_k$ defines a function $g \in \mathcal{G}$, where $\mathcal{G}$ denotes the set of functions representable by this one-layer network. We impose an additional constraint on the biases, forcing $b_j \leq 0$ for all $j$. This constraint guarantees the existence of a root point $\{\widetilde x_i\}$ of the function $g$ (located at the origin), and thus also ensures the applicability of standard Taylor decomposition, for which a root point is needed.
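+
+ For later reference, a NumPy sketch of this detection-pooling network could look as follows (the weight matrix \texttt{W} of shape \texttt{(d, m)} and the non-positive bias vector \texttt{b} of shape \texttt{(m,)} are hypothetical):
+ \begin{verbatim}
+ import numpy as np
+
+ def forward(x, W, b):
+     assert np.all(b <= 0), "biases must satisfy b_j <= 0"
+     xj = np.maximum(0.0, x @ W + b)   # detection layer
+     xk = xj.sum()                     # sum-pooling output
+     return xj, xk
+ \end{verbatim}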
211
+
212
+ \begin{figure}
213
+ \centering
214
+ \includegraphics[scale=0.65]{figures/figure4.pdf}
215
+ \vskip -2mm
216
+ \caption{\label{fig:onelayer} Detection-pooling network that implements Equations \ref{eq:onelayer-1} and \ref{eq:onelayer-2}: The first layer detects features in the input space, the second layer pools the detected features into an output score.}
217
+ \end{figure}
218
+
219
+ We now perform the deep Taylor decomposition of this function. We start by equating the predicted output to the amount of total relevance that must be backpropagated. That is, we define $R_k = x_k$. The relevance for the top layer can now be expressed in terms of lower-layer neurons as:
220
+ \begin{align}
221
+ R_k &= \textstyle{\sum}_j x_j
222
+ \label{eq:pooling-rel}
223
+ \end{align}
224
+ Having established the mapping between $\{x_j\}$ and $R_k$, we would like to redistribute $R_k$ onto neurons $\{x_j\}$. Using Taylor decomposition (Equation \ref{eq:taylordecomposition}), redistributed relevances $R_j$ can be written as:
225
+ \begin{align}
226
+ R_j = \frac{\partial R_k}{\partial x_j}\Big|_{\{\widetilde x_j \}} \cdot (x_j - \widetilde x_j).
227
+ \label{eq:toplayer-taylor}
228
+ \end{align}
229
+ We still need to choose a root point $\{\widetilde x_j\}$. The set of all root points of this function is given by the plane equation $\sum_j \widetilde x_j = 0$. However, for the root to play its role of reference point, it should be admissible. Here, because of the application of the function $\max(0,\cdot)$ in the preceding layer, the root point must have non-negative coordinates. The only point that is both a root ($\sum_j \widetilde x_j = 0$) and admissible ($\forall j: \widetilde x_j \geq 0$) is $\{\widetilde x_j\} = \boldsymbol{0}$. Choosing this root point in Equation \ref{eq:toplayer-taylor}, and observing that the derivative $\frac{\partial R_k}{\partial x_j} = 1$, we obtain the first rule for relevance redistribution:
230
+ \begin{align}
231
+ \boxed{R_j = x_j}
232
+ \label{eq:relprop-x}
233
+ \end{align}
234
+ In other words, the relevance must be redistributed onto the neurons of the detection layer in the same proportion as their activation values. Trivially, we can also verify that the relevance is conserved during the redistribution process ($\sum_j R_j = \sum_j x_j = R_k$) and positive ($R_j = x_j \geq 0$).
235
+
236
+ Let us now express the relevance $R_j$ as a function of the input neurons $\{x_i\}$. Because $R_j = x_j$ as a result of applying the propagation rule of Equation \ref{eq:relprop-x}, we can write
237
+ \begin{align}
238
+ R_j = \max\big(0,\textstyle{\sum}_i x_i w_{ij} + b_j\big),
239
+ \label{eq:onelayer-rel}
240
+ \end{align}
241
+ which establishes a mapping between $\{x_i\}$ and $R_j$. To obtain redistributed relevances $\{R_i\}$, we apply Taylor decomposition again on this new function. The redistribution of the total relevance $\sum_j R_j$ onto the preceding layer was identified in Equation \ref{eq:taylorprop} as:
242
+ \begin{align}
243
+ R_i &= \sum_j \frac{\partial R_j}{\partial x_i} \Big|_{\{\widetilde x_i\}^{(j)}} \cdot (x_i - \widetilde x_i^{(j)}).
244
+ \label{eq:onelayer}
245
+ \end{align}
246
+ Relevances $\{R_i\}$ can therefore be obtained by performing as many Taylor decompositions as there are neurons in the hidden layer. Note that a superscript $^{(j)}$ has been added to the root point $\{\widetilde x_{i}\}$ in order to emphasize that a different root point is chosen for decomposing each relevance $R_j$. We will introduce below various methods for choosing a root $\{\widetilde x_i\}^{(j)}$ that consider the diversity of possible input domains $\mathcal{X} \subseteq \mathbb{R}^d$ to which the data belongs. Each choice of input domain and associated method to find a root will lead to a different rule for propagating relevance $\{R_j\}$ to $\{R_i\}$.
247
+
248
+ \subsection{Unconstrained Input Space and the $w^2$-Rule}
249
+
250
+ We first consider the simplest case where any real-valued input is admissible ($\mathcal{X} = \mathbb{R}^d$). In that case, we can always choose the root point $\{\widetilde x_{i}\}^{(j)}$ that is nearest in the Euclidean sense to the actual data point $\{x_i\}$. When $R_j > 0$, the nearest root of $R_j$ as defined in Equation \ref{eq:onelayer-rel} lies at the intersection of the plane given by $\sum_i \widetilde x_{i}^{(j)} w_{ij} + b_j = 0$ and the line of maximum descent $\{\widetilde x_i\}^{(j)} = \{x_i\} + t \cdot \boldsymbol{w}_{j}$, where $\boldsymbol{w}_{j}$ is the vector of weight parameters that connects the input to neuron $x_j$ and $t \in \mathbb{R}$. This nearest root point is given by $\{\widetilde x_{i}\}^{(j)} = \{x_i - \frac{w_{ij}}{\sum_i w_{ij}^2} (\sum_i x_i w_{ij} + b_j)\}$. Injecting this root into Equation \ref{eq:onelayer}, the relevance redistributed onto neuron $i$ becomes:
251
+ \begin{align}
252
+ \boxed{R_i = \sum_j \frac{w_{ij}^2}{\sum_{i'} w_{i'j}^2} R_j}
253
+ \label{eq:relprop-w2}
254
+ \end{align}
255
+ The propagation rule consists of redistributing relevance according to the squared magnitude of the weights, and pooling relevance across all neurons $j$. This rule is also valid for $R_j = 0$, where the actual point $\{x_i\}$ is already a root, and for which no relevance needs to be propagated.
256
+ \begin{proposition}
257
+ For all $g \in \mathcal{G}$, the deep Taylor decomposition with the $w^2$-rule is consistent in the sense of Definition \ref{def:consistent}.
258
+ \label{prop:w2rule-consistent}
259
+ \end{proposition}
260
+ The $w^2$-rule resembles the rule by \cite{Garson1991,Gevrey2003249} for determining the importance of input variables in neural networks, where absolute values of $w_{ij}$ are used in place of squared values. It is important to note that the decomposition that we propose here is modulated by the upper-layer, data-dependent relevances $R_j$, which leads to an individual explanation for each data point.
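+
+ A minimal NumPy sketch of the $w^2$-rule could look as follows (the weight matrix \texttt{W} of shape \texttt{(d, m)} and the upper-layer relevance vector \texttt{Rj} of shape \texttt{(m,)} are hypothetical; the small constant only guards against an all-zero weight column):
+ \begin{verbatim}
+ import numpy as np
+
+ def w2_rule(W, Rj):
+     V = W ** 2                            # squared weights w_ij^2
+     norm = V.sum(axis=0, keepdims=True)   # sum over i' for each neuron j
+     return (V / (norm + 1e-12)) @ Rj      # lower-layer relevances R_i
+ \end{verbatim}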
261
+
262
+ \subsection{Constrained Input Space and the $z$-Rules}
263
+
264
+ When the input domain is restricted to a subset $\mathcal{X} \subset \mathbb{R}^d$, the nearest root of $R_j$ in the Euclidean sense might fall outside of $\mathcal{X}$. In the general case, finding the nearest root in this constrained input space can be difficult. An alternative is to further restrict the search domain to a subset of $\mathcal{X}$ where nearest root search becomes feasible again.
265
+
266
+ We first study the case $\mathcal{X} = \mathbb{R}_+^d$, which arises, for example, in feature spaces that follow the application of rectified linear units. In that case, we restrict the search domain to the segment $(\{x_i 1_{w_{ij}<0}\},\{x_i\}) \subset \mathbb{R}_+^d$, which we know contains at least one root at its first extremity. Injecting the nearest root on that segment into Equation \ref{eq:onelayer}, we obtain the relevance propagation rule:
267
+ \begin{align*}
268
+ \boxed{R_{i} = \sum_j \frac{z_{ij}^+}{\sum_{i'} z_{i'j}^+} R_{j}}
269
+ \end{align*}
270
+ (called $z^+\!$-rule), where $z_{ij}^+ = x_i w_{ij}^+$, and where $w_{ij}^+$ denotes the positive part of $w_{ij}$. This rule corresponds for positive input spaces to the $\alpha\beta$-rule formerly proposed by \cite{bach15} with $\alpha=1$ and $\beta=0$. The $z^+\!$-rule will be used in Section \ref{section:twolayers} to propagate relevances in higher layers of a neural network where neuron activations are positive.
271
+
272
+ \begin{proposition}
273
+ For all $g \in \mathcal{G}$ and data points $\{x_i\} \in \mathbb{R}_+^d$, the deep Taylor decomposition with the $z^+\!$-rule is consistent in the sense of Definition \ref{def:consistent}.
274
+ \label{prop:zprule-consistent}
275
+ \end{proposition}
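+
+ Under the same assumptions on array shapes as before, the $z^+\!$-rule admits the following NumPy sketch (a small constant is added only to guard against an empty denominator when a detection neuron receives no positive contribution):
+ \begin{verbatim}
+ import numpy as np
+
+ def zplus_rule(x, W, Rj):
+     Zp = x[:, None] * np.maximum(0.0, W)   # z_ij^+ = x_i * w_ij^+
+     norm = Zp.sum(axis=0)                  # sum over i' for each neuron j
+     return Zp @ (Rj / (norm + 1e-12))      # lower-layer relevances R_i
+ \end{verbatim}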
276
+
277
+ For image classification tasks, pixel spaces are typically subject to box constraints, where an image has to be in the domain $\mathcal{B} = \{ \{x_i\} : \forall_{i=1}^d~l_i \leq x_i \leq h_i \}$, where $l_i \leq 0$ and $h_i \geq 0$ are the smallest and largest admissible pixel values for each dimension. In this constrained setting, we can restrict the search for a root to the segment $(\{l_i 1_{w_{ij}>0} + h_i 1_{w_{ij}<0}\},\{x_i\}) \subset \mathcal{B}$, where we know that there is at least one root at its first extremity. Injecting the nearest root on that segment into Equation \ref{eq:onelayer}, we obtain the relevance propagation rule:
278
+ \begin{align*}
279
+ \boxed{R_{i} = \sum_j \frac{z_{ij} - l_i w_{ij}^+ - h_i w_{ij}^-}{\sum_{i'} z_{i'j} - l_{i'} w_{i'j}^+ - h_{i'} w_{i'j}^-} R_{j}}
280
+ \end{align*}
281
+ (called $z^\mathcal{B}\!$-rule), where $z_{ij} = x_i w_{ij}$, and where we note the presence of data-independent additive terms in the numerator and denominator. The idea of using an additive term in the denominator was formerly proposed by \cite{bach15} and called $\epsilon$-stabilized rule. However, the objective of \cite{bach15} was to make the denominator non-zero to avoid numerical instability, while in our case, the additive terms serve to enforce positivity.
282
+
283
+ \begin{proposition}
284
+ For all $g \in \mathcal{G}$ and data points $\{x_i\} \in \mathcal{B}$, the deep Taylor decomposition with the $z^\mathcal{B}\!$-rule is consistent in the sense of Definition \ref{def:consistent}.
285
+ \label{prop:zbrule-consistent}
286
+ \end{proposition}
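+
+ A corresponding NumPy sketch of the $z^\mathcal{B}\!$-rule, with per-dimension bounds \texttt{l} and \texttt{h} given as arrays of shape \texttt{(d,)} (hypothetical names, same conventions as above):
+ \begin{verbatim}
+ import numpy as np
+
+ def zB_rule(x, W, Rj, l, h):
+     Wp, Wm = np.maximum(0.0, W), np.minimum(0.0, W)
+     # numerator terms z_ij - l_i w_ij^+ - h_i w_ij^-  (non-negative on B)
+     Z = x[:, None] * W - l[:, None] * Wp - h[:, None] * Wm
+     norm = Z.sum(axis=0)                   # sum over i' for each neuron j
+     return Z @ (Rj / (norm + 1e-12))       # lower-layer relevances R_i
+ \end{verbatim}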
287
+
288
+ Detailed derivations of the proposed rules, proofs of Propositions \ref{prop:w2rule-consistent}, \ref{prop:zprule-consistent} and \ref{prop:zbrule-consistent}, and algorithms that implement these rules efficiently are given in the supplement. The properties of the relevance propagation techniques considered so far (when applied to functions $g \in \mathcal{G}$), their domain of applicability, their consistency, and other computational properties, are summarized in the table below:
289
+
290
+ \begin{center}
291
+ \begin{tabular}{l||c|c|c|c}
292
+ & \!\!sensitivity\!\! & Taylor & $w^2$-rule & $z$-rules\\\hline
293
+ $\mathcal{X} = \mathbb{R}^d$ & \bf yes & \bf yes & \bf yes & no\\
294
+ $\mathcal{X} = \mathbb{R}^d_+,\mathcal{B}$ & \bf yes & \bf yes & no & \bf yes\\\hline
295
+ nearest root on $\mathcal{X}$\!\! & no & \bf yes & \bf yes & no\\\hline
296
+ \em conservative & \em no & \em no & \em yes & \em yes\\
297
+ \em positive & \em yes & \em \!yes$^\star$\! & \em yes & \em yes\\
298
+ consistent & no & no & \bf yes & \bf yes\\\hline
299
+ unique solution & \bf yes & \bf yes & \bf yes & \bf yes\\
300
+ fast computation & \bf yes & no & \bf yes & \bf yes
301
+ \end{tabular}
302
+ \end{center}
303
+ {\small $^{(\star)}$~e.g. using the continuously differentiable approximation of the detection function $\max(0,x) = \lim_{t \to \infty} t^{-1} \log(0.5+0.5\exp(t x))$.}
304
+
305
+ \begin{figure*}[t]
306
+ \centering \small
307
+ \begin{tabular}{c|c|c|c}
308
+ Sensitivity (rescaled) &
309
+ Taylor (nearest root) &
310
+ Deep Taylor ($w^2\!$-rule) &
311
+ Deep Taylor ($z^\mathcal{B}\!$-rule)\\
312
+ \includegraphics[width=0.225\linewidth]{results/heatmap-SNN-sensitivity.png}&
313
+ \includegraphics[width=0.225\linewidth]{results/heatmap-SNN-taylor.png}&
314
+ \includegraphics[width=0.225\linewidth]{results/heatmap-SNN-w2.png}&
315
+ \includegraphics[width=0.225\linewidth]{results/heatmap-SNN-zb.png}\\
316
+ \end{tabular}
317
+ \caption{\label{fig:shallow-heatmaps} Comparison of heatmaps produced by various decompositions. Each input image (pair of handwritten digits) is presented with its associated heatmap.}
318
+ \vskip 5mm
319
+ \begin{tabular}{c|c|c|c}
320
+ Sensitivity (rescaled) &
321
+ Taylor (nearest root) &
322
+ Deep Taylor ($w^2\!$-rule) &
323
+ Deep Taylor ($z^\mathcal{B}\!$-rule)\\
324
+ \includegraphics[width=0.225\linewidth]{results/scatter-SNN-sensitivity.pdf}&
325
+ \includegraphics[width=0.225\linewidth]{results/scatter-SNN-taylor.pdf}&
326
+ \includegraphics[width=0.225\linewidth]{results/scatter-SNN-w2.pdf}&
327
+ \includegraphics[width=0.225\linewidth]{results/scatter-SNN-zb.pdf}\\
328
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-SNN-sensitivity.pdf}&
329
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-SNN-taylor.pdf}&
330
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-SNN-w2.pdf}&
331
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-SNN-zb.pdf}\\
332
+ \end{tabular}
333
+ \caption{\label{fig:shallow-scatter} Quantitative analysis of each decomposition technique. {\em Top:} Scatter plots showing for each data point the total output relevance (x-axis), and the sum of pixel-wise relevances (y-axis). {\em Bottom:} Histograms showing in log scale the number of times a particular value of pixel relevance occurs.}
334
+ \end{figure*}
335
+
336
+ \subsection{Experiment}
337
+
338
+ We now demonstrate empirically the properties of the heatmapping techniques introduced so far on the network of Figure \ref{fig:onelayer}, trained to predict whether a MNIST handwritten digit of class 0--3 is present in the input image, next to a distractor digit from classes 4--9. The neural network is trained to output $x_k = 0$ if there is no digit to detect in the image and $x_k = 100$ if there is one. We minimize the mean-square error between the true scores $\{0,100\}$ and the neural network output $x_k$. Treating the supervised task as a regression problem forces the network to assign approximately the same amount of relevance to all positive examples, and as little relevance as possible to the negative examples.
339
+
340
+ The input image is of size $28 \times 56$ pixels and is coded between $-0.5$ (black) and $+1.5$ (white). The neural network has $28 \times 56$ input neurons $\{x_i\}$, $400$ hidden neurons $\{x_j\}$, and one output $x_k$. Weights $\{w_{ij}\}$ are initialized using a normal distribution of mean $0$ and standard deviation $0.05$. Biases $\{b_j\}$ are initialized to zero and constrained to be negative or zero throughout training, in order to meet our specification of the one-layer network. The neural network is trained for $300000$ iterations of stochastic gradient descent with a minibatch of size $20$ and a small learning rate. Training data is extended with translated versions of MNIST digits. The root $\{\widetilde x_i\}$ for the nearest root Taylor method is chosen in our experiments to be the nearest point such that $f(\{\widetilde x_i\}) < 0.1 f(\{x_i\})$. The $z^\mathcal{B}\!$-rule is computed using the lower and upper bounds $l_i = -0.5$ and $h_i = 1.5$ for all $i$.
341
+
342
+ Heatmaps are shown in Figure \ref{fig:shallow-heatmaps} for sensitivity analysis, nearest root Taylor decomposition, and deep Taylor decomposition with the $w^2$- and $z^\mathcal{B}\!$-rules. In all cases, we observe that the heatmapping procedure correctly assigns most of the relevance to pixels where the digit to detect is located. Sensitivity analysis produces unbalanced and incomplete heatmaps, with some input points reacting strongly and others weakly, and with a considerable amount of relevance associated with the border of the image, where there is no information. Nearest root Taylor produces selective heatmaps that are still not fully complete. The heatmaps produced by deep Taylor decomposition with the $w^2$-rule are similar to nearest root Taylor, but blurred and not perfectly aligned with the data. The domain-aware $z^\mathcal{B}\!$-rule produces heatmaps that are still blurred, but complete and well-aligned with the data.
343
+
344
+ Figure \ref{fig:shallow-scatter} quantitatively evaluates heatmapping techniques of Figure \ref{fig:shallow-heatmaps}. The scatter plots compare the total output relevance with the sum of pixel-wise relevances. Each point in the scatter plot is a different data point drawn independently from the input distribution. These scatter plots test empirically for each heatmapping method whether it is conservative in the sense of Definition \ref{def:conservative}. In particular, if all points lie on the diagonal line of the scatter plot, then $\sum_p R_p = R_f$, and the heatmapping is conservative. The histograms just below test empirically whether the studied heatmapping methods satisfy positivity in the sense of Definition \ref{def:positive}, by counting the number of times (shown on a log-scale) pixel-wise contributions $R_p$ take a certain value. Red color in the histogram indicates positive relevance assignments, and blue color indicates negative relevance assignments. Therefore, an absence of blue bars in the histogram indicates that the heatmap is positive (the desired behavior). Overall, the scatter plots and the histograms produce a complete description of the degree of consistency of the heatmapping techniques in the sense of Definition \ref{def:consistent}.
345
+
346
+ Sensitivity analysis only measures a local effect and therefore does not conceptually redistribute relevance onto the input. However, we can still measure the relative strength of computed sensitivities between examples or pixels. The nearest root Taylor approach, although producing mostly positive heatmaps, dissipates a large fraction of the relevance. The deep Taylor decomposition, on the other hand, ensures full consistency, as theoretically predicted by Propositions \ref{prop:w2rule-consistent} and \ref{prop:zbrule-consistent}. The $z^\mathcal{B}$-rule spreads relevance onto more pixels than methods based on nearest root, as shown by the shorter tail of its relevance histogram.
347
+
348
+ \begin{figure*}
349
+ \centering
350
+ \includegraphics[width=0.95\textwidth]{figures/figure7.pdf}
351
+ \caption{Left: Example of a 3-layer deep network, composed of increasingly high-level feature extractors. Right: Diagram of the two proposed relevance models for redistributing relevance onto lower layers.}
352
+ \label{figure:hierarchy}
353
+ \end{figure*}
354
+
355
+ \section{Application to Deep Networks}
356
+ \label{section:twolayers}
357
+
358
+ In order to represent complex hierarchical problems efficiently, one needs deeper architectures. These architectures are typically made of several layers of nonlinearity, where each layer extracts features at a different scale. An example of a deep architecture is shown in Figure \ref{figure:hierarchy} (left). In this example, the input is first processed by feature extractors localized in the pixel space. The resulting features are combined into more complex mid-level features that cover more pixels. Finally, these more complex features are combined in a final stage of nonlinear mapping that produces a score determining whether the object to detect is present in the input image or not. A practical example of a deep network with a similar hierarchical architecture, frequently used for image recognition tasks, is the convolutional neural network~\cite{lecun-89}.
359
+
360
+ In Sections \ref{section:decomposition} and \ref{section:onelayer}, we have assumed the existence and knowledge of a functional mapping between the neuron activities at a given layer and the relevances in the higher layer. However, in deep architectures, this mapping may be unknown (although it may still exist). In order to redistribute the relevance from the higher layers to the lower layers, one needs to make this mapping explicit. For this purpose, we introduce the concept of a relevance model.
361
+
362
+ A relevance model is a function that maps a set of neuron activations at a given layer to the relevance of a neuron in a higher layer, and whose output can be redistributed onto its input variables, for the purpose of propagating relevance backwards in the network. For the deep network of Figure \ref{figure:hierarchy} (left), one can, for example, try to predict $R_k$ from $\{x_i\}$, which then allows us to decompose the predicted relevance $R_k$ into lower-layer relevances $\{R_i\}$. For practical purposes, the relevance models we will consider borrow the structure of the one-layer network studied in Section \ref{section:onelayer}, for which we have already derived a deep Taylor decomposition.
363
+
364
+ Upper-layer relevance is not only determined by the input neuron activations of the considered layer, but also by high-level information (i.e. abstractions) that has been formed in the top layers of the network. These high-level abstractions are necessary to ensure global cohesion between the low-level parts of the heatmap.
365
+
366
+ \subsection{Min-Max Relevance Model}
367
+
368
+ We first consider a trainable relevance model of $R_k$. This relevance model is illustrated in Figure \ref{figure:hierarchy}-1 and is designed to incorporate both bottom-up and top-down information, in a way that the relevance can still be fully decomposed in terms of input neurons. It is defined as
369
+ \begin{align*}
370
+ y_j &= \max\big(0,\textstyle{\sum}_i x_i v_{ij} + a_j \big)\\
371
+ \widehat R_k &= \textstyle{\sum}_j y_j.
372
+ \end{align*}
373
+ where $a_j = \min(0,\textstyle{\sum}_{l} R_{l} v_{lj} + d_j)$ is a negative bias that depends on upper-layer relevances, and where $\sum_l$ runs over the detection neurons of that upper layer. This negative bias plays the role of an inhibitor; in particular, it prevents the activation of the detection unit $y_j$ of the relevance model in the case where no upper-level abstraction in $\{R_l\}$ matches the feature detected in $\{x_i\}$.
374
+
375
+ The parameters $\{v_{ij},v_{lj},d_j\}$ of the relevance model are learned by minimization of the mean square error objective
376
+ \begin{align*}
377
+ \min \big\langle (\widehat R_k - R_k )^2 \big\rangle,
378
+ \end{align*}
379
+ where $R_k$ is the true relevance, $\widehat R_k$ is the predicted relevance, and $\langle \cdot \rangle$ is the expectation with respect to the data distribution.
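+
+ A NumPy sketch of the forward pass of this relevance model is given below (the parameter arrays \texttt{V}, \texttt{Vtop}, \texttt{d} and the inputs \texttt{x}, \texttt{Rl} are hypothetical; the training loop that fits these parameters to the objective above is omitted):
+ \begin{verbatim}
+ import numpy as np
+
+ def minmax_relevance_model(x, Rl, V, Vtop, d):
+     # top-down inhibitory bias a_j (non-positive by construction)
+     a = np.minimum(0.0, Rl @ Vtop + d)
+     y = np.maximum(0.0, x @ V + a)   # detection units of the relevance model
+     Rk_hat = y.sum()                 # predicted upper-layer relevance
+     return y, Rk_hat
+ \end{verbatim}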
380
+
381
+
382
+ Because the relevance model has exactly the same structure as the one-layer neural network described in Section \ref{section:onelayer}, in particular, because $a_j$ is negative and only weakly dependent on the set of neurons $\{x_i\}$, one can apply the same set of rules for relevance propagation. That is, we compute
383
+ \begin{align}
384
+ \boxed{R_{j} = y_j}
385
+ \label{eq:rm-x}
386
+ \end{align}
387
+ for the pooling layer and
388
+ \begin{align}
389
+ \boxed{R_{i} = \sum_j \frac{q_{ij}}{\sum_{i'} q_{i'j}} R_{j}}
390
+ \label{eq:rm-q}
391
+ \end{align}
392
+ for the detection layer, where $q_{ij} = v_{ij}^2$, $q_{ij} = x_{i} v_{ij}^+$, or $q_{ij} = x_{i} v_{ij} - l_i v_{ij}^+ - h_i v_{ij}^-$ when choosing the $w^2$-, $z^+\!$-, or $z^\mathcal{B}\!$-rule, respectively. This set of equations, used to backpropagate relevance from $R_k$ to $\{R_i\}$, is approximately conservative, with an approximation error that is determined by how much, on average, the output of the relevance model $\widehat R_k$ differs from the true relevance $R_k$.
393
+
394
+ \subsection{Training-Free Relevance Model}
395
+ \label{section:twolayers-mf}
396
+
397
+ A large deep neural network may have taken weeks or months to train, and we should be able to explain it without having to train a relevance model for each neuron. We consider the original feature extractor
398
+ \begin{align*}
399
+ x_j &= \max\big(0,\textstyle{\sum}_i x_i w_{ij} + b_j\big)\\
400
+ x_k &= \|\{x_j\}\|_p
401
+ \end{align*}
402
+ where the $L^p$-norm can represent a variety of pooling operations such as sum-pooling or max-pooling. Assuming that the upper-layer has been explained by the $z^+\!$-rule, and indexing by $l$ the detection neurons of that upper-layer, we can write the relevance $R_k$ as
403
+ \begin{align*}
404
+ R_k
405
+ &= \sum_l \frac{x_k w_{kl}^+}{\sum_{k'} x_{k'} w_{k'l}^+} R_l\\
406
+ &= \big({\textstyle \sum}_j x_j\big) \cdot
407
+ \frac{\|\{x_j\}\|_p}{\|\{x_j\}\|_1} \cdot
408
+ \sum_l \frac{w_{kl}^+ R_l}{\sum_{k'} x_{k'} w_{k'l}^+}
409
+ \end{align*}
410
+ The first term is a linear pooling over detection units that has the same structure as the network of Section \ref{section:onelayer}. The second term is a positive $L^p/L^1$ pooling ratio, which is invariant under any permutation of the neurons $\{x_j\}$, or multiplication of these neurons by a common scalar. The last term is a positive weighted sum of higher-level relevances that measures the sensitivity of the neuron relevance to its activation. It is mainly determined by the relevance found in higher layers and can be viewed as a top-down contextualization term $d_k(\{R_l\})$. Thus, we rewrite the relevance as
411
+ $$
412
+ R_k = \big({\textstyle \sum}_j x_j\big) \cdot c_k \cdot d_k(\{R_l\})
413
+ $$
414
+ where the pooling ratio $c_k > 0$ and the top-down term $d_k(\{R_l\}) > 0$ are only weakly dependent on $\{x_j\}$ and are approximated as constant terms. This relevance model is illustrated in Figure \ref{figure:hierarchy}-2. Because the relevance model above has the same structure as the network of Section \ref{section:onelayer} (up to a constant factor), it is easy to derive its Taylor decomposition, in particular one can show that
415
+ \begin{align*}
416
+ \boxed{R_{j} = \frac{x_j}{\sum_{j'} x_{j'}} R_k}
417
+ \end{align*}
418
+ where relevance is redistributed in proportion to activations in the detection layer, and that
419
+ \begin{align*}
420
+ \boxed{R_{i} = \sum_j \frac{q_{ij}}{\sum_{i'} q_{i'j}} R_{j}}
421
+ \end{align*}
422
+ where $q_{ij} = w_{ij}^2$, $q_{ij} = x_{i} w_{ij}^+$, or $q_{ij} = x_{i} w_{ij} - l_i w_{ij}^+ - h_i w_{ij}^-$ when choosing the $w^2$-, $z^+\!$-, or $z^\mathcal{B}\!$-rule, respectively. If the $z^+\!$-rule is chosen for that layer as well, the same training-free decomposition technique can be applied to the layer below, and the process can be repeated until the input layer is reached. Thus, when using the training-free relevance model, all layers of the network must be decomposed using the $z^+\!$-rule, except the first layer, for which other rules can be applied, such as the $w^2$-rule or the $z^\mathcal{B}\!$-rule.
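+
+ Putting these rules together, a training-free backward pass through a stack of fully-connected ReLU layers can be sketched in NumPy as follows, using the $z^+\!$-rule in all hidden layers and the $z^\mathcal{B}\!$-rule in the first layer. The list \texttt{layers} of weight/bias pairs and the recorded input activations are hypothetical; sum-pooling layers are assumed to have been absorbed into the relevance of their detection units as described above, and non-positive biases are simply not redistributed in this sketch.
+ \begin{verbatim}
+ import numpy as np
+
+ def deep_taylor_relprop(activations, layers, R_out, l, h):
+     # activations[k] is the input of layer k; R_out is the relevance
+     # assigned to the network output (equal to its activation).
+     R = R_out
+     for k in reversed(range(len(layers))):
+         W, b = layers[k]
+         x = activations[k]
+         if k == 0:                              # pixel layer: z^B-rule
+             Wp, Wm = np.maximum(0.0, W), np.minimum(0.0, W)
+             Z = x[:, None] * W - l[:, None] * Wp - h[:, None] * Wm
+         else:                                   # hidden layers: z^+-rule
+             Z = x[:, None] * np.maximum(0.0, W)
+         R = Z @ (R / (Z.sum(axis=0) + 1e-12))
+     return R                                    # pixel-wise relevances
+ \end{verbatim}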
423
+
424
+ The technical advantages and disadvantages of each heatmapping method are summarized in the table below:
425
+
426
+ \begin{center}
427
+ \begin{tabular}{l||c|c|c|c}
428
+ & \!\!sensitivity\!\! & \!Taylor\! & \!min-max\! & \!\parbox{11mm}{\centering training\\[-1mm]-free}\!\\\hline
429
+ consistent & no & no & \bf yes$^\dagger$ & \bf yes \\
430
+ unique solution & \bf yes & no$^\ddagger$ & no$^\ddagger$ & \bf yes\\
431
+ training-free & \bf yes & \bf yes & no & \bf yes\\
432
+ fast computation\!\! & \bf yes & no & \bf yes & \bf yes
433
+ \end{tabular}
434
+ \end{center}
435
+ {\small $^\dagger$ Conservative up to a fitting error between the redistributed relevance and the relevance model output. $^\ddagger$ Root finding and relevance model training are in the general case both nonconvex.}
436
+
437
+ \begin{figure*}[t]
438
+ \centering \small
439
+ \begin{tabular}{c|c|c|c}
440
+ Sensitivity (rescaled) &
441
+ Taylor (nearest root) &
442
+ Deep Taylor (min-max) &
443
+ Deep Taylor (training-free)\\
444
+ \includegraphics[width=0.225\linewidth]{results/heatmap-DNN-sensitivity.png} &
445
+ \includegraphics[width=0.225\linewidth]{results/heatmap-DNN-taylor.png} &
446
+ \includegraphics[width=0.225\linewidth]{results/heatmap-MRM.png} &
447
+ \includegraphics[width=0.225\linewidth]{results/heatmap-DNN.png}\\
448
+ \end{tabular}
449
+ \caption{\label{fig:deep-heatmaps} Comparison of heatmaps produced by various decompositions and relevance models. Each input image is presented with its associated heatmap.}
450
+ \vskip 5mm
451
+ \begin{tabular}{c|c|c|c}
452
+ Sensitivity (rescaled) &
453
+ Taylor (nearest root) &
454
+ Deep Taylor (min-max) &
455
+ Deep Taylor (training-free)\\
456
+ \includegraphics[width=0.225\linewidth]{results/scatter-DNN-sensitivity.pdf}&
457
+ \includegraphics[width=0.225\linewidth]{results/scatter-DNN-taylor.pdf} &
458
+ \includegraphics[width=0.225\linewidth]{results/scatter-MRM.pdf} &
459
+ \includegraphics[width=0.225\linewidth]{results/scatter-DNN.pdf}\\
460
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-DNN-sensitivity.pdf}&
461
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-DNN-taylor.pdf}&
462
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-MRM.pdf}&
463
+ \includegraphics[width=0.225\linewidth,trim=0 10 0 0]{results/histogram-DNN.pdf}\\
464
+ \end{tabular}
465
+ \caption{\label{fig:deep-scatter} {\em Top:} Scatter plots showing for each type of decomposition and data points the predicted class score (x-axis), and the sum-of-relevance in the input layer (y-axis). {\em Bottom:} Histograms showing the number of times (on a log-scale) a particular pixel-wise relevance score occurs.}
466
+ \end{figure*}
467
+
468
+ \subsection{Experiment on MNIST}
469
+
470
+ We train a neural network with two layers of nonlinearity on the same MNIST problem as in Section \ref{section:onelayer}. The neural network is composed of a first detection-pooling layer with $400$ detection neurons sum-pooled into $100$ units (i.e. we sum-pool groups of 4 detection units). A second detection-pooling layer with $400$ detection neurons is applied to the resulting $100$-dimensional output of the previous layer, and activities are sum-pooled onto a single unit representing the deep network output. In addition, we learn a min-max relevance model for the first layer. The relevance model is trained to minimize the mean-square error between the relevance model output and the true relevance (obtained by application of the $z^+\!$-rule in the top layer). The deep network and the relevance models are trained using stochastic gradient descent with minibatch size $20$, for $300000$ iterations, and using a small learning rate.
471
+
472
+ Figure \ref{fig:deep-heatmaps} shows heatmaps obtained with sensitivity analysis, standard Taylor decomposition, and deep Taylor decomposition with different relevance models. We apply the $z^\mathcal{B}\!$-rule to backpropagate relevance of pooled features onto pixels. Sensitivity analysis and standard Taylor decomposition produce noisy and incomplete heatmaps. These two methods do not handle well the increased depth of the network. The min-max Taylor decomposition and the training-free Taylor decomposition produce relevance maps that are complete, and qualitatively similar to those obtained by deep Taylor decomposition of the shallow architecture in Section \ref{section:onelayer}. This demonstrates the high level of transparency of deep Taylor methods with respect to the choice of architecture. The heatmaps obtained by the trained min-max relevance model and by the training-free method are of similar quality.
473
+
474
+ Similar advantageous properties of the deep Taylor decomposition are observed quantitatively in the plots of Figure \ref{fig:deep-scatter}. The standard Taylor decomposition is positive, but dissipates relevance. The deep Taylor decomposition with the min-max relevance model produces near-conservative heatmaps, and the training-free deep Taylor decomposition produces heatmaps that are fully conservative. Both deep Taylor decomposition variants shown here also ensure positivity, due to the application of the $z^\mathcal{B}\!$- and $z^+\!$-rules in the respective layers.
475
+
476
+ \begin{figure*}
477
+ \centering \small
478
+ \begin{tabular}{cccc}
479
+ Image &
480
+ Sensitivity (CaffeNet) &
481
+ Deep Taylor (CaffeNet) &
482
+ Deep Taylor (GoogleNet)\\
483
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/frogs2-animal-178429_1280.jpg}&
484
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_sensitivity/frogs2-animal-178429_1280_hm.png}&
485
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/frogs2-animal-178429_1280_hm.png}&
486
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/gnet_deeptaylor/frogs2-animal-178429_1280_hm.png}\\
487
+
488
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/hammerhead-shark-298238_1280.jpg}&
489
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_sensitivity/hammerhead-shark-298238_1280_hm.png}&
490
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/hammerhead-shark-298238_1280_hm.png}&
491
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/gnet_deeptaylor/hammerhead-shark-298238_1280_hm.png}\\
492
+
493
+ \includegraphics[trim=0 25 0 45,clip,width=0.22\linewidth]{images/kitten-288549_1280.jpg}&
494
+ \includegraphics[trim=0 25 0 45,clip,width=0.22\linewidth]{images/bvlc_sensitivity/kitten-288549_1280_hm.png}&
495
+ \includegraphics[trim=0 25 0 45,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/kitten-288549_1280_hm.png}&
496
+ \includegraphics[trim=0 25 0 45,clip,width=0.22\linewidth]{images/gnet_deeptaylor/kitten-288549_1280_hm.png}\\
497
+
498
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bighorn-sheep-204693_1280.jpg}&
499
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_sensitivity/bighorn-sheep-204693_1280_hm.png}&
500
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/bighorn-sheep-204693_1280_hm.png}&
501
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/gnet_deeptaylor/bighorn-sheep-204693_1280_hm.png}\\
502
+
503
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/matches-171732_1280.jpg}&
504
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_sensitivity/matches-171732_1280_hm.png}&
505
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/matches-171732_1280_hm.png}&
506
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/gnet_deeptaylor/matches-171732_1280_hm.png}\\
507
+
508
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/motorcycle-345001_1280.jpg}&
509
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_sensitivity/motorcycle-345001_1280_hm.png}&
510
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/motorcycle-345001_1280_hm.png}&
511
+ \includegraphics[trim=0 35 0 35,clip,width=0.22\linewidth]{images/gnet_deeptaylor/motorcycle-345001_1280_hm.png}\\
512
+
513
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/scooter_vietnam-265807_1280.jpg}&
514
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/bvlc_sensitivity/scooter_vietnam-265807_1280_hm.png}&
515
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/scooter_vietnam-265807_1280_hm.png}&
516
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/gnet_deeptaylor/scooter_vietnam-265807_1280_hm.png}\\
517
+
518
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/stupa-83774_1280.jpg}&
519
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/bvlc_sensitivity/stupa-83774_1280_hm.png}&
520
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/bvlc_deeptaylor/stupa-83774_1280_hm.png}&
521
+ \includegraphics[trim=0 45 0 25,clip,width=0.22\linewidth]{images/gnet_deeptaylor/stupa-83774_1280_hm.png}
522
+ \end{tabular}
523
+ \caption{Images of different ILSVRC classes (``frog'', ``shark'', ``cat'', ``sheep'', ``matchstick'', ``motorcycle'', ``scooter'', and ``stupa'') given as input to a deep network, and displayed next to the corresponding heatmaps. Heatmap scores are summed over all color channels.}
524
+ \label{fig:ilsvrc}
525
+ \end{figure*}
526
+
527
+ \subsection{Experiment on ILSVRC}
528
+
529
+ We now apply the fast training-free decomposition to explain decisions made by large neural networks (BVLC Reference CaffeNet \cite{Jia13caffe} and GoogleNet \cite{DBLP:journals/corr/SzegedyLJSRAEVR14}) trained on the dataset of the ImageNet large scale visual recognition challenges ILSVRC 2012 \cite{ilsvrc2012} and ILSVRC 2014 \cite{DBLP:journals/ijcv/RussakovskyDSKS15} respectively. For these models, standard Taylor decomposition methods with root finding are computationally too expensive. We keep the neural networks unchanged.
530
+
531
+ The training-free relevance propagation method is tested on a number of images from Pixabay.com and Wikimedia Commons. The $z^\mathcal{B}\!$-rule is applied to the first convolution layer. For all higher convolution and fully-connected layers, the $z^+\!$-rule is applied. Positive biases (which are not allowed in our deep Taylor framework) are treated as neurons onto which relevance can be redistributed (i.e. we add $\max(0,b_j)$ to the denominator of the $z^\mathcal{B}\!$- and $z^+\!$-rules). Normalization layers are bypassed in the relevance propagation pass. In order to visualize the heatmaps in the pixel space, we sum the relevances of the three color channels, leading to single-channel heatmaps, where the red color designates relevant regions.
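+
+ For illustration, the treatment of positive biases amounts to adding a $\max(0,b_j)$ term to the normalization of the rule, so that the bias absorbs a share of the relevance that is not propagated further. A NumPy sketch for the $z^+\!$-rule (hypothetical array names, same conventions as in the earlier sketches):
+ \begin{verbatim}
+ import numpy as np
+
+ def zplus_rule_with_bias(x, W, b, Rj):
+     Zp = x[:, None] * np.maximum(0.0, W)          # z_ij^+
+     norm = Zp.sum(axis=0) + np.maximum(0.0, b)    # positive bias acts as an
+                                                   # extra (absorbing) neuron
+     return Zp @ (Rj / (norm + 1e-12))
+ \end{verbatim}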
532
+
533
+ Figure \ref{fig:ilsvrc} shows the resulting heatmaps for eight different images. Deep Taylor decomposition produces exhaustive heatmaps covering the whole object to detect. On the other hand, sensitivity analysis assigns most of the relevance to a few pixels. Deep Taylor heatmaps for CaffeNet and GoogleNet have a high level of similarity, showing the transparency of the heatmapping method to the choice of deep network architecture. However, GoogleNet being more accurate, its corresponding heatmaps are also of better quality, with more heat associated with the truly relevant parts of the image. Heatmaps identify the dorsal fin of the shark, the head of the cat, the flame above the matchsticks, or the wheels of the motorbike. The heatmaps are able to detect two instances of the same object within the same image, for example, the two frogs and the two stupas. The heatmaps also ignore most of the distracting structure, such as the horizontal lines above the cat's head, the wood pattern behind the matches, or the grass behind the motorcycle. Sometimes, the object to detect is shown in a less stereotypical pose or can be confused with the background. For example, the sheep in the top-right image are overlapping and superposed on a background of the same color, and the scooter is difficult to separate from the complex and high-contrast urban background. This confuses the network and the heatmapping procedure, and in that case, a significant amount of relevance is lost to the background.
534
+
535
+ Figure \ref{fig:zoom} studies the special case of an image of class ``volcano'', and a zoomed portion of it. On a global scale, the heatmapping method recognizes the characteristic outline of the volcano. On a local scale, the relevance is present on both sides of the edge of the volcano, which is consistent with the fact that the two sides of the edge are necessary to detect it. The zoomed portion of the image also reveals different stride sizes in the first convolution layer between CaffeNet (stride 4) and GoogleNet (stride 2). Therefore, our proposed heatmapping technique produces explanations that are interpretable both at a global and local scale in the pixel space.
536
+
537
+ \begin{figure}[t]
538
+ \centering \small
539
+ \includegraphics[width=1.0\linewidth]{figures/figure11.pdf}
540
+ \caption{Image with ILSVRC class ``volcano'', displayed next to its associated heatmaps and a zoom on a region of interest.}
541
+ \label{fig:zoom}
542
+ \end{figure}
543
+
544
+ \section{Conclusion}
545
+ \label{section:conclusion}
546
+
547
+ Nonlinear machine learning models have become standard tools in science and industry due to their excellent performance even for large, complex and high-dimensional problems. In practice, however, it becomes more and more important to understand the underlying nonlinear model, i.e.\ to achieve transparency about {\em what} aspect of the input makes the model decide.
548
+
549
+ To achieve this, we have contributed novel conceptual ideas for deconstructing nonlinear models. Specifically, we have proposed a novel relevance propagation approach based on deep Taylor decomposition that efficiently assesses the importance of single pixels in image classification applications. Thus, we are now able to compute {\em heatmaps} that clearly and intuitively reveal the role of input pixels when classifying an unseen data point.
550
+
551
+ In particular, we have shed light on theoretical connections between the Taylor decomposition of a function and rule-based relevance propagation techniques, showing a clear relationship between these two approaches for a particular class of neural networks. We have introduced the concept of a relevance model as a means to scale deep Taylor decomposition to neural networks with many layers. Our method is stable across different architectures and datasets, and does not require hyperparameter tuning.
552
+
553
+ We would like to stress that the starting point of our framework can be either a neural network model that we have trained and carefully tuned ourselves, or an existing pre-trained deep network (e.g.\ the Caffe Reference ImageNet Model \cite{Jia13caffe}) that has already been shown to achieve excellent performance on benchmarks. In both cases, our layer-wise relevance propagation concept can provide explanations. In other words, our approach is orthogonal to the quest for enhanced results on benchmarks; in fact, we can take any benchmark winner and then enhance its transparency to the user.
554
+
555
+ \bibliographystyle{ieeetr}
556
+
557
+ \bibliography{paper}
558
+
559
+ \end{document}
papers/1512/1512.02902.tex ADDED
@@ -0,0 +1,854 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{cvpr}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \graphicspath{{figs/}}
8
+ \usepackage{amsmath}
9
+ \usepackage{amssymb}
10
+ \usepackage{subfigure}
11
+ \usepackage{booktabs}
12
+ \usepackage{multirow}
13
+ \usepackage{balance}
14
+ \usepackage{enumitem}
15
+ \usepackage{float}
16
+ \usepackage{lipsum}
17
+ \usepackage{rotating}
18
+ \usepackage[utf8]{inputenc}
19
+ \usepackage{bbm}
20
+ \usepackage{caption}
21
+ \captionsetup{font=small}
22
+ \usepackage{soul}
23
+ \usepackage{pifont}
24
+ \usepackage{color}
25
+ \definecolor{gray}{rgb}{0.80, 0.80, 0.80}
26
+ \definecolor{orange}{rgb}{1,0.9,0.65}
27
+ \definecolor{gr}{rgb}{0.9,1,0.6}
28
+ \definecolor{bl}{rgb}{0.9,0.8,1}
29
+ \definecolor{bg}{rgb}{0.8,0.9,1}
30
+ \definecolor{orr}{rgb}{1,0.92,0.75}
31
+ \definecolor{grr}{rgb}{0.8,1,0.5}
32
+ \definecolor{blr}{rgb}{0.97,0.8,1}
33
+ \definecolor{bgr}{rgb}{0.7,0.9,1}
34
+ \definecolor{gry}{rgb}{0.92,0.92,0.92}
35
+
36
+ \definecolor{cadmiumgreen}{rgb}{0.0, 0.42, 0.24}
37
+ \definecolor{cornellred}{rgb}{0.7, 0.11, 0.11}
38
+ \definecolor{cornflowerblue}{rgb}{0.39, 0.58, 0.93}
39
+
40
+ \newcommand{\graypara}[1]{{\color{gray}\lipsum[{#1}]}}
41
+ \usepackage{array}
42
+ \usepackage{multirow}
43
+ \usepackage{colortbl}
44
+
45
+
46
+ \usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
47
+
48
+ \newcommand{\cmark}{{\ding{51}}}\newcommand{\xmark}{{\ding{55}}}
49
+
50
+ \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
51
+ \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
52
+ \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
53
+
54
+
55
+ \cvprfinalcopy
56
+
57
+ \def\cvprPaperID{1284} \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
58
+
59
+ \ifcvprfinal\pagestyle{empty}\fi
60
+ \begin{document}
61
+
62
+ \title{\vspace{-0.5cm}MovieQA: Understanding Stories in Movies through Question-Answering}
63
+
64
+ \author{
65
+ Makarand Tapaswi$^1$,\hspace{1.4cm}Yukun Zhu$^3$,\hspace{1.5cm}Rainer Stiefelhagen$^1$\\
66
+ Antonio Torralba$^2$,\hspace{1.0cm}Raquel Urtasun$^3$,\hspace{1.5cm}Sanja Fidler$^3$
67
+ \vspace{0.1cm}\\
68
+ $^1$Karlsruhe Institute of Technology,
69
+ $^2$Massachusetts Institute of Technology,
70
+ $^3$University of Toronto
71
+ \\
72
+ \texttt{\footnotesize
73
+ \{tapaswi,rainer.stiefelhagen\}@kit.edu,
74
+ torralba@csail.mit.edu,
75
+ \{yukun,urtasun,fidler\}@cs.toronto.edu
76
+ }
77
+ \\
78
+ {\normalsize \url{http://movieqa.cs.toronto.edu}}
79
+ }
80
+
81
+
82
+
83
+ \twocolumn[{\renewcommand\twocolumn[1][]{#1}\maketitle
84
+ \vspace*{-0.7cm}
85
+ \centering
86
+ \includegraphics[width=\linewidth,trim=0 36 0 91,clip]{figs/MovieQA_4Q_1.pdf}
87
+ \vspace*{-0.7cm}
88
+ \captionof{figure}{\small Our MovieQA dataset contains 14,944 questions about 408 movies.
89
+ It contains multiple sources of information: plots, subtitles, video clips, scripts, and DVS transcriptions.
90
+ In this figure we show example QAs from \emph{The Matrix} and localize them in the timeline.}
91
+ \label{fig:frontpage}
92
+ \vspace*{0.5cm}
93
+ }]
94
+
95
+
96
+ \begin{abstract}
97
+ We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text.
98
+ The dataset consists of 14,944 questions about 408 movies with high semantic diversity.
99
+ The questions range from simpler ``Who'' did ``What'' to ``Whom'', to ``Why'' and ``How'' certain events occurred.
100
+ Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators.
101
+ Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS~\cite{Rohrbach15}.
102
+ We analyze our data through various statistics and methods.
103
+ We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard.
104
+ We make this data set public along with an evaluation benchmark to encourage further work in this challenging domain.
105
+ \end{abstract}
106
+
107
+ \vspace{-2mm}
108
+ \vspace{-1mm}
109
+ \section{Introduction}
110
+ \label{sec:intro}
111
+ \vspace{-1mm}
112
+
113
+ \begin{figure*}[t]
114
+ \vspace{-3mm}
115
+ \includegraphics[height=0.161\linewidth]{figs/et}
116
+ \includegraphics[height=0.161\linewidth]{figs/vegas1}
117
+ \includegraphics[height=0.161\linewidth]{figs/forrest}
118
+ \includegraphics[height=0.161\linewidth]{figs/10things}\\[1mm]
119
+ \begin{scriptsize}
120
+ \addtolength{\tabcolsep}{-2.1pt}
121
+ \begin{tabular}{m{0.00cm}m{3.63cm}m{0.00cm}m{4.1cm}m{0.00cm}m{3.72cm}m{0.02cm}m{4cm}}
122
+ & \hspace{-3.6mm}{\color{cornellred}{\bf Q}}: How does E.T. show his happiness that he is finally returning home? & & \hspace{-3.6mm}{\color{cornellred}{\bf Q}}: Why do Joy and Jack get married that first night they meet in Las Vegas? & & \hspace{-3.6mm}{\color{cornellred}{\bf Q}}: Why does Forrest undertake a three-year marathon? & & \vspace{-2.6mm}\hspace{-3.6mm}{\color{cornellred}{\bf Q}}: How does Patrick start winning Kat over?\\[-1mm]
123
+ & \hspace{-3.4mm}{\color{cadmiumgreen}{\bf A}}: His heart lights up & & \hspace{-3.4mm}{\color{cadmiumgreen}{\bf A}}: They are both vulnerable and totally drunk & & \hspace{-3.4mm}{\color{cadmiumgreen}{\bf A}}: Because he is upset that Jenny left him & & \hspace{-3.4mm}{\color{cadmiumgreen}{\bf A}}: By getting personal information about her likes and dislikes
124
+ \end{tabular}
125
+ \end{scriptsize}
126
+ \vspace{-3mm}
127
+ \caption{\small Examples from the MovieQA dataset.
128
+ For illustration we show a single frame, however, all these questions/answers are time-stamped to a much longer clip in the movie.
129
+ Notice that while some questions can be answered using vision or dialogs alone, most require both.
130
+ Vision can be used to locate the scene set by the question, and semantics extracted from dialogs can be used to answer it.}
131
+ \label{fig:example_questions}
132
+ \vspace{-3mm}
133
+ \end{figure*}
134
+
135
+ Fast progress in Deep Learning as well as a large amount of available labeled data has significantly pushed forward the performance in many visual tasks such as image tagging, object detection and segmentation, action recognition, and image/video captioning.
136
+ We are steps closer to applications such as assistive solutions for the visually impaired, or cognitive robotics, which require a holistic understanding of the visual world by reasoning about all these tasks in a common framework.
137
+ However, a truly intelligent machine would ideally also infer high-level semantics underlying human actions such as motivation, intent and emotion, in order to react and, possibly, communicate appropriately.
138
+ These topics have only begun to be explored in the literature~\cite{thewhy,ZhuICCV15}.
139
+
140
+
141
+
142
+ A great way of showing one's understanding about the scene is to be able to answer any question about it~\cite{malinowski14nips}.
143
+ This idea gave rise to several question-answering datasets which provide a set of questions for each image along with multi-choice answers.
144
+ These datasets are either based on RGB-D images~\cite{malinowski14nips} or a large collection of static photos such as Microsoft COCO~\cite{VQA,VisualMadlibs}.
145
+ The types of questions typically asked are ``What'' is there and ``Where'' is it, what attributes an object has, what is its relation to other objects in the scene, and ``How many'' objects of certain type are present.
146
+ While these questions verify the holistic nature of our vision algorithms, there is an inherent limitation in what can be asked about a static image.
147
+ High-level semantics about actions and their intent is mostly lost and can typically only be inferred from temporal, possibly life-long visual observations.
148
+
149
+ Movies provide us with snapshots from people's lives that link into stories, allowing an experienced human viewer to get a high-level understanding of the characters, their actions, and the motivations behind them. Our goal is to create a question-answering dataset to evaluate machine comprehension of both complex videos such as movies and their accompanying text.
150
+ We believe that this data will help push automatic semantic understanding to the next level, required to truly understand stories of such complexity.
151
+
152
+
153
+
154
+ This paper introduces MovieQA, a large-scale question-answering dataset about movies.
155
+ Our dataset consists of 14,944 multiple-choice questions, each with five answer options of which only one is correct, sourced from 408 movies with high semantic diversity.
156
+ For 140 of these movies (6,462 QAs), we have timestamp annotations indicating the location of the question and answer in the video.
157
+ The questions range from simpler ``Who'' did ``What'' to ``Whom'' that can be solved by vision alone, to ``Why'' and ``How'' something happened, that can only be solved by exploiting both the visual information and dialogs (see Fig.~\ref{fig:example_questions} for a few example ``Why'' and ``How'' questions).
158
+ Our dataset is unique in that it contains multiple sources of information: video clips, subtitles, scripts, plots, and DVS~\cite{Rohrbach15} as illustrated in Fig.~\ref{fig:frontpage}.
159
+ We analyze the data through various statistics and intelligent baselines that mimic how different ``students'' would approach the quiz.
160
+ We further extend existing QA techniques to work with our data and show that question-answering with such open-ended semantics is hard.
161
+ We have created an online benchmark with a leaderboard (\url{http://movieqa.cs.toronto.edu/leaderboard}) to encourage further work in this challenging domain.
162
+
163
+ \vspace{-1mm}
164
+ \section{Related work}
165
+ \label{sec:relwork}
166
+ \vspace{-1mm}
167
+
168
+ Integration of language and vision is a natural step towards improved understanding and is receiving increasing attention from the research community.
169
+ This is in large part due to efforts in large-scale data collection such as Microsoft's COCO~\cite{lin2014microsoft}, Flickr30K~\cite{Flickr30k} and Abstract Scenes~\cite{Abstract15} providing tens to hundreds of thousand images with natural language captions.
170
+ Having access to such data enabled the community to shift from hand-crafted language templates typically used for image description~\cite{BabyTalk} or retrieval-based approaches~\cite{Farhadi10,Im2Txt,Yang11} to deep neural models~\cite{Zitnick14,Karpathy15,kiros15,Vinyals14} that achieve impressive captioning results.
171
+ Another way of conveying semantic understanding of both vision and text is by retrieving semantically meaningful images given a natural language query~\cite{Karpathy15}.
172
+ An interesting direction, particularly for the goals of our paper, is also the task of learning common sense knowledge from captioned images~\cite{Vedantam15}.
173
+ This has so far been demonstrated only on synthetic clip-art scenes which enable perfect visual parsing.
174
+
175
+ {\bf Video understanding via language.}
176
+ In the video domain, there are fewer works on integrating vision and language, likely due to less available labeled data.
177
+ In~\cite{lrcn2014,Venugopalan:2014wc}, the authors caption video clips using LSTMs, \cite{rohrbach13iccv} formulates description as a machine translation problem, while older work uses templates~\cite{SanjaUAI12,Das:2013br,krishnamoorthy:aaai13}.
178
+ In~\cite{Lin:2014db}, the authors retrieve relevant video clips for natural language queries, while~\cite{ramanathan13} exploits captioned clips to learn action and role models.
179
+ For TV series in particular, the majority of work aims at recognizing and tracking characters in the videos~\cite{Baeuml2013_SemiPersonID,Bojanowski:2013bg,Ramanathan:2014fj,Sivic:2009kt}.
180
+ In~\cite{Cour08,Sankar:bv}, the authors aligned videos with movie scripts in order to improve scene prediction.
181
+ The work of~\cite{Tapaswi_J1_PlotRetrieval} aligns movies with their plot synopses with the aim of allowing semantic browsing of large video content via textual queries.
182
+ Just recently,~\cite{Tapaswi2015_Book2Movie,ZhuICCV15} aligned movies to books with the aim to ground temporal visual data with verbose and detailed descriptions available in books.
183
+
184
+ {\bf Question-answering.}
185
+ QA is a popular task in NLP with significant advances made recently with neural models such as memory networks~\cite{Sukhbaatar2015}, deep LSTMs~\cite{Hermann15}, and structured prediction~\cite{wang2015mctest}.
186
+ In computer vision,~\cite{malinowski14nips} proposed a Bayesian approach on top of a logic-based QA system~\cite{Liang13}, while~\cite{Malinowski15,mengye15} encoded both an image and the question using an LSTM and decoded an answer.
187
+ We are not aware of QA methods addressing the temporal domain.
188
+
189
+ {\bf QA Datasets.}
190
+ Most available datasets focus on image~\cite{KongCVPR14,lin2014microsoft,Flickr30k,Abstract15} or video description~\cite{Chen11,Rohrbach15,YouCook}.
191
+ Particularly relevant to our work is the MovieDescription dataset~\cite{Rohrbach15} which transcribed text from the Described Video Service (DVS), a narration service for the visually impaired, for a collection of over 100 movies.
192
+ For QA, \cite{malinowski14nips} provides questions and answers (mainly lists of objects, colors, \etc) for the NYUv2 RGB-D dataset, while~\cite{VQA,VisualMadlibs} do so for MS-COCO with a dataset of a million QAs.
193
+ While these datasets are unique in testing the vision algorithms in performing various tasks such as recognition, attribute induction and counting, they are inherently limited to static images.
194
+ In our work, we collect a large QA dataset sourced from over 400 movies with challenging questions that require semantic reasoning over a long temporal domain.
195
+
196
+ Our dataset is also related to purely text QA datasets such as MCTest~\cite{MCTest} which contains 660 short stories with 4 multi-choice QAs each, and~\cite{Hermann15} which converted 300K news summaries into Cloze-style questions.
197
+ We go beyond these datasets by having significantly longer text, as well as multiple sources of available information (plots, subtitles, scripts and DVS).
198
+ This makes our data one of a kind.
199
+ \vspace{-1.5mm}
200
+ \section{MovieQA dataset}
201
+ \label{sec:movieqa}
202
+ \vspace{-1mm}
203
+
204
+
205
+
206
+ \begin{table}[t]
207
+ \vspace{-2.5mm}
208
+ \centering
209
+ \tabcolsep=0.19cm
210
+ {\small
211
+ \begin{tabular}{lrrrr}
212
+
213
+ \toprule
214
+ & \textsc{Train} & \textsc{Val} & \textsc{Test} & \textsc{Total} \\
215
+ \hline
216
+ \multicolumn{5}{>{\columncolor{bgr}}c}{Movies with Plots and Subtitles} \\[0.1mm]
217
+ \hline
218
+ \#Movies & 269 & 56 & 83 & 408 \\
219
+ \#QA & 9848 & 1958 & 3138 & 14944 \\
220
+ Q \#words & 9.3 & 9.3 & 9.5 & 9.3 $\pm$ 3.5 \\
221
+ CA. \#words & 5.7 & 5.4 & 5.4 & 5.6 $\pm$ 4.1 \\
222
+ WA. \#words & 5.2 & 5.0 & 5.1 & 5.1 $\pm$ 3.9 \\
223
+ \hline
224
+ \multicolumn{5}{>{\columncolor{orr}}c}{Movies with Video Clips} \\[0.1mm]
225
+ \hline
226
+ \#Movies & 93 & 21 & 26 & 140 \\
227
+ \#QA & 4318 & 886 & 1258 & 6462 \\
228
+ \#Video clips & 4385 & 1098 & 1288 & 6771 \\
229
+ Mean clip dur. (s) & 201.0 & 198.5 & 211.4 & 202.7 $\pm$ 216.2 \\
230
+ Mean QA \#shots & 45.6 & 49.0 & 46.6 & 46.3 $\pm$ 57.1 \\
231
+ \bottomrule
232
+ \end{tabular}
233
+ }
234
+ \vspace*{-0.3cm}
235
+ \caption{MovieQA dataset stats.
236
+ Our dataset supports two modes of answering: text and video.
237
+ We present the split into train, val, and test splits for the number of movies and questions.
238
+ We also present mean counts with standard deviations in the total column.}
239
+ \vspace*{-0.35cm}
240
+ \label{tab:qa_stats}
241
+ \end{table}
242
+ The goal of our paper is to create a challenging benchmark that evaluates semantic understanding over long temporal data.
243
+ We collect a dataset with very diverse sources of information that can be exploited in this challenging domain.
244
+ Our data consists of quizzes about movies that the automatic systems will have to answer.
245
+ For each movie, a quiz comprises a set of questions, each with 5 multiple-choice answers, only one of which is correct.
246
+ The system has access to various sources of textual and visual information, which we describe in detail below.
247
+
248
+ We collected 408 subtitled movies, and obtained their extended summaries in the form of plot synopses from \emph{Wikipedia}.
249
+ We crawled \emph{imsdb} for scripts, which were available for 49\% (199) of our movies.
250
+ A fraction of our movies (60) come with DVS transcriptions provided by~\cite{Rohrbach15}.
251
+
252
+ {\bf Plot synopses}
253
+ are movie summaries that fans write after watching the movie.
254
+ Synopses widely vary in detail and range from one to 20 paragraphs, but focus on describing content that is directly relevant to the story.
255
+ They rarely contain detailed visual information (\eg~character appearance), and focus more on describing the movie events and character interactions.
256
+ We exploit plots to gather our quizzes.
257
+
258
+ {\bf Videos and subtitles.}
259
+ An average movie is about 2 hours in length and has over 198K frames and almost 2000 shots.
260
+ Note that video alone contains information about e.g., ``Who'' did ``What'' to ``Whom'', but may be lacking in information to explain why something happened.
261
+ Dialogs play an important role, and only both modalities together allow us to fully understand the story.
262
+ Note that subtitles do not contain speaker information. In our dataset, we provide video clips rather than full movies.
263
+
264
+ {\bf DVS} is a service that narrates movie scenes to the visually impaired by inserting relevant descriptions in between dialogs.
265
+ These descriptions contain sufficient ``visual'' information about the scene to allow the visually impaired audience to follow the movie.
266
+ DVS thus acts as a proxy for a perfect vision system, and is another source for answering.
267
+
268
+
269
+ {\bf Scripts.}
270
+ The scripts that we collected are written by screenwriters and serve as a guideline for movie making.
271
+ They typically contain detailed descriptions of scenes, and, unlike subtitles, contain both dialogs and speaker information.
272
+ Scripts are thus similar in content to, if not richer than, DVS+subtitles; however, they are not always entirely faithful to the movie, as the director may exercise artistic freedom.
273
+
274
+
275
+ \vspace{-1mm}
276
+ \subsection{QA Collection method}
277
+ \vspace{-1mm}
278
+
279
+ \begin{table*}[t]
280
+ \centering
281
+ \tabcolsep=0.16cm
282
+ {\small
283
+ \begin{tabular}{lccclllrr}
284
+ \toprule
285
+ & Txt & Img & Vid & Goal
286
+ & Data source & AType & \#Q & AW \\
287
+ \midrule
288
+ MCTest~\cite{MCTest} & \cmark & - & - & reading comprehension
289
+ & Children stories & MC (4) & 2,640 & 3.40 \\
290
+
291
+ bAbI~\cite{Weston14} & \cmark & - & - & reasoning for toy tasks
292
+ & Synthetic & Word & 20$\times$2,000 & 1.0 \\
293
+
294
+ CNN+DailyMail~\cite{Hermann15} & \cmark & - & - & information abstraction
295
+ & News articles & Word & 1,000,000* & 1* \\
296
+
297
+ DAQUAR~\cite{malinowski14nips} & - & \cmark & - & visual: counts, colors, objects
298
+ & NYU-RGBD & Word/List & 12,468 & 1.15 \\
299
+
300
+ Visual Madlibs~\cite{VisualMadlibs} & - & \cmark & - & visual: scene, objects, person, ...
301
+ & COCO+Prompts & FITB/MC (4)& 2$\times$75,208* & 2.59 \\
302
+
303
+ VQA (v1)~\cite{VQA} & - & \cmark & - & visual understanding
304
+ & COCO+Abstract & Open/MC (18) & 764,163 & 1.24 \\
305
+
306
+ MovieQA & \cmark & \cmark & \cmark & text+visual story comprehension
307
+ & Movie stories & MC (5) & 14,944 & 5.29 \\
308
+ \bottomrule
309
+ \end{tabular}
310
+ }
311
+ \vspace{-0.25cm}
312
+ \caption{A comparison of various QA datasets. First three columns depict the modality in which the story is presented. AType: answer type; AW: average \# of words in answer(s); MC (N): multiple choice with N answers; FITB: fill in the blanks; *estimated information.}
313
+ \vspace{-0.3cm}
314
+ \label{tab:dataset-comparison}
315
+ \end{table*}
316
+ \begin{figure}
317
+ \vspace{-2.5mm}
318
+ \centering
319
+ \includegraphics[width=0.93\linewidth,trim=0 0 0 20,clip]{figs/stats-qword_calength.pdf}
320
+ \vspace*{-0.4cm}
321
+ \caption{Average number of words in MovieQA dataset based on the first word in the question. Area of a bubble indicates \#QA.}
322
+ \vspace*{-0.4cm}
323
+ \label{fig:stats:qword_calength}
324
+ \end{figure}
325
+
326
+ Since videos are difficult and expensive to provide to annotators, we used plot synopses as a proxy for the movie.
327
+ While creating quizzes, our annotators only referred to the story plot and were thus automatically coerced into asking story-like questions.
328
+ We split our annotation efforts into two primary parts to ensure high quality of the collected data.
329
+
330
+ {\bf Q and correct A.}
331
+ Our annotators were first asked to select a movie from a large list, and were shown its plot synopsis one paragraph at a time.
332
+ For each paragraph, the annotator had the freedom of forming any number and type of questions.
333
+ Each annotator was asked to provide the correct answer, and was additionally required to mark a minimal set of sentences within the plot synopsis paragraph that can be used to both frame the question and answer it.
334
+ This was treated as ground-truth for localizing the QA in the plot.
335
+
336
+ In our instructions, we asked the annotators to provide context to each question, such that a human taking the quiz should be able to answer it by watching the movie alone (without having access to the synopsis).
337
+ The purpose of this was to ensure questions that are localizable in the video and story, as opposed to generic questions such as ``What are they talking about?''.
338
+ We trained our annotators for about one to two hours and gave them the option to re-visit and correct their data.
339
+ The annotators were paid by the hour, a strategy that allowed us to collect more thoughtful and complex QAs, rather than short questions and single-word answers.
340
+
341
+ {\bf Multiple answer choices.}
342
+ In the second step of data collection, we collected multiple-choice answers for each question.
343
+ Our annotators were shown a paragraph and a question at a time, but not the correct answer.
344
+ They were then asked to answer the question correctly as well as provide 4 wrong answers.
345
+ These answers were either deceiving facts from the same paragraph or common-sense answers.
346
+ The annotator was also allowed to re-formulate or correct the question.
347
+ We used this to sanity check all the questions received in the first step.
348
+ All QAs from the ``val'' and ``test'' set underwent another round of clean up.
349
+
350
+ {\bf Time-stamp to video.}
351
+ We further asked in-house annotators to align each sentence in the plot synopsis to the video by marking the beginning and end (in seconds) of the video that the sentence describes.
352
+ Long and complicated plot sentences were often aligned to multiple, non-consecutive video clips.
353
+ Annotation took roughly 2 hours per movie.
354
+ Since we have each QA aligned to a sentence(s) in the plot synopsis, the video to plot alignment links QAs with video clips.
355
+ We provide these clips as part of our benchmark.
356
+
357
+
358
+
359
+
360
+
361
+ \subsection{Dataset Statistics}
362
+
363
+ \begin{figure}
364
+ \vspace{-4mm}
365
+ \centering
366
+ \includegraphics[width=0.9\linewidth]{figs/stats-answer_type.pdf}
367
+ \vspace*{-0.6cm}
368
+ \caption{Stats about MovieQA questions based on answer types.
369
+ Note how questions beginning with the same word may cover a variety of answer types:
370
+ \emph{Causality}: What happens ... ?; \emph{Action}: What did X do?
371
+ \emph{Person name}: What is the killer's name?; \etc
372
+ }
373
+ \vspace*{-0.5cm}
374
+ \label{fig:stats:answer_type}
375
+ \end{figure}
376
+
377
+ In the following, we present some statistics of our MovieQA dataset.
378
+ Table~\ref{tab:dataset-comparison} presents an overview of popular and recent Question-Answering datasets in the field.
379
+ Most datasets (except MCTest) use very short answers and are thus limited to covering simpler visual/textual forms of understanding.
380
+ To the best of our knowledge, our dataset not only has long sentence-like answers, but is also the first to use videos in the form of movies.
381
+
382
+ {\bf Multi-choice QA.}
383
+ We collected a total of 14,944 QAs from 408 movies.
384
+ Each question comes with one correct and four deceiving answers.
385
+ Table~\ref{tab:qa_stats} presents an overview of the dataset along with information about the train/val/test splits, which will be used to evaluate automatically trained QA models.
386
+ On average, our questions and answers are fairly long with about 9 and 5 words respectively unlike most other QA datasets.
387
+ The video-based answering split of our dataset covers 140 movies for which we aligned plot synopses with videos.
388
+ Note that a QA method needs to look at a long video clip ($\sim$200s) to answer the question.
389
+
390
+ Fig.~\ref{fig:stats:qword_calength} presents the number of questions (bubble area) split based on the first word of the question along with information about number of words in the question and answer.
391
+ Of particular interest are ``Why'' questions that require verbose answers, justified by having the largest average number of words in the correct answer, and in contrast, ``Who'' questions with answers being short people names.
392
+
393
+
394
+ Instead of the first word in the question, another informative way to categorize QAs is based on the answer type.
395
+ We present such an analysis in Fig.~\ref{fig:stats:answer_type}.
396
+ Note how reasoning based questions (Why, How, Abstract) are a large part of our data.
397
+ In the bottom left quadrant we see typical question types that can likely be answered using vision alone.
398
+ Note, however, that even the reasoning questions typically require vision, as the question context provides a visual description of a scene (\eg,~``Why does John run after Mary?'').
399
+
400
+
401
+
402
+ \begin{table}[t]
403
+ \centering
404
+ {\small
405
+ \begin{tabular}{lccc}
406
+ \toprule
407
+ Text type & \# Movies & \# Sent. / Mov. & \# Words in Sent. \\
408
+ \midrule
409
+ Plot & 408 & ~~~~35.2 & 20.3 \\
410
+ Subtitle & 408 & 1558.3 & ~~6.2 \\
411
+ Script & 199 & 2876.8 & ~~8.3 \\
412
+ DVS & ~~60 & ~~636.3 & ~~9.3 \\
413
+ \bottomrule
414
+ \end{tabular}
415
+ }
416
+ \vspace*{-0.25cm}
417
+ \caption{Statistics for the various text sources used for answering.}
418
+ \vspace*{-0.4cm}
419
+ \label{tab:textsrc_stats}
420
+ \end{table}
421
+ {\bf Text sources for answering.}
422
+ In Table~\ref{tab:textsrc_stats}, we summarize and present some statistics about different text sources used for answering.
423
+ Note how plot synopses have a large number of words per sentence, hinting at the richness and complexity of the source.
424
+
425
+
426
+
427
+
428
+ \section{Multi-choice Question-Answering}
429
+
430
+ We now investigate a number of intelligent baselines for QA.
431
+ We also study inherent biases in the data and try to answer the quizzes based simply on answer characteristics such as word length or within-answer diversity.
432
+
433
+
434
+ Formally, let $S$ denote the story, which can take the form of any of the available sources of information -- \eg~plots, subtitles, or video shots.
435
+ Each story $S$ has a set of questions, and we assume that the (automatic) student reads one question $q^S$ at a time.
436
+ Let $\{a_{j}^S\}_{j=1}^{M}$ be the set of multiple choice answers (only one of which is correct) corresponding to $q^S$, with $M=5$ in our dataset.
437
+
438
+ The general problem of multi-choice question answering can be formulated by a three-way scoring function $f(S,q^S,a^S)$. This function evaluates the ``quality'' of the answer given the story and the question.
439
+ Our goal is thus to pick the best answer $a^S$ for question $q^S$ that maximizes $f$:
440
+ \begin{equation}
441
+ j^* = \arg\max_{j=1\ldots M} f(S, q^S, a_{j}^S) \,
442
+ \end{equation}
443
+ Answering schemes are thus different functions $f$.
444
+ We drop the superscript $(\cdot)^S$ for simplicity of notation.
445
+
446
+ \subsection{The Hasty Student}
447
+ \label{sec:hasty_student}
448
+
449
+ We first consider $f$ which ignores the story and attempts to answer the question directly based on latent biases and similarities.
450
+ We call such a baseline the ``Hasty Student'', since he/she does not bother to read/watch the actual story.
451
+
452
+ The extreme case of a hasty student is to try and answer the question by only looking at the answers.
453
+ Here, $f(S, q, a_{j}) = g_{H1}(a_{j}|{\bf a})$, where $g_{H1}(\cdot)$ captures some properties of the answers.
454
+
455
+
456
+ {\bf Answer length.}
457
+ We use the number of words in the multiple-choice answers to pick the correct answer and to expose biases in the dataset.
458
+ As shown in Table~\ref{tab:qa_stats}, correct answers are slightly longer as it is often difficult to frame long deceiving answers.
459
+ We choose an answer by:
460
+ (i) selecting the longest answer;
461
+ (ii) selecting the shortest answer; or
462
+ (iii) selecting the answer with the most different length.
463
+
464
+ {\bf Within answer similarity/difference.}
465
+ While still looking only at the answers, we compute a distance between all answers based on their representations (discussed in Sec.~\ref{sec:representation}).
466
+ We then select our answer as either the most similar or most distinct among all answers.
467
+
468
+ {\bf Q and A similarity.}
469
+ We now consider a hasty student that looks at both the question and answer, $f(S, q, a_j) = g_{H2}(q, a_{j})$.
470
+ We compute similarity between the question and each answer and pick the highest scoring answer.
471
+
472
+
473
+ \subsection{The Searching Student}
474
+ \label{sec:sliding}
475
+
476
+ While the hasty student ignores the story, we now consider a student that tries to answer the question by locating a subset of the story $S$ which is most similar to both the question and the answer.
477
+ The scoring function $f$ is
478
+ \begin{equation}
479
+ f(S, q, a_{j}) = g_I(S, q) + g_I(S, a_{j}) \, ,
480
+ \end{equation}
481
+ i.e.\ a sum of the question's and the answer's similarity to the story.
482
+ We propose two similarity functions:
483
+ a simple windowed cosine similarity, and another using a neural architecture.
484
+
485
+
486
+ {\bf Cosine similarity with a sliding window.} We aim to find the best window of $H$ sentences (or shots) in the story $S$ that maximizes the similarity between the story and the question, and between the story and the answer.
487
+ We define our similarity function:
488
+ \begin{equation}
489
+ f(S, q, a_{j}) = \max_l \sum_{k = l}^{l+H} g_{ss}(s_k, q) + g_{ss}(s_k, a_{j}) \, ,
490
+ \end{equation}
491
+ where $s_k$ denotes a sentence (or shot) from the story $S$.
492
+ We use $g_{ss}(s, q) = x(s)^T x(q)$ as a dot product between the (normalized) representations of the two sentences (shots).
493
+ We discuss these representations in detail in Sec.~\ref{sec:representation}.
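+
+ To make the scoring concrete, a minimal NumPy sketch of this sliding-window search is given below; it is our own illustration rather than released code, and it assumes that \texttt{story} is an $n \times d$ matrix of normalized sentence (or shot) features and that \texttt{q} and \texttt{answers} hold normalized question and answer features.
+
+ \begin{verbatim}
+ import numpy as np
+
+ def window_score(story, q, a, H=5):
+     # story: (n, d) normalized features; q, a: (d,) normalized vectors
+     sims = story @ q + story @ a        # g_ss(s_k, q) + g_ss(s_k, a_j)
+     return max(sims[l:l + H + 1].sum()  # best window of consecutive positions
+                for l in range(len(sims)))
+
+ def answer_question(story, q, answers, H=5):
+     # pick the multiple-choice answer that maximizes f(S, q, a_j)
+     return int(np.argmax([window_score(story, q, a, H) for a in answers]))
+ \end{verbatim}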
494
+
495
+ {\bf Searching student with a convolutional brain (SSCB).}
496
+ Instead of factoring $f(S, q, a_{j})$ as a fixed (unweighted) sum of two similarity functions $g_{I}(S, q)$ and $g_{I}(S, a_{j})$, we build a neural network that learns such a function.
497
+ Assuming the story $S$ is of length $n$, \eg~$n$ plot sentences or $n$ video shots, $g_{I}(S, q)$ and $g_{I}(S, a_{j})$ can be seen as two vectors of length $n$ whose $k$-th entry is $g_{ss}(s_k, q)$.
498
+ We further combine all $[g_I(S, a_{j})]_j$ for the 5 answers into a $n\times 5$ matrix.
499
+ The vector $g_{I}(S, q)$ is replicated $5$-times, and we stack the question and answer matrix together to obtain a tensor of size $n \times 5 \times 2$.
500
+
501
+ Our neural similarity model is a convnet (CNN), shown in Fig.~\ref{fig:model:cnn}, that takes the above tensor and applies a couple of layers of $1 \times 1$ convolutions with $h = 10$ channels to approximate a family of functions $\phi(g_I(S, q), g_I(S, a_{j}))$.
502
+ Additionally, we incorporate a max pooling layer with kernel size $3$ to allow for scoring the similarity within a window in the story.
503
+ The last convolutional output is a tensor with shape ($\frac{n}{3}, 5$), and we apply both mean and max pooling across the storyline, add them, and make predictions using softmax.
504
+ We train our network using cross-entropy loss and the Adam optimizer~\cite{kingma2014adam}.
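+
+ A rough PyTorch sketch of this architecture is shown below; where the description leaves details open (exact number of $1 \times 1$ convolution layers, how the single output channel is obtained), we fill them in with our own guesses, so this is an illustration rather than the authors' implementation.
+
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+
+ class SSCB(nn.Module):
+     # input: (batch, 2, n, 5) tensor of question/answer similarities
+     def __init__(self, h=10):
+         super().__init__()
+         self.conv = nn.Sequential(
+             nn.Conv2d(2, h, kernel_size=1), nn.ReLU(),
+             nn.Conv2d(h, h, kernel_size=1), nn.ReLU(),
+             nn.Conv2d(h, 1, kernel_size=1))
+         self.pool = nn.MaxPool2d(kernel_size=(3, 1))  # window along story axis
+
+     def forward(self, sims):
+         x = self.pool(self.conv(sims)).squeeze(1)     # (batch, n/3, 5)
+         return x.mean(dim=1) + x.max(dim=1).values    # (batch, 5) answer scores
+
+ # scores are trained with nn.CrossEntropyLoss() against the correct answer
+ # index, using torch.optim.Adam.
+ \end{verbatim}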
505
+
506
+ \begin{figure}
507
+ \vspace{-5mm}
508
+ \centering
509
+ \includegraphics[width=0.95\linewidth,trim=0 0 0 0,clip]{figs/CNN.pdf}
510
+ \vspace*{-0.5cm}
511
+ \caption{\small Our neural similarity architecture (see text for details).}
512
+ \label{fig:model:cnn}
513
+ \vspace*{-0.5cm}
514
+ \end{figure}
515
+
516
+
517
+
518
+
519
+
520
+
521
+
522
+
523
+ \subsection{Memory Network for Complex QA}
524
+
525
+ Memory Networks were originally proposed for text QA and model complex three-way relationships between the story, question and answer.
526
+ We briefly describe MemN2N proposed by~\cite{Sukhbaatar2015} and suggest simple extensions to make it suitable for our data and task.
527
+
528
+ The input of the original MemN2N is a story and question.
529
+ The answering is restricted to single words and is done by picking the most likely word from the vocabulary $\mathcal{V}$ of 20-40 words.
530
+ Note that this is not directly applicable to MovieQA, as answering in our dataset cannot be reduced to picking a single word from a small vocabulary.
531
+
532
+ A question $q$ is encoded as a vector $u \in \mathbb{R}^d$ using a word embedding $B \in \mathbb{R}^{d \times |\mathcal{V}|}$.
533
+ Here, $d$ is the embedding dimension, and $u$ is obtained by mean-pooling the representations of words in the question.
534
+ Simultaneously, the sentences of the story $s_l$ are encoded using word embeddings $A$ and $C$ to provide two different sentence representations $m_l$ and $c_l$, respectively.
535
+ $m_l$, the representation of sentence $l$ in the story, is used in conjunction with $u$ to produce an attention-like mechanism which selects sentences in the story most similar to the question via a softmax function:
536
+ \begin{equation}
537
+ p_l = \mathrm{softmax}(u^T m_l) \, .
538
+ \end{equation}
539
+ The probability $p_l$ is used to weight the second sentence embedding $c_l$, and the output $o = \sum_l p_l c_l$ is obtained by pooling the weighted sentence representations across the story.
540
+ Finally, a linear projection $W \in \mathbb{R}^{|\mathcal{V}| \times d}$ decodes the question $u$ and the story representation $o$ to provide a soft score for each vocabulary word
541
+ \begin{equation}
542
+ a = \mathrm{softmax}(W (o + u)) \, .
543
+ \end{equation}
544
+ The top scoring word $\hat a$ is picked from $a$ as the answer.
545
+ The free parameters to train are the embeddings $B$, $A$, $C$, $W$ for different words which can be shared across different layers.
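+
+ For readers unfamiliar with MemN2N, the following NumPy sketch summarizes a single memory hop of the original model as described above; it is a simplification of ours, and the actual model can stack several such hops.
+
+ \begin{verbatim}
+ import numpy as np
+
+ def softmax(x):
+     e = np.exp(x - x.max())
+     return e / e.sum()
+
+ def memn2n_hop(u, M, C, W):
+     # u: (d,) question embedding; M, C: (L, d) story embeddings m_l and c_l
+     # W: (|V|, d) output projection over the vocabulary
+     p = softmax(M @ u)            # attention over the L story sentences
+     o = p @ C                     # weighted story representation
+     return softmax(W @ (o + u))   # score for every vocabulary word
+ \end{verbatim}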
546
+
547
+ Due to its fixed set of output answers, the MemN2N in the current form is not designed for multi-choice answering with open, natural language answers.
548
+ We propose two key modifications to make the network suitable for our task.
549
+
550
+
551
+ {\bf MemN2N for natural language answers.}
552
+ To allow the MemN2N to rank multiple answers written in natural language, we add an additional embedding layer $F$ which maps each multi-choice answer $a_j$ to a vector $g_j$.
553
+ Note that $F$ is similar to embeddings $B$, $A$ and $C$, but operates on answers instead of the question or story.
554
+ To predict the correct answer, we compute the similarity between the answers $g$, the question embedding $u$ and the story representation $o$:
555
+ \begin{equation}
556
+ \label{eq:memnet_multichoice_ans}
557
+ a = \mathrm{softmax}((o + u)^T g)
558
+ \end{equation}
559
+ and pick the most probable answer as correct.
560
+ In our general QA formulation, this is equivalent to
561
+ \begin{equation}
562
+ f(S, q, a_{j}) = g_{M1}(S, q, a_{j}) + g_{M2}(q, a_{j}),
563
+ \end{equation}
564
+ where $g_{M1}$ attends to parts of the story using the question, and a second function $g_{M2}$ directly considers similarities between the question and the answer.
565
+
566
+
567
+ {\bf Weight sharing and fixed word embeddings.}
568
+ The original MemN2N learns embeddings for each word based directly on the task of question-answering.
569
+ However, scaling this to large-vocabulary datasets like ours requires unreasonable amounts of training data.
570
+ For example, training a model with a vocabulary size 14,000 (obtained just from plot synopses) and $d = 100$ would entail learning 1.4M parameters for each embedding.
571
+ To prevent overfitting, we first share all word embeddings $B, A, C, F$ of the memory network.
572
+ Nevertheless, even a single embedding still amounts to a large number of parameters.
573
+
574
+ We make the following crucial modification that allows us to use the Memory Network for our dataset.
575
+ We drop $B$, $A$, $C$, $F$ and replace them by a fixed (pre-trained) word embedding $Z \in \mathbb{R}^{d_1 \times |\mathcal{V}|}$ obtained from the Word2Vec model and learn a shared linear projection layer $T \in \mathbb{R}^{d_2 \times d_1}$ to map all sentences (stories, questions and answers) into a common space.
576
+ Here, $d_1$ is the dimension of the Word2Vec embedding, and $d_2$ is the projection dimension.
577
+ Thus, the new encodings are
578
+ \begin{equation}
579
+ u = T \cdot Z q; \, m_l, c_l = T \cdot Z s_l; \, \mathrm{and} \, g_j = T \cdot Z a_j .
580
+ \end{equation}
581
+ Answer prediction is performed as before in Eq.~\ref{eq:memnet_multichoice_ans}.
582
+
583
+ We initialize $T$ either as the $d_1 \times d_1$ identity matrix or using PCA to lower the dimension from $d_1 = 300$ to $d_2 = 100$.
584
+ Training is performed using stochastic gradient descent with a batch size of 32.
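+
+ Putting the two modifications together, a minimal NumPy sketch of the resulting answering step is given below; \texttt{Z} is the fixed Word2Vec matrix, \texttt{T} the shared projection, and $m_l = c_l$ since all embeddings are shared. This is our own condensation of the equations above, not the training code.
+
+ \begin{verbatim}
+ import numpy as np
+
+ def softmax(x):
+     e = np.exp(x - x.max())
+     return e / e.sum()
+
+ def encode(T, Z, word_ids):
+     # mean-pool fixed Word2Vec columns, then apply the shared projection T
+     return T @ Z[:, word_ids].mean(axis=1)
+
+ def answer(T, Z, question, story, answers):
+     u = encode(T, Z, question)
+     M = np.stack([encode(T, Z, s) for s in story])    # m_l (= c_l here)
+     o = softmax(M @ u) @ M                            # attended story vector
+     g = np.stack([encode(T, Z, a) for a in answers])  # candidate answers
+     return int(np.argmax(g @ (o + u)))                # softmax is monotone
+ \end{verbatim}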
585
+
586
+
587
+ \subsection{Representations for Text and Video}
588
+ \label{sec:representation}
589
+
590
+
591
+ {\bf TF-IDF} is a popular and successful feature in information retrieval.
592
+ In our case, we treat plots (or other forms of text) from different movies as documents and compute a weight for each word.
593
+ We set all words to lower case, use stemming, and compute the vocabulary $\mathcal{V}$, which consists of words $w$ that appear more than $\theta$ times in the documents. We represent each sentence (or question or answer) in a bag-of-words style with a TF-IDF score for each word.
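+
+ As an illustration only, such a representation can be approximated with scikit-learn as sketched below; note that \texttt{TfidfVectorizer} does not perform stemming, and \texttt{plot\_sentences} and \texttt{question} are placeholder variables, so this is not the exact pipeline used here.
+
+ \begin{verbatim}
+ from sklearn.feature_extraction.text import TfidfVectorizer
+
+ # min_df plays the role of the count threshold theta
+ vectorizer = TfidfVectorizer(lowercase=True, min_df=3)
+ S = vectorizer.fit_transform(plot_sentences)  # (n_sentences, |V|) story matrix
+ q = vectorizer.transform([question])          # (1, |V|) query vector
+ sims = S.dot(q.T).toarray().ravel()           # g_ss(s_k, q); rows are L2-normalized
+ \end{verbatim}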
594
+
595
+
596
+ {\bf Word2Vec.}
597
+ A disadvantage of TF-IDF is that it is unable to capture the similarities between words.
598
+ We use the skip-gram model proposed by~\cite{mikolov2013efficient} and train it on roughly 1200 movie plots to obtain domain-specific, $300$ dimensional word embeddings.
599
+ A sentence is then represented by mean-pooling its word embeddings.
600
+ We normalize the resulting vector to have unit norm.
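+
+ The corresponding sentence encoding is simple enough to state directly; the sketch below assumes \texttt{w2v} is a word-to-vector lookup from a trained skip-gram model and is meant purely as an illustration.
+
+ \begin{verbatim}
+ import numpy as np
+
+ def sentence_vec(words, w2v, dim=300):
+     # mean-pool the word embeddings, then normalize to unit norm
+     vecs = [w2v[w] for w in words if w in w2v]
+     v = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
+     n = np.linalg.norm(v)
+     return v / n if n > 0 else v
+ \end{verbatim}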
601
+
602
+ {\bf SkipThoughts.}
603
+ While the sentence representation obtained by mean-pooling Word2Vec embeddings discards word order, SkipThoughts~\cite{skipthoughts} use a Recurrent Neural Network to capture the underlying sentence semantics.
604
+ We use the pre-trained model by~\cite{skipthoughts} to compute a $4800$ dimensional sentence representation.
605
+
606
+ {\bf Video.}
607
+ To answer questions from the video, we learn an embedding between a shot and a sentence, which maps the two modalities in a common space. In this joint space, one can score the similarity between the two modalities via a simple dot product. This allows us to apply all of our proposed question-answering techniques in their original form.
608
+
609
+ To learn the joint embedding we follow~\cite{ZhuICCV15} which extends~\cite{kiros15} to video.
610
+ Specifically, we use the GoogLeNet architecture~\cite{szegedy2014going} as well as hybrid-CNN~\cite{ZhouNIPS2014} to extract frame-wise features, and mean-pool the representations over all frames in a shot.
611
+ The embedding is a linear mapping of the shot representation and an LSTM on word embeddings on the sentence side, trained using the ranking loss on the MovieDescription Dataset~\cite{Rohrbach15} as in~\cite{ZhuICCV15}.
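+
+ The pairwise ranking objective used for such joint embeddings can be sketched as follows; this is our paraphrase of the loss of~\cite{kiros15}, with an illustrative margin value, rather than the exact training code.
+
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+
+ def ranking_loss(shot_emb, sent_emb, margin=0.2):
+     # shot_emb, sent_emb: (B, d) L2-normalized embeddings of matching pairs
+     scores = shot_emb @ sent_emb.t()            # (B, B) similarity matrix
+     pos = scores.diag().view(-1, 1)             # matching-pair scores
+     cost_s = F.relu(margin - pos + scores)      # contrast sentences per shot
+     cost_v = F.relu(margin - pos.t() + scores)  # contrast shots per sentence
+     mask = torch.eye(scores.size(0), dtype=torch.bool)
+     return (cost_s.masked_fill(mask, 0).sum()
+             + cost_v.masked_fill(mask, 0).sum())
+ \end{verbatim}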
612
+
613
+
614
+
615
+ \section{QA Evaluation}
616
+ \label{sec:eval}
617
+
618
+ We present results for question-answering with the proposed methods on our MovieQA dataset.
619
+ We study how various sources of information influence the performance, and how different levels of complexity encoded in $f$ affect the quality of automatic QA.
620
+
621
+ {\bf Protocol.}
622
+ Note that we have two primary tasks for evaluation.
623
+ (i) {\bf Text-based}: the story takes the form of various texts -- plots, subtitles, scripts, DVS; and
624
+ (ii) {\bf Video-based}: the story is the video, with or without subtitles.
625
+
626
+ {\bf Dataset structure.}
627
+ The dataset is divided into three disjoint splits: \emph{train}, \emph{val}, and \emph{test}, based on unique movie titles in each split.
628
+ The splits are optimized to preserve the ratios between \#movies, \#QAs, and all the story sources at 10:2:3 (\eg~about 10k, 2k, and 3k QAs).
629
+ Stats for each split are presented in Table~\ref{tab:qa_stats}.
630
+ The \emph{train} set is to be used for training automatic models and tuning any hyperparameters.
631
+ The \emph{val} set should not be touched during training, and may be used to report results for several models.
632
+ The \emph{test} set is a held-out set, and is evaluated on our MovieQA server.
633
+ For this paper, all results are presented on the \emph{val} set.
634
+
635
+ {\bf Metrics.}
636
+ Multiple choice QA leads to a simple and objective evaluation.
637
+ We measure \emph{accuracy}, the number of correctly answered QAs over the total count.
638
+
639
+ \vspace{-1mm}
640
+ \subsection{The Hasty Student}
641
+ \vspace{-1mm}
642
+ \begin{table}[t]
643
+ \centering
644
+ {\small
645
+ \begin{tabular}{l|lrrr}
646
+ \toprule
647
+ \multirow{2}{*}{{\bf Answer length}} & & longest & shortest & different \\
648
+ & & 25.33 & 14.56 & 20.38 \\
649
+ \midrule
650
+ \multirow{3}{*}{{\bf Within answers}} & & TF-IDF & SkipT & w2v \\
651
+ & similar & 21.71 & 28.14 & 25.43 \\
652
+ & distinct & 19.92 & 14.91 & 15.12 \\
653
+ \midrule
654
+ \multirow{2}{*}{{\bf Question-answer}} & & TF-IDF & SkipT & w2v \\
655
+ & similar & 12.97 & 19.25 & 24.97 \\
656
+ \bottomrule
657
+ \end{tabular}
658
+ }
659
+ \vspace*{-0.2cm}
660
+ \caption{The question-answering accuracy for the ``Hasty Student'' who tries to answer questions without looking at the story.}
661
+ \vspace*{-0.5cm}
662
+ \label{tab:results-cheating_baseline}
663
+ \end{table}
664
+
665
+ The first part of Table~\ref{tab:results-cheating_baseline} shows the performance of three models when trying to answer questions based on the answer length.
666
+ Notably, always choosing the longest answer performs better (25.3\%) than random (20\%).
667
+ The second part of Table~\ref{tab:results-cheating_baseline} presents results when using within-answer feature-based similarity.
668
+ We see that the answer most similar to others is likely to be correct when the representations are generic and try to capture the semantics of the sentence (Word2Vec, SkipThoughts).
669
+ Choosing the most distinct answer performs worse than random for all features.
670
+ In the last section of Table~\ref{tab:results-cheating_baseline} we see that computing feature-based similarity between questions and answers is insufficient for answering.
671
+ In particular, TF-IDF performs worse than random, since words in the question rarely appear in the answer.
672
+
673
+
674
+ {\bf Hasty Turker.}
675
+ To analyze the deceiving nature of our multi-choice QAs, we tested humans (via AMT) on a subset of 200 QAs.
676
+ The turkers were not shown the story in any form and were asked to pick the best possible answer given the question and a set of options.
677
+ We asked each question to 10 turkers, and rewarded each with a bonus if their answer agreed with the majority.
678
+ We observe that without access to the story, humans obtain an accuracy of 27.6\%.
679
+ We suspect that the bias is due to the fact that some of the QAs reveal the movie (e.g., ``Darth Vader'') and the turker may have seen this movie.
680
+ Removing such questions, and re-evaluating on a subset of 135 QAs, lowers the performance to 24.7\%.
681
+ This shows the genuine difficulty of our QAs.
682
+
683
+ \vspace{-1mm}
684
+ \subsection{Searching Student}
685
+ \vspace{-1mm}
686
+
687
+
688
+ {\bf Cosine similarity with window.}
689
+ The first section of Table~\ref{tab:results-comprehensive_baseline} presents results for the proposed cosine similarity using different representations and text stories.
690
+ Using the plots to answer questions outperforms other sources (subtitles, scripts, and DVS) as the QAs were collected using plots and annotators often reproduce words from the plot.
691
+
692
+ We show the results of using Word2Vec or SkipThought representations in the following rows of Table~\ref{tab:results-comprehensive_baseline}.
693
+ SkipThoughts perform much worse than both TF-IDF and Word2Vec which are closer.
694
+ We suspect that while SkipThoughts are good at capturing the overall semantics of a sentence, proper nouns -- names, places -- are often hard to distinguish.
695
+ Fig.~\ref{fig:results-simple_baselines_qfw} presents a accuracy breakup based on the first word of the questions.
696
+ TF-IDF and Word2Vec perform considerably well, however, we see a larger difference between the two for ``Who'' and ``Why'' questions.
697
+ ``Who'' questions require distinguishing between names, and ``Why'' answers are typically long, and mean pooling destroys semantics.
698
+ In fact Word2Vec performs best on ``Where'' questions that may use synonyms to indicate places.
699
+ SkipThoughts perform best on ``Why'' questions where sentence semantics help improve answering.
700
+
701
+ \begin{table}[t]
702
+ \vspace{-2.0mm}
703
+ \tabcolsep=0.20cm
704
+ \centering
705
+ {\small
706
+ \begin{tabular}{lcccc}
707
+ \toprule
708
+ Method & Plot & DVS & Subtitle & Script \\
709
+ \midrule
710
+ Cosine TFIDF & 47.6 & 24.5 & 24.5 & 24.6 \\
711
+ Cosine SkipThought & 31.0 & 19.9 & 21.3 & 21.2 \\
712
+ Cosine Word2Vec & 46.4 & 26.6 & 24.5 & 23.4 \\
713
+ \midrule
714
+ SSCB TFIDF & 48.5 & 24.5 & 27.6 & 26.1 \\
715
+ SSCB SkipThought & 28.3 & 24.5 & 20.8 & 21.0 \\
716
+ SSCB Word2Vec & 45.1 & 24.8 & 24.8 & 25.0 \\
717
+ \midrule
718
+ SSCB Fusion & {\bf56.7} & 24.8 & 27.7 & 28.7 \\
719
+ \midrule
720
+ MemN2N (w2v, linproj) & 40.6 & {\bf33.0}& {\bf38.0}& {\bf42.3}\\
721
+ \bottomrule
722
+ \end{tabular}
723
+ }
724
+ \vspace*{-0.25cm}
725
+ \caption{Accuracy for Text-based QA. {\bf Top}: results for the Searching student with cosine similarity; {\bf Middle}: Convnet SSCB; and {\bf Bottom}: the modified Memory Network.}
726
+ \label{tab:results-comprehensive_baseline}
727
+ \vspace{-5.0mm}
728
+ \end{table}
729
+
730
+
731
+
732
+
733
+
734
+
735
+
736
+
737
+
738
+
739
+
740
+
741
+
742
+
743
+
744
+
745
+
746
+
747
+
748
+
749
+
750
+ {\bf SSCB}. The middle rows of Table~\ref{tab:results-comprehensive_baseline} show the result of our neural similarity model.
751
+ Here, we present additional results combining all text representations (\textit{SSCB fusion}) via our CNN.
752
+ We split the \emph{train} set into $90\%$ train / $10\%$ dev, such that all questions and answers of the same movie are in the same split, train our model on train and monitor performance on dev.
753
+ Both \emph{val} and \emph{test} sets are held out.
754
+ During training, we also create several model replicas and pick the ones with the best validation performance.
755
+
756
+ Table~\ref{tab:results-comprehensive_baseline} shows that the neural model outperforms the simple cosine similarity on most tasks, while the fusion method achieves the highest performance when using plot synopses as the story.
757
+ Ignoring the case of plots, the accuracy is capped at about $30\%$ for most modalities, showing the difficulty of our dataset.
758
+
759
+ \vspace{-1mm}
760
+ \subsection{Memory Network}
761
+ \vspace{-0.5mm}
762
+
763
+ The original MemN2N, which trains the word embeddings along with the answering modules, overfits heavily on our dataset, leading to near-random performance on \textit{val} ($\sim$20\%).
764
+ However, our modifications help constrain the learning process.
765
+ Table~\ref{tab:results-comprehensive_baseline} (bottom) presents results for MemN2N with Word2Vec initialization and a linear projection layer.
766
+ Using plot synopses, we see a performance closer to SSCB with Word2Vec features.
767
+ However, in the case of longer stories, the attention mechanism in the network is able to sift through thousands of story sentences and performs well on DVS, subtitles and scripts.
768
+ This shows that complex three-way scoring functions are needed to tackle such QA sources.
769
+ In terms of story sources, the MemN2N performs best with scripts which contain the most information (descriptions, dialogs and speaker information).
770
+
771
+
772
+ \begin{table}[t]
773
+ \vspace{-2mm}
774
+ \centering
775
+ {\small
776
+ \begin{tabular}{lccc}
777
+
778
+ \toprule
779
+ Method & Video & Subtitle & Video+Subtitle \\
780
+ \midrule
781
+ SSCB all clips & 21.6 & 22.3 & 21.9 \\
782
+ MemN2N all clips & \bf{23.1} & \bf{38.0} & \bf{34.2} \\
783
+ \bottomrule
784
+ \end{tabular}
785
+ }
786
+ \vspace*{-0.25cm}
787
+ \caption{Accuracy for Video-based QA and late fusion of Subtitle and Video scores.}
788
+ \label{tab:results-video_baseline}
789
+ \vspace{-3mm}
790
+ \end{table}
791
+
792
+
793
+
794
+
795
+
796
+
797
+
798
+ \begin{figure}
799
+ \centering
800
+ \includegraphics[width=0.8\linewidth,trim=0 0 0 12,clip]{figs/results-simple_baselines_qfw.pdf}
801
+ \vspace*{-0.6cm}
802
+ \caption{Accuracy for different feature representations of plot sentences with respect to the first word of the question.}
803
+ \label{fig:results-simple_baselines_qfw}
804
+ \vspace{-0.5cm}
805
+ \end{figure}
806
+
807
+ \vspace{-1mm}
808
+ \subsection{Video baselines}
809
+ \vspace{-1mm}
810
+ We evaluate SSCB and MemN2N in a setting where the automatic models answer questions by ``watching'' all the video clips that are provided for that movie.
811
+ Here, the story descriptors are shot embeddings.
812
+
813
+
814
+ The results are presented in Table~\ref{tab:results-video_baseline}.
815
+ We see that learning to answer questions using video is still a hard problem with performance close to random.
816
+ As visual information alone is insufficient, we also perform an experiment combining video and dialogs (subtitles) through late fusion.
817
+ We train the SSCB model with the visual-text embedding for subtitles and see that it yields poor performance (22.3\%) compared to the fusion of all text features (27.7\%).
818
+ For the memory network, we answer from the subtitles as before, using Word2Vec representations.
819
+
820
+
821
+
822
+
823
+
824
+
825
+ \vspace{-1mm}
826
+ \section{Conclusion}
827
+ \vspace{-1mm}
828
+ We introduced the MovieQA data set which aims to evaluate automatic story comprehension from both video and text. Our dataset is unique in that it contains several sources of information -- video clips, subtitles, scripts, plots and DVS. We provided several intelligent baselines and extended existing QA techniques to analyze the difficulty of our task. Our benchmark with an evaluation server is online at~\url{http://movieqa.cs.toronto.edu}.
829
+
830
+ \iffalse
831
+ Owing to the variety in information sources, our data set is applicable to Vision, Language and Machine Learning communities.
832
+ We evaluate a variety of answering methods which discover the biases within our data and demonstrate the limitations on this high level semantic task.
833
+ Current state-of-the-art methods do not perform well and are often only a little better than random.
834
+ Using this data set we will create an evaluation campaign that can help breach the next frontier in improved vision and language understanding.
835
+ \fi
836
+
837
+
838
+
839
+ {\small
840
+ \vspace{0.2cm}
841
+ \noindent {\bf Acknowledgment.}
842
+ We thank the \texttt{Upwork} annotators, Lea Jensterle, Marko Boben, and So\v{c}a Fidler for data collection, and Relu Patrascu for infrastructure support.
843
+ MT and RS are supported by DFG contract no. STI-598/2-1, and the work was carried out during MT's visit to U. of T. on a KHYS Research Travel Grant.
844
+ }
845
+
846
+
847
+ \balance
848
+ {\small
849
+ \bibliographystyle{ieee}
850
+ \bibliography{cvpr2016}
851
+ }
852
+
853
+
854
+ \end{document}
papers/1512/1512.03385.tex ADDED
@@ -0,0 +1,846 @@
1
+ \documentclass[10pt,twocolumn,letterpaper]{article}
2
+
3
+ \usepackage{cvpr}
4
+ \usepackage{times}
5
+ \usepackage{epsfig}
6
+ \usepackage{graphicx}
7
+ \usepackage{amsmath}
8
+ \usepackage{amssymb}
9
+ \usepackage{tabulary}
10
+ \usepackage{multirow}
11
+
12
+ \usepackage{soul}
13
+
14
+
15
+
16
+ \usepackage[bookmarks=false,colorlinks=true,linkcolor=black,citecolor=black,filecolor=black,urlcolor=black]{hyperref}
17
+ \usepackage[british,UKenglish,USenglish,english,american]{babel}
18
+
19
+ \cvprfinalcopy
20
+
21
+ \newcommand{\ve}[1]{\mathbf{#1}} \newcommand{\ma}[1]{\mathrm{#1}}
22
+
23
+ \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
24
+
25
+ \renewcommand\arraystretch{1.2}
26
+
27
+ \def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
28
+
29
+ \hyphenation{identity notorious underlying surpasses desired residual doubled}
30
+
31
+
32
+ \setcounter{page}{1}
33
+ \begin{document}
34
+
35
+ \title{Deep Residual Learning for Image Recognition}
36
+
37
+
38
+
39
+ \author{Kaiming He \qquad Xiangyu Zhang \qquad Shaoqing Ren \qquad Jian Sun \\
40
+ \large Microsoft Research \vspace{-.2em}\\
41
+ \normalsize
42
+ \{kahe,~v-xiangz,~v-shren,~jiansun\}@microsoft.com
43
+ }
44
+
45
+ \maketitle
46
+
47
+
48
+ \begin{abstract}
49
+ \vspace{-.5em}
50
+ Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
51
+ On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8$\times$ deeper than VGG nets \cite{Simonyan2015} but still having lower complexity.
52
+ An ensemble of these residual nets achieves 3.57\% error on the ImageNet \emph{test} set. This result won the 1st place on the ILSVRC 2015 classification task.
53
+ We also present analysis on CIFAR-10 with 100 and 1000 layers.
54
+
55
+ The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28\% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC \& COCO 2015 competitions\footnote{\fontsize{7.6pt}{1em}\selectfont \url{http://image-net.org/challenges/LSVRC/2015/} and \url{http://mscoco.org/dataset/\#detections-challenge2015}.}, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
56
+ \end{abstract}
57
+
58
+
59
+
60
+
61
+
62
+
63
+
64
+ \vspace{-1em}
65
+ \section{Introduction}
66
+ \label{sec:intro}
67
+
68
+ Deep convolutional neural networks \cite{LeCun1989,Krizhevsky2012} have led to a series of breakthroughs for image classification \cite{Krizhevsky2012,Zeiler2014,Sermanet2014}. Deep networks naturally integrate low/mid/high-level features \cite{Zeiler2014} and classifiers in an end-to-end multi-layer fashion, and the ``levels'' of features can be enriched by the number of stacked layers (depth).
69
+ Recent evidence \cite{Simonyan2015,Szegedy2015} reveals that network depth is of crucial importance, and the leading results \cite{Simonyan2015,Szegedy2015,He2015,Ioffe2015} on the challenging ImageNet dataset \cite{Russakovsky2014} all exploit ``very deep'' \cite{Simonyan2015} models, with a depth of sixteen \cite{Simonyan2015} to thirty \cite{Ioffe2015}. Many other nontrivial visual recognition tasks \cite{Girshick2014,He2014,Girshick2015,Ren2015,Long2015} have also greatly benefited from very deep models.
70
+
71
+ Driven by the significance of depth, a question arises: \emph{Is learning better networks as easy
72
+ as stacking more layers?}
73
+ An obstacle to answering this question was the notorious problem of vanishing/exploding gradients \cite{Bengio1994,Glorot2010}, which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization \cite{LeCun1998,Glorot2010,Saxe2013,He2015} and intermediate normalization layers \cite{Ioffe2015}, which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation \cite{LeCun1989}.
74
+
75
+ \begin{figure}[t]
76
+ \begin{center}
77
+ \includegraphics[width=1.0\linewidth]{eps/teaser}
78
+ \end{center}
79
+ \vspace{-1.2em}
80
+ \caption{Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer ``plain'' networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet are presented in Fig.~\ref{fig:imagenet}.}
81
+ \label{fig:teaser}
82
+ \vspace{-1em}
83
+ \end{figure}
84
+
85
+ When deeper networks are able to start converging, a \emph{degradation} problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is \emph{not caused by overfitting}, and adding more layers to a suitably deep model leads to \emph{higher training error}, as reported in \cite{He2015a, Srivastava2015} and thoroughly verified by our experiments. Fig.~\ref{fig:teaser} shows a typical example.
86
+
87
+ The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution \emph{by construction} to the deeper model: the added layers are \emph{identity} mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).
88
+
89
+ In this paper, we address the degradation problem by introducing a \emph{deep residual learning} framework.
90
+ Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(\ve{x})$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(\ve{x}):=\mathcal{H}(\ve{x})-\ve{x}$. The original mapping is recast into $\mathcal{F}(\ve{x})+\ve{x}$.
91
+ We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
92
+
93
+ The formulation of $\mathcal{F}(\ve{x})+\ve{x}$ can be realized by feedforward neural networks with ``shortcut connections'' (Fig.~\ref{fig:block}). Shortcut connections \cite{Bishop1995,Ripley1996,Venables1999} are those skipping one or more layers. In our case, the shortcut connections simply perform \emph{identity} mapping, and their outputs are added to the outputs of the stacked layers (Fig.~\ref{fig:block}). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (\eg, Caffe \cite{Jia2014}) without modifying the solvers.
94
+
95
+ \begin{figure}[t]
96
+ \centering
97
+ \hspace{48pt}
98
+ \includegraphics[width=0.9\linewidth]{eps/block}
99
+ \vspace{-.5em}
100
+ \caption{Residual learning: a building block.}
101
+ \label{fig:block}
102
+ \vspace{-1em}
103
+ \end{figure}
104
+
105
+ We present comprehensive experiments on ImageNet \cite{Russakovsky2014} to show the degradation problem and evaluate our method.
106
+ We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart ``plain'' nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
107
+
108
+ Similar phenomena are also shown on the CIFAR-10 set \cite{Krizhevsky2009}, suggesting that the optimization difficulties and the effects of our method are not specific to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
109
+
110
+ On the ImageNet classification dataset \cite{Russakovsky2014}, we obtain excellent results by extremely deep residual nets.
111
+ Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets \cite{Simonyan2015}. Our ensemble has \textbf{3.57\%} top-5 error on the ImageNet \emph{test} set, and \emph{won the 1st place in the ILSVRC 2015 classification competition}. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further \emph{win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation} in ILSVRC \& COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
112
+
113
+
114
+ \section{Related Work}
115
+
116
+ \noindent\textbf{Residual Representations.}
117
+ In image recognition, VLAD \cite{Jegou2012} is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector \cite{Perronnin2007} can be formulated as a probabilistic version \cite{Jegou2012} of VLAD.
118
+ Both of them are powerful shallow representations for image retrieval and classification \cite{Chatfield2011,Vedaldi2008}.
119
+ For vector quantization, encoding residual vectors \cite{Jegou2011} is shown to be more effective than encoding original vectors.
120
+
121
+ In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method \cite{Briggs2000} reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning \cite{Szeliski1990,Szeliski2006}, which relies on variables that represent residual vectors between two scales. It has been shown \cite{Briggs2000,Szeliski1990,Szeliski2006} that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
122
+
123
+ \vspace{6pt}
124
+ \noindent\textbf{Shortcut Connections.}
125
+ Practices and theories that lead to shortcut connections \cite{Bishop1995,Ripley1996,Venables1999} have been studied for a long time.
126
+ An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output \cite{Ripley1996,Venables1999}. In \cite{Szegedy2015,Lee2014}, a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of \cite{Schraudolph1998,Schraudolph1998a,Raiko2012,Vatanen2013} propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In \cite{Szegedy2015}, an ``inception'' layer is composed of a shortcut branch and a few deeper branches.
127
+
128
+
129
+ Concurrent with our work, ``highway networks'' \cite{Srivastava2015,Srivastava2015a} present shortcut connections with gating functions \cite{Hochreiter1997}. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is ``closed'' (approaching zero), the layers in highway networks represent \emph{non-residual} functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (\eg, over 100 layers).
130
+
131
+ \section{Deep Residual Learning}
132
+
133
+ \subsection{Residual Learning}
134
+ \label{sec:motivation}
135
+
136
+ Let us consider $\mathcal{H}(\ve{x})$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with $\ve{x}$ denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions\footnote{This hypothesis, however, is still an open question. See \cite{Montufar2014}.}, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, \ie, $\mathcal{H}(\ve{x})-\ve{x}$ (assuming that the input and output are of the same dimensions).
137
+ So rather than expect stacked layers to approximate $\mathcal{H}(\ve{x})$, we explicitly let these layers approximate a residual function $\mathcal{F}(\ve{x}):=\mathcal{H}(\ve{x})-\ve{x}$. The original function thus becomes $\mathcal{F}(\ve{x})+\ve{x}$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
138
+
139
+ This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig.~\ref{fig:teaser}, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
140
+
141
+ In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig.~\ref{fig:std}) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
142
+
143
+ \subsection{Identity Mapping by Shortcuts}
144
+
145
+ We adopt residual learning to every few stacked layers.
146
+ A building block is shown in Fig.~\ref{fig:block}. Formally, in this paper we consider a building block defined as:
147
+ \begin{equation}\label{eq:identity}
148
+ \ve{y}= \mathcal{F}(\ve{x}, \{W_{i}\}) + \ve{x}.
149
+ \end{equation}
150
+ Here $\ve{x}$ and $\ve{y}$ are the input and output vectors of the layers considered. The function $\mathcal{F}(\ve{x}, \{W_{i}\})$ represents the residual mapping to be learned. For the example in Fig.~\ref{fig:block} that has two layers, $\mathcal{F}=W_{2}\sigma(W_{1}\ve{x})$ in which $\sigma$ denotes ReLU \cite{Nair2010} and the biases are omitted for simplifying notations. The operation $\mathcal{F}+\ve{x}$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (\ie, $\sigma(\ve{y})$, see Fig.~\ref{fig:block}).
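For concreteness, a direct NumPy transcription of this two-layer building block (biases omitted as in the text; the dimensions of the inputs and weights are assumed to match):
\begin{verbatim}
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = F(x, {W_i}) + x with F = W2 . relu(W1 . x)  (Eqn. 1),
    with the second nonlinearity applied after the addition."""
    F = W2 @ relu(W1 @ x)   # residual function of two weight layers
    y = F + x               # identity shortcut and element-wise addition
    return relu(y)
\end{verbatim}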
151
+
152
+ The shortcut connections in Eqn.(\ref{eq:identity}) introduce neither extra parameters nor computational complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
153
+
154
+ The dimensions of $\ve{x}$ and $\mathcal{F}$ must be equal in Eqn.(\ref{eq:identity}). If this is not the case (\eg, when changing the input/output channels), we can perform a linear projection $W_{s}$ by the shortcut connections to match the dimensions:
155
+ \begin{equation}\label{eq:transform}
156
+ \ve{y}= \mathcal{F}(\ve{x}, \{W_{i}\}) + W_{s}\ve{x}.
157
+ \end{equation}
158
+ We can also use a square matrix $W_{s}$ in Eqn.(\ref{eq:identity}). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_{s}$ is only used when matching dimensions.
159
+
160
+ The form of the residual function $\mathcal{F}$ is flexible. Experiments in this paper involve a function $\mathcal{F}$ that has two or three layers (Fig.~\ref{fig:block_deeper}), while more layers are possible. But if $\mathcal{F}$ has only a single layer, Eqn.(\ref{eq:identity}) is similar to a linear layer: $\ve{y}=W_1\ve{x}+\ve{x}$, for which we have not observed advantages.
161
+
162
+ We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $\mathcal{F}(\ve{x}, \{W_{i}\})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
163
+
164
+ \begin{figure}[t]
165
+ \begin{center}
166
+ \vspace{.5em}
167
+ \includegraphics[width=1.0\linewidth]{eps/arch}
168
+ \end{center}
169
+ \caption{Example network architectures for ImageNet. \textbf{Left}: the VGG-19 model \cite{Simonyan2015} (19.6 billion FLOPs) as a reference. \textbf{Middle}: a plain network with 34 parameter layers (3.6 billion FLOPs). \textbf{Right}: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. \textbf{Table~\ref{tab:arch}} shows more details and other variants.}
170
+ \label{fig:arch}
171
+ \vspace{-1em}
172
+ \end{figure}
173
+
174
+ \subsection{Network Architectures}
175
+
176
+ We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
177
+
178
+ \vspace{6pt}
179
+ \noindent\textbf{Plain Network.}
180
+ Our plain baselines (Fig.~\ref{fig:arch}, middle) are mainly inspired by the philosophy of VGG nets \cite{Simonyan2015} (Fig.~\ref{fig:arch}, left).
181
+ The convolutional layers mostly have 3$\times$3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2.
182
+ The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig.~\ref{fig:arch} (middle).
183
+
184
+ It is worth noticing that our model has \emph{fewer} filters and \emph{lower} complexity than VGG nets \cite{Simonyan2015} (Fig.~\ref{fig:arch}, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18\% of VGG-19 (19.6 billion FLOPs).
185
+
186
+ \vspace{6pt}
187
+ \noindent\textbf{Residual Network.}
188
+ Based on the above plain network, we insert shortcut connections (Fig.~\ref{fig:arch}, right) which turn the network into its counterpart residual version.
189
+ The identity shortcuts (Eqn.(\ref{eq:identity})) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig.~\ref{fig:arch}).
190
+ When the dimensions increase (dotted line shortcuts in Fig.~\ref{fig:arch}), we consider two options:
191
+ (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter;
192
+ (B) The projection shortcut in Eqn.(\ref{eq:transform}) is used to match dimensions (done by 1$\times$1 convolutions).
193
+ For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
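A sketch of the two options for increasing dimensions, written with PyTorch modules purely for illustration (the original models were not implemented this way); the strided slicing used to subsample in option A is one reasonable reading of ``performed with a stride of 2''.
\begin{verbatim}
import torch
import torch.nn.functional as F

def shortcut_A(x, out_channels):
    """Option A: parameter-free identity shortcut with spatial stride 2 and
    zero-padded extra channels.  x: (N, C_in, H, W), C_in <= out_channels."""
    x = x[:, :, ::2, ::2]                     # subsample spatially (stride 2)
    extra = out_channels - x.size(1)
    return F.pad(x, (0, 0, 0, 0, 0, extra))   # pad the new channels with zeros

class ShortcutB(torch.nn.Module):
    """Option B: projection shortcut of Eqn. 2, a 1x1 convolution with stride 2."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = torch.nn.Conv2d(in_channels, out_channels,
                                    kernel_size=1, stride=2, bias=False)

    def forward(self, x):
        return self.proj(x)
\end{verbatim}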
194
+
195
+ \newcommand{\blocka}[2]{\multirow{3}{*}{\(\left[\begin{array}{c}\text{3$\times$3, #1}\\[-.1em] \text{3$\times$3, #1} \end{array}\right]\)$\times$#2}
196
+ }
197
+ \newcommand{\blockb}[3]{\multirow{3}{*}{\(\left[\begin{array}{c}\text{1$\times$1, #2}\\[-.1em] \text{3$\times$3, #2}\\[-.1em] \text{1$\times$1, #1}\end{array}\right]\)$\times$#3}
198
+ }
199
+ \renewcommand\arraystretch{1.1}
200
+ \setlength{\tabcolsep}{3pt}
201
+ \begin{table*}[t]
202
+ \begin{center}
203
+ \resizebox{0.7\linewidth}{!}{
204
+ \begin{tabular}{c|c|c|c|c|c|c}
205
+ \hline
206
+ layer name & output size & 18-layer & 34-layer & 50-layer & 101-layer & 152-layer \\
207
+ \hline
208
+ conv1 & 112$\times$112 & \multicolumn{5}{c}{7$\times$7, 64, stride 2}\\
209
+ \hline
210
+ \multirow{4}{*}{conv2\_x} & \multirow{4}{*}{56$\times$56} & \multicolumn{5}{c}{3$\times$3 max pool, stride 2} \\\cline{3-7}
211
+ & & \blocka{64}{2} & \blocka{64}{3} & \blockb{256}{64}{3} & \blockb{256}{64}{3} & \blockb{256}{64}{3}\\
212
+ & & & & & &\\
213
+ & & & & & &\\
214
+ \hline
215
+ \multirow{3}{*}{conv3\_x} & \multirow{3}{*}{28$\times$28} & \blocka{128}{2} & \blocka{128}{4} & \blockb{512}{128}{4} & \blockb{512}{128}{4} &
216
+ \blockb{512}{128}{8}\\
217
+ & & & & & & \\
218
+ & & & & & & \\
219
+ \hline
220
+ \multirow{3}{*}{conv4\_x} & \multirow{3}{*}{14$\times$14} & \blocka{256}{2} & \blocka{256}{6} & \blockb{1024}{256}{6} & \blockb{1024}{256}{23} & \blockb{1024}{256}{36}\\
221
+ & & & & & \\
222
+ & & & & & \\
223
+ \hline
224
+ \multirow{3}{*}{conv5\_x} & \multirow{3}{*}{7$\times$7} & \blocka{512}{2} & \blocka{512}{3} & \blockb{2048}{512}{3} & \blockb{2048}{512}{3}
225
+ & \blockb{2048}{512}{3}\\
226
+ & & & & & & \\
227
+ & & & & & & \\
228
+ \hline
229
+ & 1$\times$1 & \multicolumn{5}{c}{average pool, 1000-d fc, softmax} \\
230
+ \hline
231
+ \multicolumn{2}{c|}{FLOPs} & 1.8$\times10^9$ & 3.6$\times10^9$ & 3.8$\times10^9$ & 7.6$\times10^9$ & 11.3$\times10^9$ \\
232
+ \hline
233
+ \end{tabular}
234
+ }
235
+ \end{center}
236
+ \vspace{-.5em}
237
+ \caption{Architectures for ImageNet. Building blocks are shown in brackets (see also Fig.~\ref{fig:block_deeper}), with the numbers of blocks stacked. Downsampling is performed by conv3\_1, conv4\_1, and conv5\_1 with a stride of 2.
238
+ }
239
+ \label{tab:arch}
240
+ \vspace{-.5em}
241
+ \end{table*}
242
+
243
+ \begin{figure*}[t]
244
+ \begin{center}
245
+ \includegraphics[width=0.86\linewidth]{eps/imagenet}
246
+ \end{center}
247
+ \vspace{-1.2em}
248
+ \caption{Training on \textbf{ImageNet}. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.}
249
+ \label{fig:imagenet}
250
+ \end{figure*}
251
+
252
+ \subsection{Implementation}
253
+ \label{sec:impl}
254
+
255
+ Our implementation for ImageNet follows the practice in \cite{Krizhevsky2012,Simonyan2015}. The image is resized with its shorter side randomly sampled in $[256, 480]$ for scale augmentation \cite{Simonyan2015}. A 224$\times$224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted \cite{Krizhevsky2012}. The standard color augmentation in \cite{Krizhevsky2012} is used.
256
+ We adopt batch normalization (BN) \cite{Ioffe2015} right after each convolution and before activation, following \cite{Ioffe2015}.
257
+ We initialize the weights as in \cite{He2015} and train all plain/residual nets from scratch.
258
+ We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to $60\times10^4$ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout \cite{Hinton2012}, following the practice in \cite{Ioffe2015}.
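As an illustration of the scale augmentation and cropping described above, a PIL-based sketch; this is not the original training pipeline, and the per-pixel mean subtraction and the color augmentation of \cite{Krizhevsky2012} are omitted here.
\begin{verbatim}
import random
from PIL import Image, ImageOps

def sample_training_crop(img: Image.Image) -> Image.Image:
    """Resize so that the shorter side is sampled from [256, 480], then take
    a random 224x224 crop of the image or of its horizontal flip."""
    s = random.randint(256, 480)
    w, h = img.size
    scale = s / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    if random.random() < 0.5:
        img = ImageOps.mirror(img)             # horizontal flip
    x = random.randint(0, img.width - 224)
    y = random.randint(0, img.height - 224)
    return img.crop((x, y, x + 224, y + 224))
\end{verbatim}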
259
+
260
+ In testing, for comparison studies we adopt the standard 10-crop testing \cite{Krizhevsky2012}.
261
+ For best results, we adopt the fully-convolutional form as in \cite{Simonyan2015,He2015}, and average the scores at multiple scales (images are resized such that the shorter side is in $\{224, 256, 384, 480, 640\}$).
262
+
263
+
264
+ \section{Experiments}
265
+ \label{sec:exp}
266
+
267
+ \subsection{ImageNet Classification}
268
+ \label{sec:imagenet}
269
+
270
+ We evaluate our method on the ImageNet 2012 classification dataset \cite{Russakovsky2014} that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
271
+
272
+
273
+ \vspace{6pt}
274
+ \noindent\textbf{Plain Networks.}
275
+ We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig.~\ref{fig:arch} (middle). The 18-layer plain net is of a similar form. See Table~\ref{tab:arch} for detailed architectures.
276
+
277
+ The results in Table~\ref{tab:plain_vs_shortcut} show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig.~\ref{fig:imagenet} (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher \emph{training} error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.
278
+
279
+ \newcolumntype{x}[1]{>{\centering}p{#1pt}}
280
+ \renewcommand\arraystretch{1.1}
281
+ \setlength{\tabcolsep}{8pt}
282
+ \begin{table}[t]
283
+ \begin{center}
284
+ \small
285
+ \begin{tabular}{l|x{42}|c}
286
+ \hline
287
+ & plain & ResNet \\
288
+ \hline
289
+ 18 layers & 27.94 & 27.88 \\
290
+ 34 layers & 28.54 & \textbf{25.03} \\
291
+ \hline
292
+ \end{tabular}
293
+ \end{center}
294
+ \vspace{-.5em}
295
+ \caption{Top-1 error (\%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig.~\ref{fig:imagenet} shows the training procedures.}
296
+ \label{tab:plain_vs_shortcut}
297
+ \end{table}
298
+
299
+ We argue that this optimization difficulty is \emph{unlikely} to be caused by vanishing gradients. These plain networks are trained with BN \cite{Ioffe2015}, which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish.
300
+ In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table~\ref{tab:10crop}), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impede the reduction of the training error\footnote{We have experimented with more training iterations (3$\times$) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.}.
301
+ The reason for such optimization difficulties will be studied in the future.
302
+
303
+ \vspace{6pt}
304
+ \noindent\textbf{Residual Networks.}
305
+ Next we evaluate 18-layer and 34-layer residual nets (\emph{ResNets}). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3$\times$3 filters as in Fig.~\ref{fig:arch} (right). In the first comparison (Table~\ref{tab:plain_vs_shortcut} and Fig.~\ref{fig:imagenet} right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have \emph{no extra parameter} compared to the plain counterparts.
306
+
307
+ We have three major observations from Table~\ref{tab:plain_vs_shortcut} and Fig.~\ref{fig:imagenet}. First, the situation is reversed with residual learning -- the 34-layer ResNet is better than the 18-layer ResNet (by 2.8\%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
308
+
309
+ Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5\% (Table~\ref{tab:plain_vs_shortcut}), resulting from the successfully reduced training error (Fig.~\ref{fig:imagenet} right \vs left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
310
+
311
+ Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table~\ref{tab:plain_vs_shortcut}), but the 18-layer ResNet converges faster (Fig.~\ref{fig:imagenet} right \vs left).
312
+ When the net is ``not overly deep'' (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.
313
+
314
+ \begin{table}[t]
315
+ \setlength{\tabcolsep}{8pt}
316
+ \begin{center}
317
+ \small
318
+ \begin{tabular}{l|cc}
319
+ \hline
320
+ \footnotesize model & \footnotesize top-1 err. & \footnotesize top-5 err. \\
321
+ \hline
322
+ \footnotesize VGG-16 \cite{Simonyan2015} & 28.07 & 9.33\\
323
+ \footnotesize GoogLeNet \cite{Szegedy2015} & - & 9.15 \\
324
+ \footnotesize PReLU-net \cite{He2015} & 24.27 & 7.38 \\
325
+ \hline
326
+ \hline
327
+ \footnotesize plain-34 & 28.54 & 10.02 \\
328
+ \footnotesize ResNet-34 A & 25.03 & 7.76 \\
329
+ \footnotesize ResNet-34 B & 24.52 & 7.46 \\
330
+ \footnotesize ResNet-34 C & 24.19 & 7.40 \\
331
+ \hline
332
+ \footnotesize ResNet-50 & 22.85 & 6.71 \\
333
+ \footnotesize ResNet-101 & 21.75 & 6.05 \\
334
+ \footnotesize ResNet-152 & \textbf{21.43} & \textbf{5.71} \\
335
+ \hline
336
+ \end{tabular}
337
+ \end{center}
338
+ \vspace{-.5em}
339
+ \caption{Error rates (\%, \textbf{10-crop} testing) on ImageNet validation.
340
+ VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.}
341
+ \label{tab:10crop}
342
+ \vspace{-.5em}
343
+ \end{table}
344
+
345
+ \begin{table}[t]
346
+ \setlength{\tabcolsep}{8pt}
347
+ \small
348
+ \begin{center}
349
+ \begin{tabular}{l|c c}
350
+ \hline
351
+ \footnotesize method & \footnotesize top-1 err. & \footnotesize top-5 err.\\
352
+ \hline
353
+ VGG \cite{Simonyan2015} (ILSVRC'14) & - & 8.43$^{\dag}$\\
354
+ GoogLeNet \cite{Szegedy2015} (ILSVRC'14) & - & 7.89\\
355
+ \hline
356
+ VGG \cite{Simonyan2015} \footnotesize (v5) & 24.4 & 7.1\\
357
+ PReLU-net \cite{He2015} & 21.59 & 5.71 \\
358
+ BN-inception \cite{Ioffe2015} & 21.99 & 5.81 \\\hline
359
+ ResNet-34 B & 21.84 & 5.71 \\
360
+ ResNet-34 C & 21.53 & 5.60 \\
361
+ ResNet-50 & 20.74 & 5.25 \\
362
+ ResNet-101 & 19.87 & 4.60 \\
363
+ ResNet-152 & \textbf{19.38} & \textbf{4.49} \\
364
+ \hline
365
+ \end{tabular}
366
+ \end{center}
367
+ \vspace{-.5em}
368
+ \caption{Error rates (\%) of \textbf{single-model} results on the ImageNet validation set (except $^{\dag}$ reported on the test set).}
369
+ \label{tab:single}
370
+ \setlength{\tabcolsep}{12pt}
371
+ \small
372
+ \begin{center}
373
+ \begin{tabular}{l|c}
374
+ \hline
375
+ \footnotesize method & top-5 err. (\textbf{test}) \\
376
+ \hline
377
+ VGG \cite{Simonyan2015} (ILSVRC'14) & 7.32\\
378
+ GoogLeNet \cite{Szegedy2015} (ILSVRC'14) & 6.66\\
379
+ \hline
380
+ VGG \cite{Simonyan2015} \footnotesize (v5) & 6.8 \\
381
+ PReLU-net \cite{He2015} & 4.94 \\
382
+ BN-inception \cite{Ioffe2015} & 4.82 \\\hline
383
+ \textbf{ResNet (ILSVRC'15)} & \textbf{3.57} \\
384
+ \hline
385
+ \end{tabular}
386
+ \end{center}
387
+ \vspace{-.5em}
388
+ \caption{Error rates (\%) of \textbf{ensembles}. The top-5 error is on the test set of ImageNet and reported by the test server.}
389
+ \label{tab:ensemble}
390
+ \end{table}
391
+
392
+ \vspace{6pt}
393
+ \noindent\textbf{Identity \vs Projection Shortcuts.}
394
+ We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(\ref{eq:transform})).
395
+ In Table~\ref{tab:10crop} we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table~\ref{tab:plain_vs_shortcut} and Fig.~\ref{fig:imagenet} right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.
396
+
397
+ Table~\ref{tab:10crop} shows that all three options are considerably better than the plain counterpart.
398
+ B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
399
+
400
+
401
+ \begin{figure}[t]
402
+ \begin{center}
403
+ \hspace{12pt}
404
+ \includegraphics[width=0.85\linewidth]{eps/block_deeper}
405
+ \end{center}
406
+ \caption{A deeper residual function $\mathcal{F}$ for ImageNet. Left: a building block (on 56$\times$56 feature maps) as in Fig.~\ref{fig:arch} for ResNet-34. Right: a ``bottleneck'' building block for ResNet-50/101/152.}
407
+ \label{fig:block_deeper}
408
+ \vspace{-.6em}
409
+ \end{figure}
410
+
411
+ \vspace{6pt}
412
+ \noindent\textbf{Deeper Bottleneck Architectures.} Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a \emph{bottleneck} design\footnote{Deeper \emph{non}-bottleneck ResNets (\eg, Fig.~\ref{fig:block_deeper} left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.}.
413
+ For each residual function $\mathcal{F}$, we use a stack of 3 layers instead of 2 (Fig.~\ref{fig:block_deeper}). The three layers are 1$\times$1, 3$\times$3, and 1$\times$1 convolutions, where the 1$\times$1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3$\times$3 layer a bottleneck with smaller input/output dimensions.
414
+ Fig.~\ref{fig:block_deeper} shows an example, where both designs have similar time complexity.
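A sketch of the bottleneck block of Fig.~\ref{fig:block_deeper} (right), written as a PyTorch module for illustration; the 256/64 channel sizes correspond to the conv2\_x stage in Table~\ref{tab:arch}, and BN is placed after each convolution and before the activation as in Sec.~\ref{sec:impl}.
\begin{verbatim}
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 (reduce) -> 3x3 -> 1x1 (restore) with an identity shortcut."""
    def __init__(self, channels=256, width=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, width, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))    # 1x1: reduce dimensions
        out = self.relu(self.bn2(self.conv2(out)))  # 3x3: the bottleneck conv
        out = self.bn3(self.conv3(out))             # 1x1: restore dimensions
        return self.relu(out + x)                   # add identity, then ReLU
\end{verbatim}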
415
+
416
+ The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig.~\ref{fig:block_deeper} (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
417
+
418
+ \textbf{50-layer ResNet:} We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table~\ref{tab:arch}). We use option B for increasing dimensions.
419
+ This model has 3.8 billion FLOPs.
420
+
421
+ \textbf{101-layer and 152-layer ResNets:} We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table~\ref{tab:arch}).
422
+ Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has \emph{lower complexity} than VGG-16/19 nets (15.3/19.6 billion FLOPs).
423
+
424
+ The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table~\ref{tab:10crop} and~\ref{tab:single}). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table~\ref{tab:10crop} and~\ref{tab:single}).
425
+
426
+ \begin{figure*}[t]
427
+ \begin{center}
428
+ \includegraphics[width=0.8\linewidth]{eps/cifar}
429
+ \end{center}
430
+ \vspace{-1.5em}
431
+ \caption{Training on \textbf{CIFAR-10}. Dashed lines denote training error, and bold lines denote testing error. \textbf{Left}: plain networks. The error of plain-110 is higher than 60\% and not displayed. \textbf{Middle}: ResNets. \textbf{Right}: ResNets with 110 and 1202 layers.}
432
+ \label{fig:cifar}
433
+ \end{figure*}
434
+
435
+ \vspace{6pt}
436
+ \noindent\textbf{Comparisons with State-of-the-art Methods.}
437
+ In Table~\ref{tab:single} we compare with the previous best single-model results.
438
+ Our baseline 34-layer ResNets have achieved very competitive accuracy.
439
+ Our 152-layer ResNet has a single-model top-5 validation error of 4.49\%. This single-model result outperforms all previous ensemble results (Table~\ref{tab:ensemble}).
440
+ We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to \textbf{3.57\%} top-5 error on the test set (Table~\ref{tab:ensemble}). \emph{This entry won the 1st place in ILSVRC 2015.}
441
+
442
+
443
+ \subsection{CIFAR-10 and Analysis}
444
+
445
+ We conducted more studies on the CIFAR-10 dataset \cite{Krizhevsky2009}, which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set.
446
+ Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.
447
+
448
+ The plain/residual architectures follow the form in Fig.~\ref{fig:arch} (middle/right).
449
+ The network inputs are 32$\times$32 images, with the per-pixel mean subtracted. The first layer is a 3$\times$3 convolution. Then we use a stack of $6n$ layers with 3$\times$3 convolutions on the feature maps of sizes $\{32, 16, 8\}$ respectively, with 2$n$ layers for each feature map size. The numbers of filters are $\{16, 32, 64\}$ respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are 6$n$+2 stacked weighted layers in total. The following table summarizes the architecture:
450
+ \renewcommand\arraystretch{1.1}
451
+ \begin{center}
452
+ \small
453
+ \setlength{\tabcolsep}{8pt}
454
+ \begin{tabular}{c|c|c|c}
455
+ \hline
456
+ output map size & 32$\times$32 & 16$\times$16 & 8$\times$8 \\
457
+ \hline
458
+ \# layers & 1+2$n$ & 2$n$ & 2$n$\\
459
+ \# filters & 16 & 32 & 64\\
460
+ \hline
461
+ \end{tabular}
462
+ \end{center}
463
+ When shortcut connections are used, they are connected to the pairs of 3$\times$3 layers ($3n$ shortcuts in total). On this dataset we use identity shortcuts in all cases (\ie, option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
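To make the layer counting explicit, a small helper (illustrative only) that reproduces the 6$n$+2 totals used in the comparisons below:
\begin{verbatim}
def cifar_layer_count(n):
    """One 3x3 conv stem, 2n conv layers for each of the three feature-map
    sizes {32, 16, 8}, and a final 10-way fully-connected layer."""
    per_stage = {32: 2 * n, 16: 2 * n, 8: 2 * n}   # filters: 16, 32, 64
    total = 1 + sum(per_stage.values()) + 1        # stem + conv stacks + fc
    return total, per_stage

# cifar_layer_count(3)[0] == 20; 5 -> 32; 7 -> 44; 9 -> 56; 18 -> 110; 200 -> 1202
\end{verbatim}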
464
+
465
+ \renewcommand\arraystretch{1.05}
466
+ \setlength{\tabcolsep}{5pt}
467
+ \begin{table}[t]
468
+ \begin{center}
469
+ \small
470
+ \resizebox{1.0\linewidth}{!}{
471
+ \begin{tabular}{c|c|c|l}
472
+ \hline
473
+ \multicolumn{3}{c|}{method} & error (\%) \\
474
+ \hline
475
+ \multicolumn{3}{c|}{Maxout \cite{Goodfellow2013}} & 9.38 \\
476
+ \multicolumn{3}{c|}{NIN \cite{Lin2013}} & 8.81 \\
477
+ \multicolumn{3}{c|}{DSN \cite{Lee2014}} & 8.22 \\
478
+ \hline
479
+ & \# layers & \# params & \\
480
+ \hline
481
+ FitNet \cite{Romero2015} & 19 & 2.5M & 8.39 \\
482
+ Highway \cite{Srivastava2015,Srivastava2015a} & 19 & 2.3M & 7.54 \footnotesize (7.72$\pm$0.16) \\
483
+ Highway \cite{Srivastava2015,Srivastava2015a} & 32 & 1.25M & 8.80 \\
484
+ \hline
485
+ ResNet & 20 & 0.27M & 8.75 \\
486
+ ResNet & 32 & 0.46M & 7.51 \\
487
+ ResNet & 44 & 0.66M & 7.17 \\
488
+ ResNet & 56 & 0.85M & 6.97 \\
489
+ ResNet & 110 & 1.7M & \textbf{6.43} \footnotesize (6.61$\pm$0.16) \\
+ ResNet & 1202 & 19.4M & 7.93 \\
490
+ \hline
491
+ \end{tabular}
492
+ }
493
+ \end{center}
494
+ \caption{Classification error on the \textbf{CIFAR-10} test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show ``best (mean$\pm$std)'' as in \cite{Srivastava2015a}.}
495
+ \label{tab:cifar}
496
+ \end{table}
497
+
498
+ We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in \cite{He2015} and BN \cite{Ioffe2015} but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in \cite{Lee2014} for training: 4 pixels are padded on each side, and a 32$\times$32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32$\times$32 image.
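The training schedule and the pad-and-crop augmentation are simple enough to state in code; the sketch below is illustrative only (zero padding is assumed for the 4-pixel border, following the common practice of \cite{Lee2014}).
\begin{verbatim}
import random
import numpy as np

def learning_rate(iteration, base_lr=0.1):
    """Start at 0.1, divide by 10 at 32k and 48k iterations; training is
    terminated at 64k iterations."""
    if iteration < 32000:
        return base_lr
    if iteration < 48000:
        return base_lr / 10
    return base_lr / 100

def pad_and_crop(image, pad=4):
    """4-pixel padding on each side, then a random 32x32 crop of the padded
    image or of its horizontal flip.  `image` is a (32, 32, 3) array."""
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    if random.random() < 0.5:
        padded = padded[:, ::-1, :]
    x = random.randint(0, 2 * pad)
    y = random.randint(0, 2 * pad)
    return padded[y:y + 32, x:x + 32, :]
\end{verbatim}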
499
+
500
+ We compare $n=\{3,5,7,9\}$, leading to 20, 32, 44, and 56-layer networks.
501
+ Fig.~\ref{fig:cifar} (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig.~\ref{fig:imagenet}, left) and on MNIST (see \cite{Srivastava2015}), suggesting that such an optimization difficulty is a fundamental problem.
502
+
503
+ Fig.~\ref{fig:cifar} (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig.~\ref{fig:imagenet}, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.
504
+
505
+ We further explore $n=18$ that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging\footnote{With an initial learning rate of 0.1, it starts converging ($<$90\% error) after several epochs, but still reaches similar accuracy.}. So we use 0.01 to warm up the training until the training error is below 80\% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig.~\ref{fig:cifar}, middle). It has \emph{fewer} parameters than other deep and thin networks such as FitNet \cite{Romero2015} and Highway \cite{Srivastava2015} (Table~\ref{tab:cifar}), yet is among the state-of-the-art results (6.43\%, Table~\ref{tab:cifar}).
506
+
507
+ \begin{figure}[t]
508
+ \begin{center}
509
+ \includegraphics[width=0.9\linewidth]{eps/std}
510
+ \end{center}
511
+ \vspace{-1.5em}
512
+ \caption{Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3$\times$3 layer, after BN and before nonlinearity. \textbf{Top}: the layers are shown in their original order. \textbf{Bottom}: the responses are ranked in descending order.}
513
+ \label{fig:std}
514
+ \end{figure}
515
+
516
+
517
+ \vspace{6pt}
518
+ \noindent\textbf{Analysis of Layer Responses.}
519
+ Fig.~\ref{fig:std} shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3$\times$3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions.
520
+ Fig.~\ref{fig:std} shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.\ref{sec:motivation}) that the residual functions might be generally closer to zero than the non-residual functions.
521
+ We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig.~\ref{fig:std}. When there are more layers, an individual layer of ResNets tends to modify the signal less.
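Response statistics of this kind can be collected with forward hooks; a PyTorch-flavoured sketch, assuming the model's BN layers sit between each 3$\times$3 convolution and its nonlinearity (the paper does not describe the tooling actually used).
\begin{verbatim}
import torch

def layer_response_stds(model, loader, device="cpu"):
    """Average std of each BatchNorm2d output (i.e., after BN and before the
    nonlinearity) over a data loader yielding (images, labels) batches."""
    stds, handles = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            stds.setdefault(name, []).append(output.detach().std().item())
        return hook

    for name, module in model.named_modules():
        if isinstance(module, torch.nn.BatchNorm2d):
            handles.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        for images, _ in loader:
            model(images.to(device))

    for h in handles:
        h.remove()
    return {name: sum(v) / len(v) for name, v in stds.items()}
\end{verbatim}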
522
+
523
+ \vspace{6pt}
524
+ \noindent\textbf{Exploring Over 1000 layers.}
525
+ We explore an aggressively deep model of over 1000 layers. We set $n=200$ that leads to a 1202-layer network, which is trained as described above. Our method shows \emph{no optimization difficulty}, and this $10^3$-layer network is able to achieve \emph{training error} $<$0.1\% (Fig.~\ref{fig:cifar}, right). Its test error is still fairly good (7.93\%, Table~\ref{tab:cifar}).
526
+
527
+ But there are still open problems on such aggressively deep models.
528
+ The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting.
529
+ The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout \cite{Goodfellow2013} or dropout \cite{Hinton2012} is applied to obtain the best results (\cite{Goodfellow2013,Lin2013,Lee2014,Romero2015}) on this dataset.
530
+ In this paper, we use no maxout/dropout and simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.
531
+
532
+ \subsection{Object Detection on PASCAL and MS COCO}
533
+
534
+ \renewcommand\arraystretch{1.05}
535
+ \setlength{\tabcolsep}{8pt}
536
+ \begin{table}[t]
537
+ \begin{center}
538
+ \small
539
+ \begin{tabular}{c|c|c}
540
+ \hline
541
+ training data & 07+12 & 07++12 \\
542
+ \hline
543
+ test data & VOC 07 test & VOC 12 test \\
544
+ \hline
545
+ VGG-16 & 73.2 & 70.4 \\
546
+ ResNet-101 & \textbf{76.4} & \textbf{73.8} \\
547
+ \hline
548
+ \end{tabular}
549
+ \end{center}
550
+ \vspace{-.5em}
551
+ \caption{Object detection mAP (\%) on the PASCAL VOC 2007/2012 test sets using \textbf{baseline} Faster R-CNN. See also Table~\ref{tab:voc07_all} and \ref{tab:voc12_all} for better results.
552
+ }
553
+ \vspace{-.5em}
554
+ \label{tab:detection_voc}
555
+ \setlength{\tabcolsep}{5pt}
556
+ \begin{center}
557
+ \small
558
+ \begin{tabular}{c|c|c}
559
+ \hline
560
+ metric & ~~~mAP@.5~~~ & mAP@[.5, .95] \\
561
+ \hline
562
+ VGG-16 & 41.5 & 21.2 \\
563
+ ResNet-101 & \textbf{48.4} & \textbf{27.2} \\
564
+ \hline
565
+ \end{tabular}
566
+ \end{center}
567
+ \vspace{-.5em}
568
+ \caption{Object detection mAP (\%) on the COCO validation set using \textbf{baseline} Faster R-CNN. See also Table~\ref{tab:detection_coco_improve} for better results.
569
+ }
570
+ \vspace{-.5em}
571
+ \label{tab:detection_coco}
572
+ \end{table}
573
+
574
+ Our method has good generalization performance on other recognition tasks. Table~\ref{tab:detection_voc} and ~\ref{tab:detection_coco} show the object detection baseline results on PASCAL VOC 2007 and 2012 \cite{Everingham2010} and COCO \cite{Lin2014}. We adopt \emph{Faster R-CNN} \cite{Ren2015} as the detection method. Here we are interested in the improvements of replacing VGG-16 \cite{Simonyan2015} with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0\% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28\% relative improvement. This gain is solely due to the learned representations.
575
+
576
+ Based on deep residual nets, we won the 1st places in several tracks in ILSVRC \& COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
577
+
578
+
579
+
580
+
581
+
582
+
583
+
584
+ {
585
+ \footnotesize
586
+ \bibliographystyle{ieee}
587
+ \bibliography{residual_v1_arxiv_release}
588
+ }
589
+
590
+ \newpage
591
+
592
+ \appendix
593
+ \section{Object Detection Baselines}
594
+
595
+ In this section we introduce our detection method based on the baseline Faster R-CNN \cite{Ren2015} system.
596
+ The models are initialized by the ImageNet classification models, and then fine-tuned on the object detection data. We have experimented with ResNet-50/101 at the time of the ILSVRC \& COCO 2015 detection competitions.
597
+
598
+ Unlike VGG-16 used in \cite{Ren2015}, our ResNet has no hidden fc layers. We adopt the idea of ``Networks on Conv feature maps'' (NoC) \cite{Ren2015a} to address this issue.
599
+ We compute the full-image shared conv feature maps using those layers whose strides on the image are no greater than 16 pixels (\ie, conv1, conv2\_x, conv3\_x, and conv4\_x, 91 conv layers in total in ResNet-101; Table~\ref{tab:arch}). We consider these layers as analogous to the 13 conv layers in VGG-16, and by doing so, both ResNet and VGG-16 have conv feature maps of the same total stride (16 pixels).
600
+ These layers are shared by a region proposal network (RPN, generating 300 proposals) \cite{Ren2015} and a Fast R-CNN detection network \cite{Girshick2015}.
601
+ RoI pooling \cite{Girshick2015} is performed before conv5\_1. On this RoI-pooled feature, all layers of conv5\_x and up are adopted for each region, playing the roles of VGG-16's fc layers.
602
+ The final classification layer is replaced by two sibling layers (classification and box regression \cite{Girshick2015}).
603
+
604
+ For the usage of BN layers, after pre-training, we compute the BN statistics (means and variances) for each layer on the ImageNet training set. Then the BN layers are fixed during fine-tuning for object detection. As such, the BN layers become linear activations with constant offsets and scales, and BN statistics are not updated by fine-tuning. We fix the BN layers mainly for reducing memory consumption in Faster R-CNN training.
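Concretely, a BN layer with frozen statistics reduces to a per-channel affine map; a minimal sketch (the epsilon value is an assumption):
\begin{verbatim}
import numpy as np

def frozen_bn(x, mean, var, gamma, beta, eps=1e-5):
    """BatchNorm with statistics computed once on the ImageNet training set
    and never updated during detection fine-tuning: a linear activation with
    constant scale and offset, applied per channel."""
    scale = gamma / np.sqrt(var + eps)
    offset = beta - mean * scale
    return scale * x + offset
\end{verbatim}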
605
+
606
+
607
+ \vspace{.5em}
608
+ \noindent\textbf{PASCAL VOC}
609
+
610
+ Following \cite{Girshick2015,Ren2015}, for the PASCAL VOC 2007 \emph{test} set, we use the 5k \emph{trainval} images in VOC 2007 and 16k \emph{trainval} images in VOC 2012 for training (``07+12''). For the PASCAL VOC 2012 \emph{test} set, we use the 10k \emph{trainval}+\emph{test} images in VOC 2007 and 16k \emph{trainval} images in VOC 2012 for training (``07++12''). The hyper-parameters for training Faster R-CNN are the same as in \cite{Ren2015}.
611
+ Table~\ref{tab:detection_voc} shows the results. ResNet-101 improves the mAP by $>$3\% over VGG-16. This gain is solely because of the improved features learned by ResNet.
612
+
613
+
614
+
615
+ \vspace{.5em}
616
+ \noindent\textbf{MS COCO}
617
+
618
+ The MS COCO dataset \cite{Lin2014} involves 80 object categories. We evaluate the PASCAL VOC metric (mAP @ IoU = 0.5) and the standard COCO metric (mAP @ IoU = .5:.05:.95). We use the 80k images on the train set for training and the 40k images on the val set for evaluation.
619
+ Our detection system for COCO is similar to that for PASCAL VOC.
620
+ We train the COCO models with an 8-GPU implementation, and thus the RPN step has a mini-batch size of 8 images (\ie, 1 per GPU) and the Fast R-CNN step has a mini-batch size of 16 images. The RPN step and Fast R-CNN step are both trained for 240k iterations with a learning rate of 0.001 and then for 80k iterations with 0.0001.
621
+
622
+
623
+ Table~\ref{tab:detection_coco} shows the results on the MS COCO validation set. ResNet-101 has a 6\% increase of mAP@[.5, .95] over VGG-16, which is a 28\% relative improvement, solely contributed by the features learned by the better network. Remarkably, the mAP@[.5, .95]'s absolute increase (6.0\%) is nearly as big as mAP@.5's (6.9\%). This suggests that a deeper network can improve both recognition and localization.
624
+
625
+
626
+
627
+ \section{Object Detection Improvements}
628
+
629
+ For completeness, we report the improvements made for the competitions. These improvements are based on deep features and thus should benefit from residual learning.
630
+
631
+ \renewcommand\arraystretch{1.05}
632
+ \setlength{\tabcolsep}{4pt}
633
+ \begin{table*}[t]
634
+ \begin{center}
635
+ \small
636
+ \begin{tabular}{l|c|c|c|c}
637
+ \hline
638
+ training data & \multicolumn{2}{c|}{COCO train} & \multicolumn{2}{c}{COCO trainval} \\
639
+ \hline
640
+ test data & \multicolumn{2}{c|}{COCO val} & \multicolumn{2}{c}{COCO test-dev}\\
641
+ \hline
642
+ mAP & ~~~~@.5~~~~ & @[.5, .95] & ~~~~@.5~~~~ & @[.5, .95]\\
643
+ \hline
644
+ baseline Faster R-CNN (VGG-16) & 41.5 & 21.2 & \\
645
+ baseline Faster R-CNN (ResNet-101) & 48.4 & 27.2 & \\
646
+ ~+box refinement & 49.9 & 29.9 & \\
647
+ ~+context & 51.1 & 30.0 & 53.3 & 32.2 \\
648
+ ~+multi-scale testing & 53.8 & 32.5 & \textbf{55.7} & \textbf{34.9} \\
649
+ \hline
650
+ ensemble & & & \textbf{59.0} & \textbf{37.4} \\
651
+ \hline
652
+ \end{tabular}
653
+ \end{center}
654
+ \vspace{-.5em}
655
+ \caption{Object detection improvements on MS COCO using Faster R-CNN and ResNet-101.}
656
+ \vspace{-.5em}
657
+ \label{tab:detection_coco_improve}
658
+ \end{table*}
659
+
660
+ \newcolumntype{x}[1]{>{\centering}p{#1pt}}
661
+ \newcolumntype{y}{>{\centering}p{16pt}}
662
+ \renewcommand{\hl}[1]{\textbf{#1}}
663
+ \newcommand{\ct}[1]{\fontsize{6pt}{1pt}\selectfont{#1}}
664
+ \renewcommand{\arraystretch}{1.2}
665
+ \setlength{\tabcolsep}{1.5pt}
666
+ \begin{table*}[t]
667
+ \begin{center}
668
+ \footnotesize
669
+ \vspace{1em}
670
+ \resizebox{\linewidth}{!}{
671
+ \begin{tabular}{l|x{40}|x{54}|x{20}|yyyyyyyyyyyyyyyyyyyc}
672
+ \hline
673
+ \ct{system} & net & data & mAP & \ct{areo} & \ct{bike} & \ct{bird} & \ct{boat} & \ct{bottle} & \ct{bus} & \ct{car} & \ct{cat} & \ct{chair} & \ct{cow} & \ct{table} & \ct{dog} & \ct{horse} & \ct{mbike} & \ct{person} & \ct{plant} & \ct{sheep} & \ct{sofa} & \ct{train} & \ct{tv} \\
674
+ \hline
675
+ \footnotesize baseline & \footnotesize VGG-16 & 07+12 & {73.2} & 76.5 & 79.0 & {70.9} & {65.5} & {52.1} & {83.1} & {84.7} & 86.4 & 52.0 & {81.9} & 65.7 & {84.8} & {84.6} & {77.5} & {76.7} & 38.8 & {73.6} & 73.9 & {83.0} & {72.6}\\
676
+ \footnotesize baseline & \footnotesize ResNet-101 & 07+12 & 76.4 & 79.8 & 80.7 & 76.2 & 68.3 & 55.9 & 85.1 & 85.3 & \hl{89.8} & 56.7 & 87.8 & 69.4 & 88.3 & 88.9 & 80.9 & 78.4 & 41.7 & 78.6 & 79.8 & 85.3 & 72.0 \\
677
+ \footnotesize baseline+++ & \footnotesize ResNet-101 & COCO+07+12 & \hl{85.6} & \hl{90.0} & \hl{89.6} & \hl{87.8} & \hl{80.8} & \hl{76.1} & \hl{89.9} & \hl{89.9} & {89.6} & \hl{75.5} & \hl{90.0} & \hl{80.7} & \hl{89.6} & \hl{90.3} & \hl{89.1} & \hl{88.7} & \hl{65.4} & \hl{88.1} & \hl{85.6} & \hl{89.0} & \hl{86.8} \\
678
+ \hline
679
+ \end{tabular}
680
+ }
681
+ \end{center}
682
+ \vspace{-.5em}
683
+ \caption{Detection results on the PASCAL VOC 2007 test set. The baseline is the Faster R-CNN system. The system ``baseline+++'' includes box refinement, context, and multi-scale testing as in Table~\ref{tab:detection_coco_improve}.}
684
+ \label{tab:voc07_all}
685
+ \begin{center}
686
+ \footnotesize
687
+ \resizebox{\linewidth}{!}{
688
+ \begin{tabular}{l|x{40}|x{54}|x{20}|yyyyyyyyyyyyyyyyyyyc}
689
+ \hline
690
+ \ct{system} & net & data & mAP & \ct{aero} & \ct{bike} & \ct{bird} & \ct{boat} & \ct{bottle} & \ct{bus} & \ct{car} & \ct{cat} & \ct{chair} & \ct{cow} & \ct{table} & \ct{dog} & \ct{horse} & \ct{mbike} & \ct{person} & \ct{plant} & \ct{sheep} & \ct{sofa} & \ct{train} & \ct{tv} \\
691
+ \hline
692
+ \footnotesize baseline & \footnotesize VGG-16 & 07++12 & {70.4} & {84.9} & {79.8} & {74.3} & {53.9} & {49.8} & 77.5 & {75.9} & 88.5 & {45.6} & {77.1} & {55.3} & 86.9 & {81.7} & {80.9} & {79.6} & {40.1} & {72.6} & 60.9 & {81.2} & 61.5\\
693
+ \footnotesize baseline & \footnotesize ResNet-101 & 07++12 & 73.8 & 86.5 & 81.6 & 77.2 & 58.0 & 51.0 & 78.6 & 76.6 & 93.2 & 48.6 & 80.4 & 59.0 & 92.1 & 85.3 & 84.8 & 80.7 & 48.1 & 77.3 & 66.5 & 84.7 & 65.6 \\
694
+ \footnotesize baseline+++ & \footnotesize ResNet-101 & COCO+07++12 & \hl{83.8} & \hl{92.1} & \hl{88.4} & \hl{84.8} & \hl{75.9} & \hl{71.4} & \hl{86.3} & \hl{87.8} & \hl{94.2} & \hl{66.8} & \hl{89.4} & \hl{69.2} & \hl{93.9} & \hl{91.9} & \hl{90.9} & \hl{ 89.6} & \hl{67.9} & \hl{88.2} & \hl{76.8} & \hl{90.3} & \hl{80.0} \\
695
+ \hline
696
+ \end{tabular}
697
+ }
698
+ \end{center}
699
+ \vspace{-.5em}
700
+ \caption{Detection results on the PASCAL VOC 2012 test set (\url{http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=4}). The baseline is the Faster R-CNN system. The system ``baseline+++'' includes box refinement, context, and multi-scale testing as in Table~\ref{tab:detection_coco_improve}.}
701
+ \label{tab:voc12_all}
702
+ \end{table*}
703
+
704
+ \vspace{.5em}
705
+ \noindent\textbf{MS COCO}
706
+
707
+ \noindent\emph{Box refinement.} Our box refinement partially follows the iterative localization in \cite{Gidaris2015}.
708
+ In Faster R-CNN, the final output is a regressed box that differs from its proposal box. At inference time, we therefore pool a new feature from the regressed box and obtain a new classification score and a new regressed box.
709
+ We combine these 300 new predictions with the original 300 predictions. Non-maximum suppression (NMS) is applied on the union set of predicted boxes using an IoU threshold of 0.3 \cite{Girshick2014}, followed by box voting \cite{Gidaris2015}.
710
+ Box refinement improves mAP by about 2 points (Table~\ref{tab:detection_coco_improve}).
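+
+ As a rough illustration of this step, the following sketch merges the original and re-regressed predictions of one class, applies NMS with an IoU threshold of 0.3, and replaces each kept box by the score-weighted average of its suppressed group, which is one common reading of box voting \cite{Gidaris2015}. The function and variable names are illustrative, scores are assumed to be non-negative (for example, softmax probabilities), and the exact weighting may differ from the implementation used for the reported numbers.
+ \begin{verbatim}
+ import numpy as np
+
+ def pairwise_iou(a, b):
+     """IoU between all boxes in a and b; boxes are (x1, y1, x2, y2)."""
+     area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
+     area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
+     x1 = np.maximum(a[:, None, 0], b[None, :, 0])
+     y1 = np.maximum(a[:, None, 1], b[None, :, 1])
+     x2 = np.minimum(a[:, None, 2], b[None, :, 2])
+     y2 = np.minimum(a[:, None, 3], b[None, :, 3])
+     inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
+     return inter / (area_a[:, None] + area_b[None, :] - inter)
+
+ def refine_nms_vote(orig_boxes, orig_scores, new_boxes, new_scores,
+                     iou_thr=0.3):
+     """Union old/new predictions, run NMS, then score-weighted box voting."""
+     boxes = np.concatenate([orig_boxes, new_boxes])
+     scores = np.concatenate([orig_scores, new_scores])
+     ious = pairwise_iou(boxes, boxes)
+     suppressed = np.zeros(len(boxes), dtype=bool)
+     out_boxes, out_scores = [], []
+     for i in np.argsort(-scores):          # highest score first
+         if suppressed[i]:
+             continue
+         group = np.where((ious[i] >= iou_thr) & ~suppressed)[0]
+         suppressed[group] = True            # group includes i itself
+         w = scores[group]
+         out_boxes.append((boxes[group] * w[:, None]).sum(0) / w.sum())
+         out_scores.append(scores[i])
+     return np.array(out_boxes), np.array(out_scores)
+ \end{verbatim}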
711
+
712
+ \vspace{.5em}
713
+ \noindent\emph{Global context.} We incorporate global context in the Fast R-CNN step. Given the full-image conv feature map, we pool a feature by global Spatial Pyramid Pooling \cite{He2014} (with a ``single-level'' pyramid), which can be implemented as ``RoI'' pooling using the entire image's bounding box as the RoI. This pooled feature is fed into the post-RoI layers to obtain a global context feature. This global feature is concatenated with the original per-region feature, followed by the sibling classification and box regression layers. This new structure is trained end-to-end.
714
+ Global context improves mAP@.5 by about 1 point (Table~\ref{tab:detection_coco_improve}).
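+
+ A minimal PyTorch-style sketch of this structure is given below, assuming torchvision's RoI pooling operator and a shared post-RoI head; the function name, the 7$\times$7 pooling size, and the 1/16 feature stride are assumptions for illustration rather than the exact settings used here.
+ \begin{verbatim}
+ import torch
+ from torchvision.ops import roi_pool
+
+ def concat_global_context(feat_map, rois, post_roi_head,
+                           output_size=(7, 7), spatial_scale=1.0 / 16):
+     """Append a whole-image ("global context") feature to each RoI feature.
+
+     feat_map:      (1, C, H, W) full-image conv feature map.
+     rois:          (N, 5) boxes as (batch_idx, x1, y1, x2, y2) in image coords.
+     post_roi_head: post-RoI layers shared by both branches, returning (N, D).
+     """
+     _, _, h, w = feat_map.shape
+     # "Single-level" SPP == RoI pooling with the whole image as the RoI.
+     image_roi = feat_map.new_tensor(
+         [[0.0, 0.0, 0.0, w / spatial_scale, h / spatial_scale]])
+     global_feat = post_roi_head(
+         roi_pool(feat_map, image_roi, output_size, spatial_scale))
+     region_feat = post_roi_head(
+         roi_pool(feat_map, rois, output_size, spatial_scale))
+     # Broadcast the single global feature to all N regions and concatenate.
+     global_feat = global_feat.expand(region_feat.size(0), -1)
+     return torch.cat([region_feat, global_feat], dim=1)  # -> cls/reg siblings
+ \end{verbatim}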
715
+
716
+ \vspace{.5em}
717
+ \noindent\emph{Multi-scale testing.} In the above, all results are obtained by single-scale training/testing as in \cite{Ren2015}, where the image's shorter side is $s=600$ pixels. Multi-scale training/testing has been developed in \cite{He2014,Girshick2015} by selecting a scale from a feature pyramid, and in \cite{Ren2015a} by using maxout layers. In our current implementation, we have performed multi-scale \emph{testing} following \cite{Ren2015a}; we have not performed multi-scale training because of limited time. In addition, we have performed multi-scale testing only for the Fast R-CNN step (but not yet for the RPN step).
718
+ With a trained model, we compute conv feature maps on an image pyramid, where the image's shorter sides are $s\in\{200, 400, 600, 800, 1000\}$. We select two adjacent scales from the pyramid following \cite{Ren2015a}. RoI pooling and subsequent layers are performed on the feature maps of these two scales \cite{Ren2015a}, which are merged by maxout as in \cite{Ren2015a}.
719
+ Multi-scale testing improves the mAP by over 2 points (Table~\ref{tab:detection_coco_improve}).
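+
+ The sketch below shows one plausible reading of this procedure for a single RoI: pick two adjacent pyramid scales, pool the RoI feature on both, and merge by element-wise maxout. The scale-selection heuristic (rescaled RoI area closest to 224$\times$224) is an assumption made for illustration, as the selection rule is not spelled out above.
+ \begin{verbatim}
+ import numpy as np
+
+ SCALES = (200, 400, 600, 800, 1000)  # shorter-side lengths of the pyramid
+
+ def select_adjacent_scales(roi_area, orig_shorter_side,
+                            scales=SCALES, target=224 ** 2):
+     """Pick two adjacent pyramid scales for one RoI (illustrative heuristic:
+     the rescaled RoI area should be closest to 224x224)."""
+     dist = [abs(roi_area * (s / orig_shorter_side) ** 2 - target)
+             for s in scales]
+     i = int(np.argmin(dist))
+     j = i + 1 if i + 1 < len(scales) else i - 1
+     return tuple(sorted((scales[i], scales[j])))
+
+ def maxout_merge(pooled_feat_a, pooled_feat_b):
+     """Element-wise maxout over RoI features pooled from the two selected
+     scales; the merged feature then goes through the remaining Fast R-CNN
+     layers exactly as in single-scale testing."""
+     return np.maximum(pooled_feat_a, pooled_feat_b)
+ \end{verbatim}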
720
+
721
+ \vspace{.5em}
722
+ \noindent\emph{Using validation data.} Next we use the 80k+40k trainval set for training and the 20k test-dev set for evaluation. The test-dev set has no publicly available ground truth and the result is reported by the evaluation server. Under this setting, the results are an mAP@.5 of 55.7\% and an mAP@[.5, .95] of 34.9\% (Table~\ref{tab:detection_coco_improve}). This is our single-model result.
723
+
724
+ \vspace{.5em}
725
+ \noindent\emph{Ensemble.} In Faster R-CNN, the system learns both region proposals and object classifiers, so an ensemble can be used to boost both tasks. We use an ensemble for proposing regions, and the union set of proposals is then processed by an ensemble of per-region classifiers.
726
+ Table~\ref{tab:detection_coco_improve} shows our result based on an ensemble of 3 networks. On the test-dev set, the mAP@.5 is 59.0\% and the mAP@[.5, .95] is 37.4\%. \emph{This result won the 1st place in the detection task in COCO 2015.}
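+
+ Schematically, the ensemble operates as in the sketch below: the proposal sets of the individual RPNs are unioned, and every region in the union is scored by each of the per-region classifiers. The interfaces (\texttt{propose}, \texttt{score\_regions}, \texttt{nms\_fn}) and the score-averaging rule are assumptions for illustration; the text does not specify how the classifier outputs are combined.
+ \begin{verbatim}
+ import numpy as np
+
+ def ensemble_detect(image, rpn_models, rcnn_models, nms_fn):
+     """Union the proposals of several RPNs, then combine the per-region
+     class scores of several detection heads (here by simple averaging)
+     before the final NMS."""
+     proposals = np.concatenate([m.propose(image) for m in rpn_models])
+     scores = np.mean([m.score_regions(image, proposals)
+                       for m in rcnn_models], axis=0)
+     return nms_fn(proposals, scores)
+ \end{verbatim}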
727
+
728
+
729
+ \vspace{1em}
730
+ \noindent\textbf{PASCAL VOC}
731
+
732
+ We revisit the PASCAL VOC datasets using the above model. Starting from the single COCO model (55.7\% mAP@.5 in Table~\ref{tab:detection_coco_improve}), we fine-tune it on the PASCAL VOC sets. The improvements of box refinement, context, and multi-scale testing are also adopted. By doing so we achieve 85.6\% mAP on PASCAL VOC 2007 (Table~\ref{tab:voc07_all}) and 83.8\% on PASCAL VOC 2012 (Table~\ref{tab:voc12_all})\footnote{\fontsize{6.5pt}{1em}\selectfont\url{http://host.robots.ox.ac.uk:8080/anonymous/3OJ4OJ.html}, submitted on 2015-11-26.}. The result on PASCAL VOC 2012 is 10 points higher than the previous state-of-the-art result \cite{Gidaris2015}.
733
+
734
+
735
+ \vspace{1em}
736
+ \noindent\textbf{ImageNet Detection}
737
+
738
+ \renewcommand\arraystretch{1.2}
739
+ \setlength{\tabcolsep}{10pt}
740
+ \begin{table}[t]
741
+ \begin{center}
742
+ \small
743
+ \begin{tabular}{l|c|c}
744
+ \hline
745
+ & val2 & test \\
746
+ \hline
747
+ GoogLeNet \cite{Szegedy2015} (ILSVRC'14) & - & 43.9 \\
748
+ \hline
749
+ our single model (ILSVRC'15) & 60.5 & 58.8 \\
750
+ our ensemble (ILSVRC'15) & \textbf{63.6} & \textbf{62.1} \\
751
+ \hline
752
+ \end{tabular}
753
+ \end{center}
754
+ \vspace{-.5em}
755
+ \caption{Our results (mAP, \%) on the ImageNet detection dataset. Our detection system is Faster R-CNN \cite{Ren2015} with the improvements in Table~\ref{tab:detection_coco_improve}, using ResNet-101.
756
+ }
757
+ \vspace{-.5em}
758
+ \label{tab:imagenet_det}
759
+ \end{table}
760
+
761
+ The ImageNet Detection (DET) task involves 200 object categories. The accuracy is evaluated by mAP@.5.
762
+ Our object detection algorithm for ImageNet DET is the same as that for MS COCO in Table~\ref{tab:detection_coco_improve}. The networks are pre-trained on the 1000-class ImageNet classification set, and are fine-tuned on the DET data. We split the validation set into two parts (val1/val2) following \cite{Girshick2014}. We fine-tune the detection models using the DET training set and the val1 set. The val2 set is used for validation. We do not use other ILSVRC 2015 data. Our single model with ResNet-101 has 58.8\% mAP and our ensemble of 3 models has 62.1\% mAP on the DET test set (Table~\ref{tab:imagenet_det}). \emph{This result won the 1st place in the ImageNet detection task in ILSVRC 2015}, surpassing the second place by \textbf{8.5 points} (absolute).
763
+
764
+
765
+
766
+ \section{ImageNet Localization}
767
+ \label{sec:appendix_localization}
768
+
769
+ \renewcommand\arraystretch{1.05}
770
+ \setlength{\tabcolsep}{2pt}
771
+ \begin{table}[t]
772
+ \begin{center}
773
+ \small
774
+ \resizebox{1.0\linewidth}{!}{
775
+ \begin{tabular}{c|c|c|c|c|c}
776
+ \hline
777
+ \tabincell{c}{LOC \\ method} & \tabincell{c}{LOC \\ network} & testing & \tabincell{c}{LOC error \\on GT CLS} & \tabincell{c}{classification\\ network} & \tabincell{c}{top-5 LOC error \\ on predicted CLS} \\
778
+ \hline
779
+ VGG's \cite{Simonyan2015} & VGG-16 & 1-crop & 33.1 \cite{Simonyan2015} & & \\
780
+ RPN & ResNet-101 & 1-crop & 13.3 & & \\
781
+ RPN & ResNet-101 & dense & 11.7 & & \\
782
+ \hline
783
+ RPN & ResNet-101 & dense & & ResNet-101 & 14.4 \\
784
+ RPN+RCNN & ResNet-101 & dense & & ResNet-101 & \textbf{10.6} \\
785
+ RPN+RCNN & ensemble & dense & & ensemble & \textbf{8.9} \\
786
+ \hline
787
+ \end{tabular}
788
+ }
789
+ \end{center}
790
+ \vspace{-.5em}
791
+ \caption{Localization error (\%) on the ImageNet validation set. In the column ``LOC error on GT CLS'', the ground truth class is used \cite{Simonyan2015}.
792
+ In the ``testing'' column, ``1-crop'' denotes testing on a center crop of 224$\times$224 pixels, ``dense'' denotes dense (fully convolutional) and multi-scale testing.
793
+ }
794
+ \vspace{-.5em}
795
+ \label{tab:localization}
796
+ \end{table}
797
+
798
+ The ImageNet Localization (LOC) task \cite{Russakovsky2014} requires classifying and localizing the objects.
799
+ Following \cite{Sermanet2014,Simonyan2015}, we assume that image-level classifiers are first applied to predict the class labels of an image, and that the localization algorithm is only responsible for predicting bounding boxes for the predicted classes. We adopt the ``per-class regression'' (PCR) strategy \cite{Sermanet2014,Simonyan2015}, learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization.
800
+ We train networks on the provided 1000-class ImageNet training set.
801
+
802
+ Our localization algorithm is based on the RPN framework of \cite{Ren2015} with a few modifications.
803
+ Unlike the category-agnostic RPN in \cite{Ren2015}, our RPN for localization is designed in a \emph{per-class} form. As in \cite{Ren2015}, this RPN ends with two sibling 1$\times$1 convolutional layers for binary classification (\emph{cls}) and box regression (\emph{reg}), but in contrast to \cite{Ren2015} the \emph{cls} and \emph{reg} layers are both in a \emph{per-class} form. Specifically, the \emph{cls} layer has a 1000-d output, where each dimension is a \emph{binary logistic regression} for predicting whether a region is of that object class; the \emph{reg} layer has a 1000$\times$4-d output consisting of box regressors for the 1000 classes.
804
+ As in \cite{Ren2015}, our bounding box regression is with reference to multiple translation-invariant ``anchor'' boxes at each position.
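+
+ A minimal PyTorch sketch of such a per-class head is given below. The intermediate 3$\times$3 convolution, the channel widths, and the per-anchor replication of the outputs are illustrative assumptions; only the per-class sibling 1$\times$1 \emph{cls}/\emph{reg} outputs are taken from the description above.
+ \begin{verbatim}
+ import torch.nn as nn
+
+ class PerClassRPNHead(nn.Module):
+     """RPN head whose cls and reg outputs are per class (and per anchor),
+     unlike the category-agnostic head of the standard RPN."""
+     def __init__(self, in_channels=1024, mid_channels=256,
+                  num_classes=1000, num_anchors=9):
+         super().__init__()
+         self.conv = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
+         self.relu = nn.ReLU(inplace=True)
+         # One binary logistic output per class per anchor at each position.
+         self.cls = nn.Conv2d(mid_channels, num_classes * num_anchors, 1)
+         # Class-specific box regressors: 4 offsets per class per anchor.
+         self.reg = nn.Conv2d(mid_channels, num_classes * num_anchors * 4, 1)
+
+     def forward(self, feat):
+         h = self.relu(self.conv(feat))
+         return self.cls(h), self.reg(h)  # sigmoid on cls logits gives scores
+ \end{verbatim}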
805
+
806
+ As in our ImageNet classification training (Sec.~\ref{sec:impl}), we randomly sample 224$\times$224 crops for data augmentation. We use a mini-batch size of 256 images for fine-tuning.
807
+ To prevent negative samples from dominating, 8 anchors are randomly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 \cite{Ren2015}. For testing, the network is applied to the image fully-convolutionally.
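+
+ A minimal sketch of this sampling step follows, assuming anchors are labeled $+1$ (positive), $0$ (negative), or $-1$ (ignored); the label convention and function name are illustrative.
+ \begin{verbatim}
+ import numpy as np
+
+ def sample_anchors(anchor_labels, num_samples=8, seed=None):
+     """Randomly sample anchors for one image with a 1:1 pos/neg ratio."""
+     rng = np.random.default_rng(seed)
+     pos = np.where(anchor_labels == 1)[0]
+     neg = np.where(anchor_labels == 0)[0]
+     num_pos = min(num_samples // 2, len(pos))
+     num_neg = min(num_samples - num_pos, len(neg))
+     keep_pos = rng.choice(pos, size=num_pos, replace=False)
+     keep_neg = rng.choice(neg, size=num_neg, replace=False)
+     return np.concatenate([keep_pos, keep_neg])  # indices used in the loss
+ \end{verbatim}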
808
+
809
+ \renewcommand\arraystretch{1.05}
810
+ \setlength{\tabcolsep}{10pt}
811
+ \begin{table}[t]
812
+ \begin{center}
813
+ \small
814
+ \begin{tabular}{l|c|c}
815
+ \hline
816
+ \multicolumn{1}{c|}{\multirow{2}{*}{method}} & \multicolumn{2}{c}{top-5 localization err} \\\cline{2-3}
817
+ & val & test \\
818
+ \hline
819
+ OverFeat \cite{Sermanet2014} (ILSVRC'13) & 30.0 & 29.9 \\
820
+ GoogLeNet \cite{Szegedy2015} (ILSVRC'14) & - & 26.7 \\
821
+ VGG \cite{Simonyan2015} (ILSVRC'14) & 26.9 & 25.3 \\
822
+ \hline
823
+ ours (ILSVRC'15) & \textbf{8.9} & \textbf{9.0} \\
824
+ \hline
825
+ \end{tabular}
826
+ \end{center}
827
+ \vspace{-.5em}
828
+ \caption{Comparisons of localization error (\%) on the ImageNet dataset with state-of-the-art methods.
829
+ }
830
+ \vspace{-.5em}
831
+ \label{tab:localization_all}
832
+ \end{table}
833
+
834
+ Table~\ref{tab:localization} compares the localization results. Following \cite{Simonyan2015}, we first perform ``oracle'' testing using the ground truth class as the classification prediction. VGG's paper \cite{Simonyan2015} reports a center-crop error of 33.1\% (Table~\ref{tab:localization}) using ground truth classes. Under the same setting, our RPN method using the ResNet-101 network significantly reduces the center-crop error to 13.3\%. This comparison demonstrates the excellent performance of our framework.
835
+ With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7\% using ground truth classes. Using ResNet-101 for predicting classes (4.6\% top-5 classification error, Table~\ref{tab:single}), the top-5 localization error is 14.4\%.
836
+
837
+ The above results are only based on the \emph{proposal network} (RPN) in Faster R-CNN \cite{Ren2015}. One may use the \emph{detection network} (Fast R-CNN \cite{Girshick2015}) in Faster R-CNN to improve the results. But we notice that on this dataset, one image usually contains a single dominant object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN \cite{Girshick2015} generates samples with little variation, which may be undesirable for stochastic training. Motivated by this, in our current experiment we use the original R-CNN \cite{Girshick2014}, which is RoI-centric, in place of Fast R-CNN.
838
+
839
+ Our R-CNN implementation is as follows. We apply the per-class RPN trained as above to the training images to predict bounding boxes for the ground truth class. These predicted boxes play the role of class-dependent proposals.
840
+ For each training image, the 200 highest-scored proposals are extracted as training samples to train an R-CNN classifier. The image region is cropped from a proposal, warped to 224$\times$224 pixels, and fed into the classification network as in R-CNN \cite{Girshick2014}. This network ends with two sibling fc layers for \emph{cls} and \emph{reg} outputs, also in a per-class form.
841
+ This R-CNN network is fine-tuned on the training set using a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the highest scored 200 proposals for each predicted class, and the R-CNN network is used to update these proposals' scores and box positions.
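+
+ The test-time flow of this R-CNN step can be sketched as follows, where \texttt{rpn}, \texttt{rcnn}, and \texttt{crop\_and\_warp} are assumed interfaces standing in for the trained per-class RPN, the R-CNN network, and the crop-and-warp preprocessing.
+ \begin{verbatim}
+ import numpy as np
+
+ def rcnn_update(image, predicted_classes, rpn, rcnn, crop_and_warp,
+                 top_k=200):
+     """For each predicted class, take the RPN's top-scoring class-dependent
+     proposals, warp each region to 224x224, and let the R-CNN network update
+     every proposal's classification score and box position."""
+     results = {}
+     for cls in predicted_classes:
+         boxes, scores = rpn.propose(image, cls)
+         order = np.argsort(-scores)[:top_k]
+         crops = np.stack([crop_and_warp(image, boxes[i], size=224)
+                           for i in order])
+         new_scores, new_boxes = rcnn.predict(crops, cls)  # per-class cls/reg
+         results[cls] = (new_boxes, new_scores)
+     return results
+ \end{verbatim}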
842
+
843
+ This method reduces the top-5 localization error to 10.6\% (Table~\ref{tab:localization}). This is our single-model result on the validation set. Using an ensemble of networks for both classification and localization, we achieve a top-5 localization error of 9.0\% on the test set. This number significantly outperforms the ILSVRC'14 results (Table~\ref{tab:localization_all}), showing a 64\% relative reduction in error. \emph{This result won the 1st place in the ImageNet localization task in ILSVRC 2015.}
844
+
845
+
846
+ \end{document}