\documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
\newif\ifarxiv \arxivtrue
\usepackage{times}
\usepackage{natbib}
\usepackage{graphicx}
\usepackage{afterpage}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{color}
\usepackage[utf8]{inputenc}
\usepackage{refcount}
\def\data{{\rm data}}
\def\energy{{\cal E}}
\def\sigmoid{{\rm sigmoid}}
\def\note#1{}
\def\begplan{\color{red}}
\def\endplan{\color{black}}
\def\vsA{\vspace*{-0.5mm}}
\def\vsB{\vspace*{-1mm}}
\def\vsC{\vspace*{-2mm}}
\def\vsD{\vspace*{-3mm}}
\def\vsE{\vspace*{-4mm}}
\def\vsF{\vspace*{-5mm}}
\def\vsG{\vspace*{-6mm}}
\addtolength{\jot}{-1.25mm}
\newcommand{\argmin}{\operatornamewithlimits{argmin}}
\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\newcommand{\E}[2]{ {\mathbb{E}}_{#1}\left[{#2}\right] }
\newcommand{\EE}[1]{ {\mathbb{E}}\left[{#1}\right] }
\newcommand{\R}{ {\mathbb{R}} }
\begin{document}
\title{Representation Learning: A Review and New Perspectives}
\author{Yoshua Bengio$^\dagger$, Aaron Courville, and Pascal Vincent$^\dagger$\\
Department of computer science and operations research, U. Montreal\\
$\dagger$ also, Canadian Institute for Advanced Research (CIFAR)
\vsC \vsC \vsC }
\date{}
\maketitle
\vsC \vsC \vsC
\begin{abstract} \vsA
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
\end{abstract}
\vsC \vsC
\begin{IEEEkeywords}
Deep learning, representation learning, feature learning, unsupervised learning, Boltzmann Machine, autoencoder, neural nets
\end{IEEEkeywords}
\vsD
\section{Introduction}
\vsA
The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. For that reason, much of the actual effort in deploying machine learning algorithms goes into the design of preprocessing pipelines and data transformations that result in a representation of the data that can support effective machine learning. Such feature engineering is important but labor-intensive and highlights the weakness of current learning algorithms: their inability to extract and organize the discriminative information from the data. Feature engineering is a way to take advantage of human ingenuity and prior knowledge to compensate for that weakness. In order to expand the scope and ease of applicability of machine learning, it would be highly desirable to make learning algorithms less dependent on feature engineering, so that novel applications could be constructed faster, and more importantly, to make progress towards Artificial Intelligence (AI). An AI must fundamentally {\em understand the world around us}, and we argue that this can only be achieved if it can learn to identify and disentangle the underlying explanatory factors hidden in the observed milieu of low-level sensory data. This paper is about {\em representation learning}, i.e., learning representations of the data that make it easier to extract useful information when building classifiers or other predictors. In the case of probabilistic models, a good representation is often one that captures the posterior distribution of the underlying explanatory factors for the observed input.
A good representation is also one that is useful as input to a supervised predictor. Among the various ways of learning representations, this paper focuses on deep learning methods: those that are formed by the composition of multiple non-linear transformations, with the goal of yielding more abstract -- and ultimately more useful -- representations. Here we survey this rapidly developing area with special emphasis on recent progress. We consider some of the fundamental questions that have been driving research in this area. Specifically, what makes one representation better than another? Given an example, how should we compute its representation, i.e., perform feature extraction? Also, what are appropriate objectives for learning good representations?
\vsD
\section{Why should we care about learning representations?}
\label{sec:motivation}
\vsA
Representation learning has become a field in itself in the machine learning community, with regular workshops at the leading conferences such as NIPS and ICML, and a new conference dedicated to it, ICLR\footnote{International Conference on Learning Representations}, sometimes under the header of {\em Deep Learning} or {\em Feature Learning}. Although depth is an important part of the story, many other priors are interesting and can be conveniently captured when the problem is cast as one of learning a representation, as discussed in the next section. The rapid increase in scientific activity on representation learning has been accompanied and nourished by a remarkable string of empirical successes both in academia and in industry. Below, we briefly highlight some of these high points.
\vspace*{1mm}
\noindent{\bf Speech Recognition and Signal Processing}
\vspace*{1mm}
Speech was one of the early applications of neural networks, in particular convolutional (or time-delay) neural networks\footnote{See~\citet{Bengio-ijprai93} for a review of early work in this area.}. The recent revival of interest in neural networks, deep learning, and representation learning has had a strong impact in the area of speech recognition, with breakthrough results~\citep{dahl2010phonerec-small,Deng-2010,Seide2011,Mohamed+Dahl+Hinton-2012,Dahl2012,Hinton-et-al-2012} obtained by several academics as well as researchers at industrial labs bringing these algorithms to a larger scale and into products. For example, in 2012 Microsoft released a new version of its MAVIS (Microsoft Audio Video Indexing Service) speech system based on deep learning~\citep{Seide2011}. The authors reduced the word error rate on four major benchmarks by about 30\% (e.g. from 27.4\% to 18.5\% on RT03S) compared to state-of-the-art models based on Gaussian mixtures for the acoustic modeling and trained on the same amount of data (309 hours of speech).
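To make the notion of {\em relative} error reduction used here and in what follows concrete, the RT03S numbers above correspond to
\[
\frac{27.4 - 18.5}{27.4} \approx 32\%,
\]
i.e., roughly the ``about 30\%'' quoted above, while the absolute reduction is 8.9 percentage points.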
The relative improvement in error rate obtained by \citet{Dahl2012} on a smaller large-vocabulary speech recognition benchmark (Bing mobile business search dataset, with 40 hours of speech) is between 16\% and 23\%. Representation-learning algorithms have also been applied to music, substantially beating the state-of-the-art in polyphonic transcription~\citep{Boulanger+al-ICML2012-small}, with relative error improvement between 5\% and 30\% on a standard benchmark of 4 datasets. Deep learning also helped to win MIREX (Music Information Retrieval) competitions, e.g. in 2011 on audio tagging~\citep{Hamel-et-al-ISMIR2011-small}.
\vspace*{1mm}
\noindent{\bf Object Recognition}
\vspace*{1mm}
The beginnings of deep learning in 2006 focused on the MNIST digit image classification problem~\citep{Hinton06,Bengio-nips-2006-small}, breaking the supremacy of SVMs (1.4\% error) on this dataset\footnote{For the knowledge-free version of the task, where no image-specific prior is used, such as image deformations or convolutions.}. The latest records are still held by deep networks: \citet{Ciresan-2012} currently claims the title of state-of-the-art for the unconstrained version of the task (e.g., using a convolutional architecture), with 0.27\% error, and~\citet{Dauphin-et-al-NIPS2011-small} is state-of-the-art for the knowledge-free version of MNIST, with 0.81\% error. In the last few years, deep learning has moved from digits to object recognition in natural images, and the latest breakthrough has been achieved on the ImageNet dataset\footnote{The 1000-class ImageNet benchmark, whose results are detailed here:\\ {\tt\scriptsize http://www.image-net.org/challenges/LSVRC/2012/results.html}}, bringing down the state-of-the-art error rate from 26.1\% to 15.3\%~\citep{Krizhevsky-2012-small}.
\vspace*{1mm}
\noindent{\bf Natural Language Processing}
\vspace*{1mm}
Besides speech recognition, there are many other Natural Language Processing (NLP) applications of representation learning. {\em Distributed representations} for symbolic data were introduced by~\citet{Hinton86b-small}, and first developed in the context of statistical language modeling by~\citet{Bengio-nnlm2003-small} in so-called {\em neural net language models}~\citep{Bengio-scholarpedia-2007-small}. They are all based on learning a distributed representation for each word, called a {\em word embedding}. Adding a convolutional architecture, \citet{collobert:2011b} developed the SENNA system\footnote{Downloadable from {\tt http://ml.nec-labs.com/senna/}.} that shares representations across the tasks of language modeling, part-of-speech tagging, chunking, named entity recognition, semantic role labeling and syntactic parsing. SENNA approaches or surpasses the state-of-the-art on these tasks but is simpler and much faster than traditional predictors. Learning word embeddings can be combined with learning image representations in a way that allows text and images to be associated. This approach has been used successfully to build Google's image search, exploiting huge quantities of data to map images and queries into the same space~\citep{Weston+Bengio+Usunier-2010}, and it has recently been extended to deeper multi-modal representations~\citep{Srivastava+Salakhutdinov-NIPS2012-small}.
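To make the word-embedding idea concrete, the following minimal sketch (in Python/NumPy; the vocabulary, embedding dimension and random initialization are purely illustrative, and it is not taken from SENNA or any of the systems cited above) shows how a table of learned vectors maps a window of discrete words to a real-valued input for a predictor of, e.g., the next word; in the cited models this table is trained jointly with the rest of the network.
{\small
\begin{verbatim}
import numpy as np

# Minimal illustration of a word embedding: each vocabulary
# word indexes a row of a matrix of learned parameters, and a
# context window is represented by concatenating the
# embeddings of its words.
rng = np.random.RandomState(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
d = 50                                     # embedding dimension
W_embed = 0.01 * rng.randn(len(vocab), d)  # one vector per word

def embed_context(words):
    """Concatenate the embeddings of a window of words."""
    return np.concatenate([W_embed[vocab[w]] for w in words])

x = embed_context(["the", "cat", "sat"])   # input to a predictor
print(x.shape)                             # (150,) = 3 words x 50
\end{verbatim}
}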
The neural net language model was also improved by adding recurrence to the hidden layers~\citep{Mikolov-Interspeech-2011-small}, allowing it to beat the state-of-the-art (smoothed n-gram models) not only in terms of perplexity (the exponential of the average negative log-likelihood of predicting the right next word, going down from 140 to 102) but also in terms of word error rate in speech recognition (since the language model is an important component of a speech recognition system), decreasing it from 17.2\% (KN5 baseline) or 16.9\% (discriminative language model) to 14.4\% on the Wall Street Journal benchmark task. Similar models have been applied in statistical machine translation~\citep{Schwenk-2012,Le-et-al-2013-small}, improving perplexity and BLEU scores. Recursive auto-encoders (which generalize recurrent networks) have also been used to beat the state-of-the-art in full sentence paraphrase detection~\citep{Socher+al-NIPS2011}, almost doubling the F1 score. Representation learning can also be used to perform word sense disambiguation~\citep{Antoine-al-2012-small}, raising the accuracy from 67.8\% to 70.2\% on the subset of Senseval-3 where the system could be applied (with subject-verb-object sentences). Finally, it has also been successfully used to surpass the state-of-the-art in sentiment analysis~\citep{Glorot+al-ICML-2011-small,Socher+al-EMNLP2011-small}.
\vspace*{1mm}
\noindent{\bf Multi-Task and Transfer Learning, Domain Adaptation}
\vspace*{1mm}
Transfer learning is the ability of a learning algorithm to exploit commonalities between different learning tasks in order to share statistical strength, and {\em transfer knowledge} across tasks. As discussed below, we hypothesize that representation learning algorithms have an advantage for such tasks because they learn representations that capture underlying factors, a subset of which may be relevant for each particular task, as illustrated in Figure~\ref{fig:multi-task}. This hypothesis seems confirmed by a number of empirical results showing the strengths of representation learning algorithms in transfer learning scenarios.
\begin{figure}[h]
\vsC
\centerline{\includegraphics[width=0.6\linewidth]{multi-task.pdf}}
\vsC
\caption{\small Illustration of representation-learning discovering explanatory factors (middle hidden layer, in red), some explaining the input (semi-supervised setting), and some explaining the target for each task. Because these subsets overlap, sharing of statistical strength helps generalization.}
\label{fig:multi-task}
\vsC
\end{figure}
Most impressive are the two transfer learning challenges held in 2011 and won by representation learning algorithms. First, the Transfer Learning Challenge, presented at an ICML 2011 workshop of the same name, was won using unsupervised layer-wise pre-training~\citep{UTLC+DL+tutorial-2011-small,UTLC+LISA-2011-small}. A second Transfer Learning Challenge was held the same year and won by~\citet{Goodfellow+all-NIPS2011}. Results were presented at NIPS 2011's Challenges in Learning Hierarchical Models Workshop. In the related {\em domain adaptation} setup, the target remains the same but the input distribution changes~\citep{Glorot+al-ICML-2011-small,Chen-icml2012}.
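The kind of sharing depicted in Figure~\ref{fig:multi-task} can be sketched as follows: a single learned representation feeds several task-specific predictors, so that the parameters producing the shared factors receive training signal from all tasks. The sketch below (Python/NumPy) is purely illustrative; the layer sizes, the two tasks and the single hidden layer are hypothetical, and the cited systems are of course much larger and trained by gradient-based optimization of a joint objective.
{\small
\begin{verbatim}
import numpy as np

# Illustrative multi-task architecture: a shared representation
# h = f(x) with one small task-specific output layer per task.
rng = np.random.RandomState(0)
d_x, d_h = 100, 20                    # input / shared rep. sizes

W_shared = 0.01 * rng.randn(d_h, d_x) # shared across all tasks
b_shared = np.zeros(d_h)
heads = {"task_A": 0.01 * rng.randn(5, d_h),  # 5-way classifier
         "task_B": 0.01 * rng.randn(3, d_h)}  # 3-way classifier

def representation(x):
    """Shared non-linear feature extractor."""
    return np.tanh(W_shared @ x + b_shared)

def predict(x, task):
    """Task-specific softmax read-out on the shared features."""
    scores = heads[task] @ representation(x)
    return np.exp(scores) / np.exp(scores).sum()

x = rng.randn(d_x)
print(predict(x, "task_A").shape, predict(x, "task_B").shape)
\end{verbatim}
}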
In the {\em multi-task learning} setup, representation learning has also been found advantageous~\citep{Krizhevsky-2012-small,collobert:2011b}, because of shared factors across tasks.
\vsD
\section{What makes a representation good?}
\label{sec:whatmakesitgood}
\vsA
\subsection{Priors for Representation Learning in AI}
\label{sec:priors}
\vsA
In~\citet{Bengio+Lecun-chapter2007}, one of us introduced the notion of AI-tasks, which are challenging for current machine learning algorithms and involve complex but highly structured dependencies. One reason why explicitly dealing with representations is interesting is that they provide a convenient way to express many general priors about the world around us, i.e., priors that are not task-specific but would be likely to be useful for a learning machine to solve AI-tasks. Examples of such general-purpose priors are the following:\\
$\bullet$ {\bf Smoothness}: assumes that the function to be learned $f$ is such that $x\approx y$ generally implies $f(x)\approx f(y)$. This most basic prior is present in most machine learning, but is insufficient to get around the curse of dimensionality; see Section \ref{sec:smoothness}.\\
$\bullet$ {\bf Multiple explanatory factors}: the data generating distribution is generated by different underlying factors, and for the most part what one learns about one factor generalizes in many configurations of the other factors. The objective of recovering or at least disentangling these underlying factors of variation is discussed in Section~\ref{sec:disentangling}. This assumption is behind the idea of {\bf distributed representations}, discussed in Section \ref{sec:distributed} below.\\
$\bullet$ {\bf A hierarchical organization of explanatory factors}: the concepts that are useful for describing the world around us can be defined in terms of other concepts, in a hierarchy, with more {\bf abstract} concepts higher in the hierarchy, defined in terms of less abstract ones. This assumption is exploited with {\bf deep representations}, elaborated in Section \ref{sec:depth} below.\\
$\bullet$ {\bf Semi-supervised learning}: with inputs $X$ and target $Y$ to predict, a subset of the factors explaining $X$'s distribution explains much of $Y$, given $X$. Hence representations that are useful for $P(X)$ tend to be useful when learning $P(Y|X)$, allowing sharing of statistical strength between the unsupervised and supervised learning tasks; see Section~\ref{sec:stacking}.\\
$\bullet$ {\bf Shared factors across tasks}: with many $Y$'s of interest or many learning tasks in general, tasks (e.g., the corresponding $P(Y|X,{\rm task})$) are explained by factors that are shared with other tasks, allowing sharing of statistical strength across tasks, as discussed in the previous section (Multi-Task and Transfer Learning, Domain Adaptation).\\
$\bullet$ {\bf Manifolds}: probability mass concentrates near regions that have a much smaller dimensionality than the original space where the data lives. This is explicitly exploited in some of the auto-encoder algorithms and other manifold-inspired algorithms described respectively in Sections \ref{sec:ae} and \ref{sec:manifold}.\\
$\bullet$ {\bf Natural clustering}: different values of categorical variables such as object classes are associated with separate manifolds.
More precisely, the local variations on the manifold tend to preserve the value of a category, and a linear interpolation between examples of different classes in general involves going through a low density region, i.e., $P(X|Y=i)$ for different $i$ tend to be well separated and not overlap much. For example, this is exploited in the Manifold Tangent Classifier discussed in Section~\ref{sec:leveraging-manifold}. This hypothesis is consistent with the idea that humans have {\em named} categories and classes because of such statistical structure (discovered by their brain and propagated by their culture), and machine learning tasks often involve predicting such categorical variables.\\
$\bullet$ {\bf Temporal and spatial coherence}: consecutive (from a sequence) or spatially nearby observations tend to be associated with the same value of relevant categorical concepts, or result in a small move on the surface of the high-density manifold. More generally, different factors change at different temporal and spatial scales, and many categorical concepts of interest change slowly. When attempting to capture such categorical variables, this prior can be enforced by making the associated representations slowly changing, i.e., penalizing changes in values over time or space. This prior was introduced in~\cite{Becker92} and is discussed in Section~\ref{sec:slowness}.\\
$\bullet$ {\bf Sparsity}: for any given observation $x$, only a small fraction of the possible factors are relevant. In terms of representation, this can be expressed by features that are often zero (as initially proposed by~\citet{Olshausen+Field-1996}), or by the fact that most of the extracted features are {\em insensitive} to small variations of $x$. This can be achieved with certain forms of priors on latent variables (peaked at 0), or by using a non-linearity whose value is often flat at 0 (i.e., 0 and with a 0 derivative), or simply by penalizing the magnitude of the Jacobian matrix (of derivatives) of the function mapping input to representation. This is discussed in Sections~\ref{sec:sparse-coding} and~\ref{sec:ae}.\\
$\bullet$ {\bf Simplicity of Factor Dependencies}: in good high-level representations, the factors are related to each other through simple, typically linear dependencies. This can be seen in many laws of physics, and is assumed when plugging a linear predictor on top of a learned representation.
We can view many of the above priors as ways to help the learner discover and {\bf disentangle} some of the underlying (and a priori unknown) factors of variation that the data may reveal. This idea is pursued further in Sections~\ref{sec:disentangling} and~\ref{sec:disentangling-algorithms}.
\vsD
\subsection{Smoothness and the Curse of Dimensionality}
\label{sec:smoothness}
\vsA
For AI-tasks, such as vision and NLP, it seems hopeless to rely only on simple parametric models (such as linear models) because they cannot capture enough of the complexity of interest unless provided with the appropriate feature space. Conversely, machine learning researchers have sought flexibility in {\em local}\footnote{{\em Local} in the sense that the value of the learned function at $x$ depends mostly on training examples $x^{(t)}$'s close to $x$.} {\em non-parametric} learners such as kernel machines with a fixed generic local-response kernel (such as the Gaussian kernel).
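For concreteness, such a learner typically predicts through a weighted combination of kernel values centered on the training examples, e.g.,
\[
f(x) = b + \sum_t \alpha_t K\left(x, x^{(t)}\right), \qquad K\left(x, x^{(t)}\right) = e^{-\frac{\|x - x^{(t)}\|^2}{2\sigma^2}},
\]
so that the prediction at $x$ is essentially an interpolation of the training examples $x^{(t)}$ falling within a $\sigma$-sized neighborhood of $x$.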
Unfortunately, as argued at length by \citet{Bengio+Monperrus-2005-short,Bengio-localfailure-NIPS-2006-small,Bengio+Lecun-chapter2007,Bengio-2009,Bengio-decision-trees10}, most of these algorithms only exploit the principle of {\em local generalization}, i.e., the assumption that the target function (to be learned) is smooth enough, so they rely on examples to {\em explicitly map out the wrinkles of the target function}. Generalization is mostly achieved by a form of local interpolation between neighboring training examples. Although smoothness can be a useful assumption, it is insufficient to deal with the {\em curse of dimensionality}, because the number of such wrinkles (ups and downs of the target function) may grow exponentially with the number of relevant interacting factors, when the data are represented in raw input space. We advocate learning algorithms that are flexible and non-parametric\footnote{We understand {\em non-parametric} as including all learning algorithms whose capacity can be increased appropriately as the amount of data and its complexity demand it, e.g. including mixture models and neural networks where the number of parameters is a data-selected hyper-parameter.} but do not rely exclusively on the smoothness assumption. Instead, we propose to incorporate generic priors such as those enumerated above into representation-learning algorithms. Smoothness-based learners (such as kernel machines) and linear models can still be useful on top of such learned representations. In fact, the combination of learning a representation and a kernel machine is equivalent to {\em learning the kernel}, i.e., the feature space. Kernel machines are useful, but they depend on a prior definition of a suitable similarity metric, or a feature space in which naive similarity metrics suffice. We would like to use the data, along with very generic priors, to discover those features, or equivalently, a similarity function.
\vsD
\subsection{Distributed representations}
\label{sec:distributed}
\vsA
Good representations are {\em expressive}, meaning that a reasonably-sized learned representation can capture a huge number of possible input configurations. A simple counting argument helps us to assess the expressiveness of a model producing a representation: how many parameters does it require compared to the number of input regions (or configurations) it can distinguish? Learners of one-hot representations, such as traditional clustering algorithms, Gaussian mixtures, nearest-neighbor algorithms, decision trees, or Gaussian SVMs, all require $O(N)$ parameters (and/or $O(N)$ examples) to distinguish $O(N)$ input regions. One could naively believe that one cannot do better. However, RBMs, sparse coding, auto-encoders or multi-layer neural networks can all represent up to $O(2^k)$ input regions using only $O(N)$ parameters (with $k$ the number of non-zero elements in a sparse representation, and $k=N$ in non-sparse RBMs and other dense representations). These are all {\em distributed}\footnote{Distributed representations: where $k$ out of $N$ representation elements or feature values can be independently varied, e.g., they are not mutually exclusive. Each concept is represented by having $k$ features being turned on or active, while each feature is involved in representing many concepts.} or sparse\footnote{Sparse representations: distributed representations where only a few of the elements can be varied at a time, i.e., $k